English Abstract:
Most traditional caching algorithms rely on simple statistics to make replacement decisions, leaving a significant performance gap relative to the offline optimal caching algorithm. With improvements in CPU performance, machine-learning-based caching strategies have emerged in recent years to narrow this gap. However, these studies typically assume that all requested objects are of uniform size, ignoring the wide range of object sizes seen in real-world workloads. Moreover, as network throughput continues to grow and transmission speed approaches its physical limit, delayed hits become an increasingly important factor, so the hit rate under delayed hits is no longer equivalent to that of traditional algorithms. In this paper, we study the caching problem with variable object sizes and delayed hits, and propose an algorithm named ARC-learning+. In summary, our contributions are: (1) For variable object sizes, we analyze their impact on cache performance and design a probabilistic admission policy for ARC-learning+ that takes object size into account. Experiments on real-world data sets show that the modified cache replacement strategy performs very close to the existing optimal caching algorithm. (2) For the delayed-hit scenario, the ranking function adopted by ARC-learning+ jointly considers the latency incurred by a missed request and the request's popularity. Experiments show that the algorithm with the modified ranking function outperforms other existing caching algorithms when popularity changes rapidly, and remains very close to the best delay performance of existing algorithms under other popularity dynamics.
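The abstract mentions two components of ARC-learning+: a size-aware probabilistic admission policy and a ranking function combining miss latency with popularity, without giving their exact forms. Below is a minimal illustrative sketch of what such components might look like; the function names, the exponential-decay admission form, the `SIZE_THRESHOLD` value, and the product-form ranking score are all assumptions for illustration, not the paper's actual definitions.

```python
import math
import random

# Hypothetical size threshold (in bytes); controls how quickly the
# admission probability decays as objects get larger.
SIZE_THRESHOLD = 50_000

def admit_probability(object_size: int, threshold: int = SIZE_THRESHOLD) -> float:
    """Admission probability that decays exponentially with object size,
    so small objects are almost always admitted and very large objects
    are rarely admitted (one common size-aware admission shape)."""
    return math.exp(-object_size / threshold)

def should_admit(object_size: int, rng=random.random) -> bool:
    """Flip a biased coin to decide whether to admit the object."""
    return rng() < admit_probability(object_size)

def rank_score(miss_latency: float, popularity: float) -> float:
    """Toy ranking score: objects whose misses are expensive and that
    are requested often are more valuable to keep in the cache."""
    return miss_latency * popularity
```

For example, an object of size 0 is always admitted (probability 1.0), while one of size `SIZE_THRESHOLD` is admitted with probability about 0.37; during eviction, the entry with the lowest `rank_score` would be replaced first.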