Digital Library [ Search Results ]
Overhead Analyses of Cache Replacement Policies and Region Mapping Replacement Policy
http://doi.org/10.5626/JOK.2024.51.10.849
Caching has been widely used to improve performance in systems that combine fast and slow devices. Many cache replacement policies have been studied, but these policies often come with computation and memory overheads. Unfortunately, many studies do not take these overheads seriously and evaluate cache replacement policies solely by cache hit rate. In modern computer systems, however, cache sizes keep growing, making these overheads increasingly significant. To evaluate cache replacement policies more comprehensively, we consider both overheads and hit rates. In this study, we analyze the memory and computational overheads of popular cache replacement policies such as LRU, CLOCK, 2Q, ARC, and RAND. Additionally, we propose the Region Mapping (RM) policy, which has low memory and computational overheads. Furthermore, we introduce the RM2 policy, which improves hit rates by separating hot and cold data. Our experimental results show that the hit rates of the RM and RM2 policies are competitive with state-of-the-art policies. Moreover, policies with low memory overheads can reduce overall data access time by caching more data within a given cache size.
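To make the kind of overhead being compared concrete, the following is a minimal, illustrative Python sketch of two of the well-known policies the abstract names, LRU and CLOCK (not the paper's RM/RM2, whose internals are not described here). The class names and structure are assumptions for illustration only: LRU keeps every entry on a recency list that is updated on every hit, while CLOCK keeps just one reference bit per slot.

```python
from collections import OrderedDict

class LRUCache:
    """Classic LRU sketch: an ordered map keeps every entry on a recency
    list, so each cached block carries list-pointer metadata and every
    hit pays for a move-to-front update."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def access(self, key, value=None):
        if key in self.entries:
            self.entries.move_to_end(key)         # hit: update recency metadata
            return self.entries[key]
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)      # evict the least-recently-used entry
        self.entries[key] = value
        return None

class ClockCache:
    """CLOCK sketch: a fixed circular buffer plus one reference bit per
    slot; hits only set a bit, and no recency list is maintained."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = [None] * capacity            # (key, value) per slot
        self.ref = [False] * capacity             # one reference bit per slot
        self.index = {}                           # key -> slot number
        self.hand = 0

    def access(self, key, value=None):
        if key in self.index:
            self.ref[self.index[key]] = True      # hit: just set the reference bit
            return self.slots[self.index[key]][1]
        # miss: advance the hand until a slot with a cleared reference bit is found
        while self.ref[self.hand]:
            self.ref[self.hand] = False
            self.hand = (self.hand + 1) % self.capacity
        victim = self.slots[self.hand]
        if victim is not None:
            del self.index[victim[0]]
        self.slots[self.hand] = (key, value)
        self.index[key] = self.hand
        self.ref[self.hand] = True
        self.hand = (self.hand + 1) % self.capacity
        return None
```

Since the per-entry metadata (list pointers versus a single bit) is multiplied by the number of cached blocks, these differences grow with cache size, which is exactly why the abstract argues that overheads can no longer be ignored.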
L2LRU: Learning-based Page Movement Policy for LRU Page Replacement Policy
http://doi.org/10.5626/JOK.2021.48.9.981
The LRU (least-recently used) page replacement policy improves the cache hit ratio by moving a page to the head of the list whenever it is accessed. However, LRU can stall the system because each page movement requires a lock-unlock pair. In this paper, we propose a new page replacement policy, called L2LRU (Learning-based Lock-free LRU), which decides whether or not to move a page by learning the page's reuse distance with deep-learning techniques. Unlike LRU, L2LRU moves a page to the position with a high probability of access in the near future. For evaluation, we implemented L2LRU in a trace-driven simulator and used the Microsoft Research Cambridge traces as input. The results confirm that L2LRU reduces the number of lock-unlock operations by up to 91% compared to the traditional LRU policy.
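To illustrate the cost L2LRU targets, here is a minimal, hypothetical Python sketch of the conventional locked LRU list the abstract uses as its baseline. It is not the authors' implementation: the `should_move` predicate is an assumed stand-in for the learned reuse-distance predictor, included only to show that skipping a move on a hit also skips a lock/unlock pair.

```python
import threading
from collections import OrderedDict

class LockedLRU:
    """Conventional LRU as described in the abstract: every hit moves the
    page to the head of the recency list, and that move is serialized
    with a lock, producing the lock/unlock traffic L2LRU aims to reduce."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()
        self.lock = threading.Lock()
        self.lock_ops = 0                       # count lock acquisitions for comparison

    def access(self, page_id, should_move=lambda pid: True):
        # `should_move` is a hypothetical placeholder for a learned predictor
        # (e.g. one trained on reuse distance); the paper's model is not
        # reproduced here.
        if page_id in self.pages:
            if should_move(page_id):
                with self.lock:                 # lock/unlock only when the list changes
                    self.lock_ops += 1
                    self.pages.move_to_end(page_id)
            return True                         # hit
        with self.lock:
            self.lock_ops += 1
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)  # evict the least-recently-used page
            self.pages[page_id] = True
        return False                            # miss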