FIFO and LRU Cache - Research Paper


Introduction

Cache memories are small, high-speed buffer memories, usually Static Random-Access Memory (SRAM), used in modern computer systems to hold temporarily those portions of the contents of primary memory, such as program code, that are currently in use (Smith 473). Cache memory compensates for the speed differential between main-memory access time and processor logic. Data located in the cache can be accessed faster than data stored in main memory; cache memories are typically five to ten times faster than main memory. Hence, a central processing unit (CPU) with a cache memory spends far less time waiting for instructions and data to be fetched or stored. The purpose of this paper is to discuss how First-In-First-Out (FIFO) and Least Recently Used (LRU) caches work as intermediaries between the CPU and the other parts of the system.


While the input/output (I/O) processor manages data transfers between auxiliary memory and main memory, cache organization is concerned with the transfer of information between main memory and the CPU (Smith 477). The basic idea of cache organization is that, by keeping the most frequently accessed information and instructions in the fast cache memory, the average memory access time approaches the access time of the cache. Although the cache is only a small fraction of the size of main memory, a large portion of memory requests can be served from it because of the locality-of-reference property of programs. The fundamental operation of the cache therefore follows this process: when the CPU needs to access memory, the cache is checked first. If the word is found in the cache, it is read from the fast memory. If it is not found, main memory is accessed, and the block of words containing the one just accessed is transferred from main memory into the cache. In this manner, information is moved into the cache so that future references to memory find the needed words in the fast cache memory.
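
As a rough illustration of this hit-and-miss sequence, the sketch below models a small direct-mapped cache in Python. The block size, number of lines, toy main memory, and the read_word function are illustrative assumptions, not a description of any particular hardware.

```python
# A minimal sketch of the basic lookup described above: a direct-mapped cache
# with 4-word blocks over a toy main memory (all sizes are assumptions).
BLOCK_SIZE = 4                   # words per cache line (block)
NUM_LINES = 8                    # number of lines in the cache

cache = {}                       # line index -> (tag, list of words)
main_memory = list(range(1024))  # toy main memory: the word at address a is a

def read_word(address):
    """Return the word at `address`, going to main memory only on a miss."""
    block_number = address // BLOCK_SIZE
    index = block_number % NUM_LINES      # which cache line to check
    tag = block_number // NUM_LINES       # identifies the block held in that line
    offset = address % BLOCK_SIZE

    line = cache.get(index)
    if line is not None and line[0] == tag:
        return line[1][offset]            # cache hit: served from fast memory

    # Cache miss: transfer the whole block containing the word into the cache,
    # so that nearby future references (locality) hit in the fast memory.
    start = block_number * BLOCK_SIZE
    block = main_memory[start:start + BLOCK_SIZE]
    cache[index] = (tag, block)
    return block[offset]

print(read_word(13))   # miss: the block holding word 13 is loaded
print(read_word(14))   # hit: same block, served directly from the cache
```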

Cache memory has several design aspects. The first, the cache fetch algorithm, decides when to bring information into the cache; this includes fetching information on demand or prefetching it before it is required. Second, cache placement algorithms determine the set in which a piece of information will be placed. The third aspect is the line size. According to Smith (477), the line is the fixed-size unit of information transferred between the cache and main memory; it is sometimes referred to as a block, and choosing its size is an essential part of memory system design. Fourth, replacement algorithms select the line to be discarded as close to optimally as possible; common replacement algorithms include FIFO and LRU (Almheidat 30). Finally, I/O is an additional source of references to information in memory: an output request stream must reference the most current values of the information transferred, and input data must be immediately reflected in all copies of the affected lines in memory.
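
Because FIFO and LRU are the two replacement algorithms this paper focuses on, the minimal sketch below contrasts them on the same reference string. The class names, three-line capacity, and trace are assumptions chosen for illustration: both policies evict only when the cache is full, but FIFO ignores re-use while LRU refreshes a line's position on every hit, so the two produce different hit counts here.

```python
from collections import OrderedDict

# A minimal sketch contrasting the two replacement algorithms named above.
# Both caches hold `capacity` lines; they differ only in which line is evicted.

class FIFOCache:
    """Evicts the line that has been resident longest, regardless of use."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()           # insertion order = arrival order

    def access(self, key):
        if key in self.lines:
            return True                      # hit: arrival order is NOT updated
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)   # evict the oldest arrival
        self.lines[key] = True
        return False                         # miss

class LRUCache:
    """Evicts the line that has gone unused for the longest time."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()           # order = recency of use

    def access(self, key):
        if key in self.lines:
            self.lines.move_to_end(key)      # a hit refreshes recency
            return True
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)   # evict the least recently used line
        self.lines[key] = True
        return False

# The same reference string behaves differently under the two policies.
trace = [1, 2, 3, 1, 4, 1, 5]
for cache in (FIFOCache(3), LRUCache(3)):
    hits = sum(cache.access(block) for block in trace)
    print(type(cache).__name__, "hits:", hits)   # FIFO: 1 hit, LRU: 2 hits
```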

Constraints on Cache Design

Optimizing the design of a cache memory is normally subject to four constraints (Smith 474). The first is increasing the hit ratio, the fraction of memory references that are found in the cache; maximizing the hit ratio maximizes the performance benefit obtained from caching. Second, decreasing the access time ensures that reading a particular word from the cache takes as little time as possible. Access time can be reduced, for example, by minimizing cache latency through a smaller cache or by increasing cache bandwidth (Smith 474). The third constraint is minimizing the delay due to a miss. Misses are largely unavoidable, but if the time taken to handle a miss is decreased, processor performance improves; this can be achieved by maximizing the hit ratio and by applying optimizations such as returning the critical word first. The final constraint is reducing the overhead of updating main memory and maintaining multi-cache consistency. Caches are part of a memory hierarchy, and the performance of one level influences the others, so lower overhead means less time spent coordinating with other memories and better overall system performance. Notably, all of these constraints must be met within suitable cost constraints (Smith 475).
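
To make the first and third constraints concrete, the short sketch below estimates average memory access time from the hit ratio under an assumed miss penalty. The 2 ns cache time and 20 ns penalty are illustrative assumptions, not figures from the cited sources.

```python
# A back-of-the-envelope model of average memory access time. The 2 ns cache
# time and 20 ns miss penalty are illustrative assumptions only.
def average_access_time(hit_ratio, cache_time_ns=2.0, miss_penalty_ns=20.0):
    """Every access pays the cache time; misses also pay the main-memory penalty."""
    miss_ratio = 1.0 - hit_ratio
    return cache_time_ns + miss_ratio * miss_penalty_ns

for hit_ratio in (0.80, 0.90, 0.95, 0.99):
    print(f"hit ratio {hit_ratio:.2f}: {average_access_time(hit_ratio):.1f} ns on average")
```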

Functions of a Good Cache

The first function of a cache memory is to store: the cache holds program instructions and data that are used repeatedly by the CPU, so that this information can be accessed quickly instead of being fetched from main memory each time. The second function is to get: a good cache returns data in response to a request, which is how the system reads the data held in the cache. The final function of a good cache is to speed up computer operations and processing; in particular, cache memory greatly improves overall processor performance by bridging the widening gap between processor and memory speeds.
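
As a software analogy for this store-and-get behaviour, the sketch below uses Python's standard functools.lru_cache decorator; the expensive_lookup function and its cost are hypothetical stand-ins for a slow main-memory access.

```python
# A software analogy for the store and get functions, using Python's standard
# functools.lru_cache. The expensive_lookup function below is hypothetical and
# merely stands in for a slow main-memory access.
from functools import lru_cache

@lru_cache(maxsize=128)
def expensive_lookup(key):
    # Simulates slow work whose result is worth keeping in the cache.
    return sum(i * i for i in range(100_000)) + key

expensive_lookup(7)                    # first call: computed, then stored ("store")
expensive_lookup(7)                    # second call: returned from the cache ("get")
print(expensive_lookup.cache_info())   # reports hits=1, misses=1
```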

Cache Design in Terms of Data Safety

Encryption is a technique used to protect the confidentiality of information (Liu and Lee 1). However, encrypted computations are prone to cache side-channel attacks that exploit the cache access mechanisms of processors. Many cryptographic operations rely heavily on memory accesses to look up substitution tables, and the addresses of those accesses depend on the secret key. Thus, if attackers learn the addresses of the memory accesses, they can infer secret key bits. The cache enables attackers to learn those addresses because the timing difference between cache hits and misses is large. Secure cache designs, however, can eliminate the root causes of cache side channels. An example is Newcache, which can enhance security, performance, and power efficiency simultaneously through dynamic memory-cache remapping and eviction randomization (Liu and Lee 2).
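
The toy model below illustrates, in simulation only, why the hit-versus-miss timing gap can reveal the looked-up table index. The latencies, table size, and flush step are assumptions made for illustration; this is not a description of Newcache or of any published attack.

```python
# A toy model (simulation only) of the timing channel described above: the
# victim's key-dependent table lookup leaves its line in a shared cache, and
# the attacker infers the index from which probe is fast. Latencies, table
# size, and the flush step are illustrative assumptions.
HIT_NS, MISS_NS = 2, 50          # assumed hit/miss latencies in nanoseconds
cached_lines = set()             # table entries currently resident in the cache

# Attacker flushes the 16-entry substitution table from the cache.
cached_lines.clear()

# Victim performs one key-dependent lookup; only that line becomes cached.
secret_index = 11
cached_lines.add(secret_index)

# Attacker probes every table entry and times each access.
timings = {i: (HIT_NS if i in cached_lines else MISS_NS) for i in range(16)}
guessed = min(timings, key=timings.get)
print("guessed index:", guessed)  # recovers 11 without ever reading the key
```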

Current Approaches to Cache Design

Split Cache Design

One current approach to cache design is to split the cache into two parts, one for data and one for instructions. The advantage of this approach is that it increases potential bandwidth by permitting data and instructions to be fetched simultaneously. In addition, it allows different structures and strategies to be used selectively within each cache.

In this design, instructions tend to be clustered spatially over short segments of sequential code followed by branches to code that is often nearby. In such a case, a prefetch strategy can be effective because it is likely to bring in code that will soon be required, while simplified control logic and reduced associativity keep instruction-cache overheads low. Data, conversely, exhibit a greater degree of temporal locality, which calls for a higher degree of associativity in the cache if numerous spatially distant data items are to be kept resident.
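
The sketch below illustrates, with made-up block numbers, why sequential prefetching suits the instruction stream: the hypothetical instruction cache also brings in the next block on a miss, while the data cache fetches only on demand.

```python
# A minimal sketch of split caches with different fetch strategies. The block
# numbers and the prefetch-one-block policy are illustrative assumptions.
icache, dcache = set(), set()       # separate caches for instructions and data

def fetch_instruction(block):
    hit = block in icache
    if not hit:
        icache.update({block, block + 1})   # demand block plus sequential prefetch
    return hit

def load_data(block):
    hit = block in dcache
    if not hit:
        dcache.add(block)                   # demand fetch only
    return hit

# Sequential code benefits from the prefetch: every second fetch hits.
print([fetch_instruction(b) for b in [0, 1, 2, 3]])   # [False, True, False, True]
print([load_data(b) for b in [0, 1, 2, 3]])           # [False, False, False, False]
```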

Virtual or Logical Cache Design

Caches are also characterized by the nature of the addresses they operate on; a virtual or logical cache stores data indexed by logical (virtual) addresses. A virtual cache must be flushed on every context switch, because the same virtual addresses then refer to a different region of real memory, so the data already in the cache is no longer valid. However, virtual caches are beneficial because they decrease the hit time: no address translation is required before the cache is accessed. Translating virtual addresses into physical addresses requires a hardware memory management unit (MMU), so the virtual cache is placed between the CPU and the MMU.
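
A toy sketch of the flush requirement follows; the page tables, addresses, and stored values are hypothetical and only illustrate why, without a flush on a context switch, the cache would return another process's stale data.

```python
# A toy sketch of the flush-on-context-switch issue described above. The page
# tables, addresses, and data values are made up for illustration.
virtual_cache = {}                  # virtual address -> cached word

page_table_A = {0x1000: 0x9000}     # process A: virtual page -> physical frame
page_table_B = {0x1000: 0x4000}     # process B maps the same virtual page elsewhere

physical_memory = {0x9000: "A's data", 0x4000: "B's data"}

def read(vaddr, page_table):
    if vaddr in virtual_cache:                 # hit: no MMU translation needed
        return virtual_cache[vaddr]
    paddr = page_table[vaddr]                  # miss: translate via the MMU, then fill
    virtual_cache[vaddr] = physical_memory[paddr]
    return virtual_cache[vaddr]

print(read(0x1000, page_table_A))   # "A's data" is cached under the virtual address

# Context switch to process B: without a flush, B would read A's stale data.
virtual_cache.clear()
print(read(0x1000, page_table_B))   # correctly returns "B's data" after the flush
```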

Conclusion

Cache memories are small, high-speed buffer memories used in modern computer systems to hold temporarily the portions of main memory that are currently in use. The basic idea of cache organization is that, by keeping the most frequently accessed information and instructions in the fast cache memory, the average memory access time approaches the access time of the cache. There are several approaches to cache design, such as the split cache and the virtual cache. Cache memory also has several design aspects, namely the cache fetch algorithm, cache placement algorithms, line size, replacement algorithms such as FIFO and LRU, and I/O. Optimizing the design of a cache memory is normally subject to four constraints, which must be met within suitable cost constraints. Finally, it is important to evaluate a cache design with regard to data safety, to ensure that attackers cannot access valuable information held in the cache.

Works Cited

Almheidat, Ahmad Nuraldin Faleh. Analysis of Cache Usability on Modern Real-Time Systems. MS thesis, 2013.

Liu, Fangfei, and Ruby B. Lee. "Security testing of a secure cache design." Proceedings of the 2nd International Workshop on Hardware and Architectural Support for Security and Privacy. ACM, 2013.

Smith, Alan Jay. "Cache memories." ACM Computing Surveys (CSUR) 14.3 (1982): 473-530.
