Introduction
In computing, memory is any physical device capable of storing data or information, whether temporarily or permanently. Computer memory is commonly classified as volatile or nonvolatile. Volatile memory loses its contents when the computing device loses power; Random Access Memory (RAM) is an example. Nonvolatile memory, on the other hand, keeps its contents when power is lost; Read Only Memory (ROM) is an example. RAM serves as the primary storage, while devices such as magnetic disks serve as auxiliary or secondary storage. Computers are built to operate at high speeds, and RAM was designed specifically to support such speeds. The computer system also contains other storage devices that offer slower access at higher capacities. Through virtual memory, the contents of RAM can be transferred to secondary storage devices. Together, these memory devices provide the storage the computer system needs.
The memory devices of a computer system are arranged in a memory hierarchy, which classifies them by cost and performance. From the fastest and most expensive to the slowest and cheapest, the hierarchy comprises registers, cache, main memory, secondary storage, and off-line storage. The operation of these memory devices is governed by the computer architecture, which defines the rules and methods that determine the functionality, organization, and implementation of the computer system. Computer architecture encompasses the instruction set, the logic design, the microarchitecture, the architectural design, and the implementation of any given design. The design of the computer system also takes into account the classifications of Flynn's taxonomy, such as SISD, SIMD, MIMD, SPMD, and vector architectures, which determine how instruction and data streams are operated on by the processor (Patterson, Hennessy & Alexander, 2015).
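The difference between the SISD and SIMD execution models mentioned above can be sketched in plain Python. This is only a software analogy under an assumed vector width of four; real SIMD hardware applies one vector instruction to several data elements at once, which the chunked loop in `add_simd` merely imitates.

```python
LANES = 4  # assumed vector width: elements processed per "vector instruction"

def add_sisd(a, b):
    """SISD style: a single instruction stream handles one data element
    per step, like a scalar processor."""
    result = []
    for x, y in zip(a, b):          # one element pair per iteration
        result.append(x + y)
    return result

def add_simd(a, b):
    """SIMD style (conceptually): each step stands in for one vector
    instruction operating on LANES elements at once."""
    result = []
    for i in range(0, len(a), LANES):
        # this whole chunk would be a single hardware vector add
        result.extend(x + y for x, y in zip(a[i:i + LANES], b[i:i + LANES]))
    return result
```

Both functions compute the same sums; the point is that the SIMD version needs far fewer instruction issues for the same amount of data, which is why data-parallel workloads favor it.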
The processor can either execute a single thread or employ hardware multithreading, which uses multiple threads to accomplish a given task. Multithreading can be fine-grained, switching between threads on every instruction, or coarse-grained, switching only on costly stalls such as cache misses (Wolf, 2016). Multithreading can also be performed simultaneously with multiple-issue execution. Simultaneous multithreading lowers the cost of multithreading by exploiting the resources already needed for multiple issue in a dynamically scheduled microarchitecture (Ruokamo, 2018).
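The idea of multiple threads cooperating on one task can be illustrated at the software level with Python's standard `threading` module. This is only an analogy: hardware multithreading interleaves threads inside the processor itself, whereas this sketch splits a summation across software threads.

```python
import threading

def partial_sum(data, start, end, results, idx):
    """Each thread sums its own slice of the data."""
    results[idx] = sum(data[start:end])

def threaded_sum(data, n_threads=4):
    """Split the work across n_threads threads, then combine the results."""
    results = [0] * n_threads
    chunk = (len(data) + n_threads - 1) // n_threads
    threads = []
    for i in range(n_threads):
        t = threading.Thread(target=partial_sum,
                             args=(data, i * chunk, (i + 1) * chunk, results, i))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()                    # wait for every thread before combining
    return sum(results)
```

For example, `threaded_sum(list(range(100)))` returns the same value as `sum(range(100))`, with the work divided among four threads.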
Hardware multithreading has greatly improved the efficiency of processors. However, delivering on the performance potential of Moore's Law, which predicted that transistor counts would keep doubling at regular intervals for the foreseeable future, has been a major challenge (Chaudhuri, 2008). Once again, computer designers have been tasked with solving an existing problem: rewriting old programs to run on modern parallel hardware platforms (Shipman, 2016). Various answers have been proposed, but the one that stands out is providing a single physical address space shared by all processors, which supports both data access by programs and their parallel execution. Implementing this approach makes the variables of any given program available whenever they are required. Alternatively, the problem can be addressed by giving each processor a separate address space, which requires explicit sharing of data.
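The contrast between the two designs above can be sketched in software. In the shared-address-space model, all workers read and write the same variable; in the separate-address-space model, each worker keeps its data private and shares it explicitly by sending a message. The sketch below uses threads and a `queue.Queue` as stand-ins for processors and an interconnect, which is an assumption for illustration only.

```python
import threading
import queue

def shared_address_space():
    """Model (1): one address space; all workers update the same variable."""
    shared = {"total": 0}               # visible to every worker
    lock = threading.Lock()
    def worker(n):
        with lock:                      # coordinate access to shared memory
            shared["total"] += n
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(1, 5)]
    for t in threads: t.start()
    for t in threads: t.join()
    return shared["total"]

def separate_address_spaces():
    """Model (2): private data; sharing happens only through explicit messages."""
    mailbox = queue.Queue()             # stands in for the interconnect
    def worker(n):
        mailbox.put(n)                  # each "processor" sends its private result
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(1, 5)]
    for t in threads: t.start()
    for t in threads: t.join()
    return sum(mailbox.get() for _ in range(4))
```

Both functions compute the same total; the design trade-off is that shared memory needs synchronization (the lock), while message passing needs explicit communication.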
The computer memory architecture defines the methods used to implement computer data storage, combining the fastest, most durable, most reliable, and least expensive ways of storing and retrieving information. In the design of a memory system, one of these requirements is often compromised to improve the others. The memory architecture also determines how binary values are converted into the electrical signals stored within memory cells; thus, the structure of the memory cell is determined by the memory architecture. Every type of memory has specific properties that make it unique and suited to particular functions. For example, dynamic memory, used as primary data storage, offers fast access but must be refreshed periodically to maintain the stored data. Flash memory, by contrast, allows long-term storage but consists of memory cells that wear out when frequently rewritten. The data bus, which transports data within the computer system, is designed in line with the mode of data access: data on the bus is accessed serially or in parallel (Irabashetti, Gawali Anjali, & Betkar Akshay, 2014). In critical systems, the memory is also designed to provide parity-based error detection or correction.
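The parity scheme mentioned above can be shown with a minimal even-parity sketch: an extra check bit is appended so that the total number of 1s in the stored word is even, and any single flipped bit makes the count odd and is therefore detected (though not corrected).

```python
def parity_bit(bits):
    """Even parity: the check bit makes the total number of 1s even."""
    return sum(bits) % 2

def encode(bits):
    """Append the parity bit to the data bits before storing them."""
    return bits + [parity_bit(bits)]

def check(word):
    """True if the stored word passes the parity check
    (no single-bit error detected)."""
    return sum(word) % 2 == 0
```

For example, `encode([1, 0, 1])` yields `[1, 0, 1, 0]`; flipping any one bit of that word makes `check` fail. Detecting two simultaneous flips, or correcting an error, requires stronger codes such as ECC.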
Computer systems make use of different types of memories, including semiconductor memory, Universal Serial Bus (USB) sticks, magnetic disks, DVDs, and many others. These memories are primarily used to hold data and programs, but they differ in their uses and characteristics. Any given computer architecture typically involves three types of memory: register memory, main memory, and disk memory.
Register memories are found within the Central Processing Unit (CPU) of a computer system. Registers are small in size and limited in number, and they typically come in 32- and 64-bit widths. Registers store contents that can be quickly accessed to be 'read' or 'written.' Register memory is faster than the main memory of the computer: it is an order of magnitude faster than main memory or disk memory. The CPU contains different kinds of registers, such as general-purpose registers and special-purpose registers. General-purpose registers are for general use by the programmer; a memory architecture typically provides between 16 and 64 of them. Special-purpose registers, on the other hand, are used for specific, non-programmable tasks within the CPU, or are accessed through special instructions by the programmer (Rao & Sundaresan, 2016). Examples of special-purpose registers include the Program Counter (PC), the Instruction Register (IR), the Condition Code (Flags/Status) register, the ALU input and output registers, and the Stack Pointer (SP). Register sizes vary with the type of register, and the word size of a given memory architecture is determined by its general-purpose registers. Registers are referenced directly by specific instructions within computer programs; in the assembly language used to program the CPU, they are specified with identifiers such as R0, R1, R7, SP, and PC. Nonetheless, registers are volatile memories and lose their contents when power is lost, so they cannot be used for long-term storage.
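How general-purpose registers (R0-R3) and the special-purpose Program Counter interact can be sketched with a toy register machine. The instruction names and encoding here are invented for illustration and do not correspond to any real instruction set.

```python
def run(program):
    """Execute a toy program against four general-purpose registers.

    Each instruction is a tuple; the fetched tuple plays the role of the
    Instruction Register, and pc plays the role of the Program Counter.
    """
    regs = {"R0": 0, "R1": 0, "R2": 0, "R3": 0}
    pc = 0                                  # special-purpose: next instruction
    while pc < len(program):
        op, *args = program[pc]             # fetch: instruction enters the "IR"
        if op == "LOAD":                    # LOAD Rd, constant
            regs[args[0]] = args[1]
        elif op == "ADD":                   # ADD Rd, Rs: Rd = Rd + Rs
            regs[args[0]] += regs[args[1]]
        pc += 1                             # PC advances to the next instruction
    return regs

# Usage: R0 = 2; R1 = 3; R0 = R0 + R1  ->  R0 holds 5
# run([("LOAD", "R0", 2), ("LOAD", "R1", 3), ("ADD", "R0", "R1")])
```

The sketch also shows why registers are volatile in spirit: the whole register file exists only while `run` executes, just as real registers hold state only while the CPU has power.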
References
Chaudhuri, P. P. (2008). Computer organization and design. India: Prentice-Hall.
Irabashetti, P. S., Gawali Anjali, B., & Betkar Akshay, S. (2014). Architecture of parallel processing in computer organization. American Journal of Computer Science and Engineering, 1(2), 12-17.
Patterson, D. A., Hennessy, J. L., & Alexander, P. (2015). Computer organization and design: The hardware/software interface. Amsterdam: Morgan Kaufmann.
Rao, J. N., & Sundaresan, M. (2016). U.S. Patent No. 9,514,559. Washington, DC: U.S. Patent and Trademark Office.
Ruokamo, A. (2018). Parallel computing and parallel programming models: application in digital image processing on mobile systems and personal mobile devices.
Shipman, G. M. (2016). Programming models in HPC (No. LA-UR-16-26424). Los Alamos, NM: Los Alamos National Laboratory (LANL).
Wolf, J. (2016). Implementation of a backend to ISPC using HPX.
Essay on Computer Memory Architecture. (2022, May 22). Retrieved from https://proessays.net/essays/essay-on-computer-memory-architecture