Q1. Difference between the message passing and shared memory
Message passing and shared memory are two contrasting communication models for parallel multicomputer architectures. Comparing the two kinds of architecture has historically been difficult because applications must be handcrafted for a specific architecture, which introduces many confounding sources of difference. Shared memory machines are currently the easier ones to program; in the future, programs should be written in a high-level language and compiled for the particular parallel target so that this discrepancy is eliminated.
The shared memory model has the flexibility that multiple workers can all operate on the same data. As a consequence, shared memory gives rise to many of the concurrency problems that are common in parallel programming. A message passing system, by contrast, has workers communicate only through messages. Messages keep everyone separated, so workers cannot modify each other's data.
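As a minimal sketch of the shared memory model (assuming Python's standard multiprocessing module and a toy counter workload, neither of which appears in the original text), several workers update the same value and must synchronise with a lock to avoid exactly the concurrency problems mentioned above:

```python
from multiprocessing import Process, Value, Lock

def add_many(counter, lock, n):
    # Every worker touches the same shared memory location,
    # so each increment must be protected by the lock.
    for _ in range(n):
        with lock:
            counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)   # an int living in shared memory
    lock = Lock()
    workers = [Process(target=add_many, args=(counter, lock, 10_000))
               for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(counter.value)      # 40000; without the lock the result is unpredictable
```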
As an analogy, imagine working on a project as a member of a team. In one model, the team is crowded around a table with all of its data and papers laid out. The only means of communication is to change something on the table. Everyone must take care not to work on the same piece of information at the same time, because doing so causes confusion and things get mixed up.
In the message passing model, each group member sits in their own chair with their own set of papers. The only way to communicate is to pass a paper to someone else in the group in the form of a message, and the recipient does what they want with it. Each member therefore only sees what is in front of them, and nobody has to worry that someone else will reach over and alter the original information while others are still using it. In short, shared memory permits multiple processes to read and write information at the same location, while message passing is the alternative way for processes to communicate: each process can send messages to the other processes.
The two mechanisms are similar in that both convey a message from a source to a recipient. In shared memory there is no delay in sharing information: if one process writes, another process can access the data instantly. With message passing there is some delay, although it is usually not significant (Mubarak, Carothers, Ross, & Carns, 2017). This small difference in communication delay leads to different behaviour. Both approaches serve to share information, and every participant is connected to every other. As an analogy, a spoken message may reach not only the intended receiver but also people who were not its target, and speaking loudly amounts to sending the information to many possible recipients at once. Message buffers are also limited in size: once the limit is reached, newer messages begin to override older ones. These models help explain why an individual cannot listen to numerous speakers at the same time.
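By contrast, a message passing version of the same idea (again a hypothetical sketch using Python's multiprocessing.Queue, not something from the original text) keeps each worker's data private and communicates only by sending messages, which is also where the small delay comes from:

```python
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    # The worker owns its data; the only communication is via messages.
    total = 0
    for _ in range(3):
        total += inbox.get()   # blocks until a message arrives (the "delay")
    outbox.put(total)

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    for value in (1, 2, 3):
        inbox.put(value)       # pass a "paper" to the other worker
    print(outbox.get())        # 6
    p.join()
```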
Q2. Why the Address Resolution Protocol (ARP) functionality should be on the fast path and not the slow path
The Address Resolution Protocol plays a central role in IP routing. ARP obtains the hardware address, known as the Media Access Control (MAC) address, of a host from its known IP address. ARP maintains a table in which MAC addresses are mapped to IP addresses, and it is present in all Cisco devices that run IP.
Packet processing takes place on two paths, the fast path and the slow path, within the Open Systems Interconnection (OSI) reference model. ARP belongs on the fast path because it is used for local transmissions between devices that are directly connected (Siewiorek & Swarz, 2017), whereas the slow path handles indirectly connected devices in an internetwork environment. ARP needs to be on the fast path because it is the addressing step that identifies and groups devices, so transmissions can be sent or received immediately, in contrast to the slow path; this also gives devices an easy way to communicate with each other even when they are not regarded as part of the same network. For example, a 48-bit MAC address must be mapped to an IP address, and because this mapping has to be performed for every frame that is forwarded, it cannot sit on the slow path but belongs on the fast path.
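As an illustrative sketch only (the cache entries and the resolve function below are invented for the example and are not taken from any real device), the fast path amounts to a simple table lookup of a MAC address from an IP address, while a cache miss falls back to the slower resolution process:

```python
# Hypothetical ARP cache: IP address -> 48-bit MAC address.
arp_cache = {
    "192.0.2.10": "00:1a:2b:3c:4d:5e",
    "192.0.2.11": "00:1a:2b:3c:4d:5f",
}

def resolve(ip_address):
    """Fast path: a constant-time cache lookup done for every outgoing frame."""
    mac = arp_cache.get(ip_address)
    if mac is not None:
        return mac
    # Slow path: broadcast an ARP request and wait for a reply (omitted here),
    # then cache the answer so future lookups hit the fast path.
    raise LookupError(f"ARP miss for {ip_address}: resolve via ARP request")

print(resolve("192.0.2.10"))   # fast-path hit -> 00:1a:2b:3c:4d:5e
```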
Question Three
At layer 2, an Ethernet switch controls the transmission of frames between the switch ports connected to Ethernet cables using the traffic forwarding rules provided in IEEE 802.1D, the bridging standard (White, Fisch, & Pooch, 2017). Because traffic forwarding is based on address learning, switches can make forwarding decisions based on the 48-bit Media Access Control (MAC) address used in LAN standards, including Ethernet.
To do this, the switch learns which devices, known as stations, are on which network segments by looking at the source addresses in all of the frames it receives. When an Ethernet station sends a frame, it puts two addresses in the frame: the destination address of the device the frame is being sent to, and the source address, which is the address of the device sending the frame.
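The address-learning behaviour described above can be sketched as follows (a simplified, hypothetical model of an 802.1D learning bridge; the port numbers and frame fields are illustrative only):

```python
class LearningSwitch:
    """Minimal sketch of MAC address learning and forwarding."""

    def __init__(self):
        self.mac_table = {}          # 48-bit MAC address -> switch port

    def receive(self, frame, in_port):
        # Learn: remember which port the source address was seen on.
        self.mac_table[frame["src"]] = in_port
        # Forward: if the destination is known, send out that one port;
        # otherwise flood the frame to every port except the one it came in on.
        out_port = self.mac_table.get(frame["dst"])
        return out_port if out_port is not None else "flood"

switch = LearningSwitch()
switch.receive({"src": "AA:AA:AA:AA:AA:AA", "dst": "BB:BB:BB:BB:BB:BB"}, in_port=1)
print(switch.receive({"src": "BB:BB:BB:BB:BB:BB", "dst": "AA:AA:AA:AA:AA:AA"}, in_port=2))  # 1
```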
Question Four
In most systems, an 8-bit byte is the smallest addressable unit of memory. It is therefore not possible to read or write individual bits of the Lucent bit vector directly with standard machine code: to read an individual bit from the Lucent bit vector, the byte that contains it must be read and the bit masked out. Therefore, the memory traffic required to classify a packet with F1 = 0011 and F2 = 1111 is 8 × 4 × 2^10 = 32,768 bits.
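As a sketch of the byte-and-mask read described above (the bit-vector contents below are hypothetical; only the access pattern matters), bit i of a bit vector stored in byte-addressable memory is obtained by reading byte i // 8 and masking out bit i % 8:

```python
def read_bit(bit_vector: bytes, i: int) -> int:
    """Read bit i from a byte-addressable bit vector: one byte read, then a mask."""
    byte = bit_vector[i // 8]        # smallest addressable unit is the 8-bit byte
    return (byte >> (i % 8)) & 1     # mask out the single bit of interest

# Toy 1024-bit (2^10) vector; every single-bit access still costs a full byte read.
vector = bytes([0b10100101] * 128)
print(read_bit(vector, 0), read_bit(vector, 1))   # 1 0
```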
Question Five
To route a message from node (101101) to node (011010), a faulty node is treated as if all the links incident to it are faulty (Wu, 2017). That is, if node (101101) is faulty, the situation is equivalent to one in which every link incident to that node is faulty.
The resulting embedded ring is produced by the ring embedding algorithm. Because some of the embedded vertices have faulty incident links, the message must be rerouted around them.
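As a hedged sketch of the routing step (assuming plain dimension-order hypercube routing; the fault set and function name below are invented for illustration and are not the exact procedure from the assignment), the dimensions in which source and destination differ are found by XOR-ing their labels, and a hop is taken in another differing dimension when the preferred link is faulty:

```python
def hypercube_route(src: int, dst: int, bits: int, faulty_links=frozenset()):
    """Dimension-order routing on a hypercube, skipping faulty links when possible."""
    path, node = [src], src
    while node != dst:
        differing = node ^ dst                    # dimensions still to be corrected
        for d in range(bits):
            if differing >> d & 1:
                neighbour = node ^ (1 << d)       # flip one differing bit
                if frozenset((node, neighbour)) not in faulty_links:
                    node = neighbour
                    path.append(node)
                    break
        else:
            raise RuntimeError("all preferred links are faulty; a detour is needed")
    return path

# Route 101101 -> 011010 on a 6-cube with no faults: five bits differ, so five hops.
print([format(n, "06b") for n in hypercube_route(0b101101, 0b011010, 6)])
```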
References
Mubarak, M., Carothers, C. D., Ross, R. B., & Carns, P. (2017). Enabling parallel simulation of large-scale HPC network systems. IEEE Transactions on Parallel and Distributed Systems, 28(1), 87-100.
Siewiorek, D., & Swarz, R. (2017). Reliable computer systems: Design and evaluation. Digital Press.
White, G. B., Fisch, E. A., & Pooch, U. W. (2017). Computer system and network security. CRC Press.
Wu, J. (2017). Distributed system design. CRC Press.