Literature Review on Cache Coherence Protocols
Shared memory multiprocessors provide the advantage of sharing code and data structures among the processors that make up the parallel application. As a result of this sharing, multiple copies of a shared block may exist in one or more caches at the same time. The copies of the shared block residing in different caches must be kept consistent. This is known as the cache coherence problem.
Various protocols have been designed to enforce coherence in hardware, along with policies that prevent shared writable data from residing in more than one cache at the same time.
Hardware cache coherence protocols include snoopy cache coherence protocols, directory-based cache coherence protocols, and cache-coherent network architectures.
1. Snoopy Cache Coherence Protocols
Snoopy cache coherence protocols are best suited for bus-based, shared memory multiprocessors, as they make use of the broadcast capability of the single interconnect. Snoopy cache coherence protocols can be divided into two main categories:
Write Invalidate and Write Update.
1.1 Write Invalidate Protocols
In Write Invalidate protocols, the processor that modifies a block of shared data invalidates all other copies of that block in other caches and can then update its own copy without further bus operations. Four protocols fall under this category:
1.1.1 Goodman Protocol
This protocol was proposed by Goodman in 1983 and was the first write-invalidate protocol. It is also known as the write-once protocol.
This protocol associates a state with each cached copy of a shared data block. The states that can be associated with the block are as follows:
• VALID: The copy of the block is consistent with the memory copy.
• INVALID: The copy of the block does not contain valid data.
... middle of paper ...
...Cache Coherence Protocol
2.3 Chained Directories Cache-Coherence Protocol
This protocol keeps track of the shared copies of data by maintaining a chain of directory pointers, hence the name chained directories protocol.
Fig. Chained Directories Cache Coherence Protocol
Suppose that there are no shared copies of location X. If processor P1 reads location X the memory sends a copy together with a chain termination (CT) and keeps a pointer to P1. Subsequently, when processor P2 reads location X, the memory sends a copy to the cache of processor P2 along with a pointer to the cache of processor P1. If processor P3 writes to location X, it is necessary to send a data invalidation message down the chain. To ensure sequential consistency, the memory module denies processor P3 write permission until the processor with the chain termination acknowledges the invalidation of the chain.
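To make the pointer chain concrete, the following Python sketch models the read and write actions described above; the class and method names (DirectoryEntry, read_miss, write_request) are illustrative placeholders, not part of any real machine's protocol. A read prepends the requesting cache to the chain and hands it a pointer to the previous head (or the chain terminator), while a write walks the chain and invalidates every copy before write permission is granted.

```python
# Minimal sketch of a chained-directory entry for one memory block (location X).
# All names here are illustrative, not taken from a real system.

class DirectoryEntry:
    def __init__(self):
        self.chain = []  # head-first list of sharer cache IDs; the tail holds the chain terminator (CT)

    def read_miss(self, cache_id):
        """A cache reads the block: memory supplies the data plus a pointer to the previous head."""
        self.chain.insert(0, cache_id)  # the new reader becomes the head of the chain
        next_sharer = self.chain[1] if len(self.chain) > 1 else "CT"
        return {"data": "copy of X", "next_sharer": next_sharer}

    def write_request(self, cache_id):
        """A cache wants to write: invalidate every sharer down the chain, then grant permission."""
        for sharer in self.chain:
            if sharer != cache_id:
                print(f"invalidate copy in cache {sharer}")
        # Write permission is granted only after the cache holding the CT acknowledges.
        self.chain = [cache_id]
        return "write permission granted"

entry = DirectoryEntry()
entry.read_miss("P1")              # P1 gets the data and the chain terminator
entry.read_miss("P2")              # P2 gets the data and a pointer to P1's cache
print(entry.write_request("P3"))   # invalidations travel down the chain before P3 may write
```

In a real chained directory the memory module holds only the pointer to the head of the chain and each cache stores the pointer to the next sharer; the single list above is just a compact way to model that chain in one place.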
In this lab, we used the Transmission Control Protocol (TCP), a connection-oriented protocol, to demonstrate congestion control algorithms. As the name suggests, these algorithms are used to avoid network congestion. The algorithms were implemented in three different scenarios: No Drop, Drop_Fast, and Drop_NoFast.
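As a rough illustration of how those scenarios differ, the toy Python model below (not the lab's actual simulation, and with made-up round counts and thresholds) tracks a congestion window through slow start, congestion avoidance, and a single loss event, with and without a fast-recovery-style reaction to that loss.

```python
# Toy model of TCP congestion control (window counted in segments); illustrative only.

def evolve_cwnd(rounds, loss_rounds, fast_recovery=True, ssthresh=32):
    cwnd, history = 1, []
    for r in range(rounds):
        if r in loss_rounds:                        # packet loss detected in this round
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh if fast_recovery else 1 # fast recovery halves; otherwise restart at 1
        elif cwnd < ssthresh:
            cwnd *= 2                               # slow start: exponential growth per RTT
        else:
            cwnd += 1                               # congestion avoidance: linear growth per RTT
        history.append(cwnd)
    return history

print(evolve_cwnd(12, {6}, fast_recovery=False))  # Drop_NoFast-like: window collapses to 1 after the loss
print(evolve_cwnd(12, {6}, fast_recovery=True))   # Drop_Fast-like: window halves and keeps growing linearly
```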
In order to prevent both intentional and unintentional alteration and destruction of information, any software application needs controls to ensure the reliability of its data. Below are two specific controls for each of the three data control categories, and how each control contributes to ensuring data reliability, in the format requested.
DFS guarantees clients full functionality whenever they are connected to the system. By replicating files and spreading the replicas across different nodes, DFS provides reliability for the whole file system: when one node crashes, it can serve the client from another replica on a different node. DFS achieves reliable communication by using TCP/IP, a connection-oriented protocol; once a failure occurs, it can detect it immediately and set up a new connection. For single-node storage, DFS uses RAID (Redundant Array of Inexpensive/Independent Disks) to tolerate hard disk drive failures by using additional disks, uses journaling to prevent inconsistent file system states, and uses a UPS (Uninterruptible Power Supply) to allow the node to save all critical data.
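A minimal sketch of the replication idea is shown below, assuming a made-up replica map and node names (read_block, REPLICA_MAP, node-1, and so on are hypothetical, not part of any particular DFS): a read simply falls over to the next replica when the first node is down.

```python
# Hedged sketch: reading a file from a replicated file system, falling over to another
# replica when the first node is unreachable. All node and file names are placeholders.

class NodeDown(Exception):
    pass

CRASHED_NODES = {"node-1"}
REPLICA_MAP = {"/data/report.txt": ["node-1", "node-3", "node-7"]}   # file -> replica locations

def read_block(node, path):
    """Pretend RPC to a storage node; raises NodeDown if that node has crashed."""
    if node in CRASHED_NODES:
        raise NodeDown(node)
    return f"contents of {path} served by {node}"

def read_with_failover(path):
    for node in REPLICA_MAP[path]:
        try:
            return read_block(node, path)   # first healthy replica wins
        except NodeDown:
            continue                        # crashed node: try the next replica
    raise IOError(f"all replicas of {path} unavailable")

print(read_with_failover("/data/report.txt"))   # served by node-3 because node-1 is down
```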
The first is called store and forward, which is used for transferring digital images from one location to another (Wager, Lee, & Glaser, 2013, p. 157).
In the WMM, memory is considered an active process and not just a passive store of information, unlike in the MSM.
The last phase is data exchange. In this phase, the client and server exchange data by creating one or more data channels. Within each channel, the flow is controlled using the available window space. A channel's life has three stages: open channel, data transfer, and close channel. Once the channel is opened by either party, data is transferred and then the channel is closed by either party [3].
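A simplified sketch of that window-based flow control could look like the following; the Channel class and the window sizes are hypothetical, not a real implementation of the protocol in [3]. The sender may only transmit as much as the window allows, and the receiver grants more window space as it consumes data.

```python
# Simplified model of per-channel, window-based flow control during data exchange.
# Illustrative only: names and sizes are made up.

class Channel:
    def __init__(self, initial_window):
        self.window = initial_window     # bytes the peer is currently allowed to send
        self.received = bytearray()

    def send(self, data: bytes) -> int:
        """Sender side: transmit at most `window` bytes; the rest must wait."""
        allowed = data[:self.window]
        self.window -= len(allowed)
        self.received += allowed
        return len(allowed)

    def window_adjust(self, extra: int):
        """Receiver side: after consuming data, grant the sender more window space."""
        self.window += extra

ch = Channel(initial_window=8)
sent = ch.send(b"hello world!")      # only 8 bytes fit in the initial window
ch.window_adjust(16)                 # receiver frees buffer space, sender may continue
ch.send(b"hello world!"[sent:])      # remaining 4 bytes go through
print(ch.received.decode())          # -> "hello world!"
```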
Virtualization technologies provide isolation of operating systems from hardware. This separation enables hardware resource sharing. With virtualization, a system pretends to be two or more of the same system [23]. Most modern operating systems contain a simplified system of virtualization. Each running process is able to act as if it is the only thing running. The CPUs and memory are virtualized. If a process tries to consume all of the CPU, a modern operating system will pre-empt it and allow others their fair share. Similarly, a running process typically has its own virtual address space that the operating system maps to physical memory to give the process the illusion that it is the only user of RAM.
Peer-to-peer is a communications model in which each party has the same capabilities and either party can initiate a communication session. Other models with which it might be contrasted include the client/server model and the master/slave model. In some cases, peer-to-peer communication is implemented by giving each communicating node both server and client capabilities. In recent usage, peer-to-peer has come to describe applications in which users can use the Internet to exchange files with each other directly or through a mediating server.
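A minimal sketch of the "both server and client" idea, using plain Python sockets with placeholder host and port values (no real P2P protocol), is shown below: the same process listens for incoming peers on one thread while dialing out to a peer itself.

```python
# Sketch of a peer acting as both server and client. Host/port values are placeholders.

import socket
import threading
import time

def serve(port):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen()
        conn, _ = srv.accept()            # server role: accept a session opened by another peer
        with conn:
            conn.sendall(f"hello from the peer listening on port {port}".encode())

def dial(port):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", port))  # client role: initiate a session with another peer
        print(cli.recv(1024).decode())

threading.Thread(target=serve, args=(9001,), daemon=True).start()
time.sleep(0.2)                           # give the listener a moment to start (sketch only)
dial(9001)                                # the same process both accepts and initiates sessions
```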
2. To transfer files from one computer to another (the files may be text, images, audio, video, etc.).
In simple terms, it is just storage located remotely which you can access from anywhere. It is like storing your files online and accessing them anywhere from your laptop, mobile device, or another PC.
Paging is one of the memory-management schemes by which a computer can store and retrieve data from secondary storage for use in main memory. Paging is used for faster access to data. The paging memory-management scheme works by having the operating system retrieve data from secondary storage in same-size blocks called pages. Paging writes data from main memory to secondary storage and also reads data from secondary storage into main memory. The main advantage of paging over memory segmentation is that it allows the physical address space of a process to be noncontiguous. Before paging was implemented, systems had to fit whole programs into storage contiguously, which caused various storage problems and fragmentation inside the operating system (Belzer, Holzman, & Kent, 1981). Paging is a very important part of virtual memory impl...
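As a small illustration of the translation step, the Python sketch below splits a virtual address into a page number and an offset and maps the page to a physical frame through a toy page table. The page size and mappings are made-up values, and a missing entry stands in for a page fault that would bring the page in from secondary storage.

```python
# Toy page-table lookup: virtual address -> (page number, offset) -> physical address.
# Sizes and mappings are illustrative values only.

PAGE_SIZE = 4096                      # 4 KiB pages
PAGE_TABLE = {0: 7, 1: 2, 2: 9}       # virtual page number -> physical frame number

def translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in PAGE_TABLE:
        raise LookupError("page fault: bring the page in from secondary storage")
    frame = PAGE_TABLE[page]          # frames need not be contiguous in physical memory
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1A2C)))         # virtual page 1, offset 0xA2C -> frame 2 -> 0x2A2C
```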
This is one of the latest advancements in wireless data transfer. It is used in GSM (Global System for Mobile Communications) for transferring data in packets.
... middle of paper ...
TCP/IP operates at levels 3 and 4 of the OSI model.
The Von Neumann bottleneck is a limitation on data throughput caused by the standard personal computer architecture. Earlier computers were fed programs and data for processing while they were running. Von Neumann created the idea behind the stored-program computer, our current standard model. In the Von Neumann architecture, programs and data are held in memory; the processor and memory are separate, and consequently data moves between the two. In that configuration, latency is unavoidable. In recent years, processor speeds have increased considerably. Memory enhancements, in contrast, have mostly been in size or volume, allowing more data to be stored in less space rather than improving transfer rates. As processor speeds have increased, processors now spend an increasing amount of time idle, waiting for data to be fetched from memory. All in all, no matter how fast or powerful a...
In designing a computer system, architects consider five major elements that make up the system's hardware: the arithmetic/logic unit, control unit, memory, input, and output. The arithmetic/logic unit performs arithmetic and compares numerical values. The control unit directs the operation of the computer by taking the user instructions and transforming them into electrical signals that the computer's circuitry can understand. The combination of the arithmetic/logic unit and the control unit is called the central processing unit (CPU). The memory stores instructions and data.