PARALLEL ARCHITECTURAL LANDSCAPE
Parallel computing, in its basic sense, means carrying out multiple operations simultaneously: a problem is divided into sub-problems that can be solved concurrently. Throughout history, successful attempts have been made to increase the degree of parallelism in computing as much as possible. Along the way, many restrictions were encountered, and possible solutions were suggested by the brightest minds. Parallelism can be classified in several ways:
1. Fine-Grained Parallelism – When the processors must communicate with each other many times per second.
2. Coarse-Grained Parallelism – When the processors communicate with each other only once every few seconds.
3. Bit-Level Parallelism – When the number of operations to be performed is reduced by increasing the word size. The first microprocessor launched by Intel, the 4-bit 4004 in 1971, has given way to the mostly 64-bit systems we work on today. This was the main source of speed-up until the mid-1980s.
4. Instruction-Level Parallelism – When instructions are grouped and executed in parallel. Modern processors use pipelining, in which each stage of the pipeline works on a different instruction at the same time.
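Coarse-grained parallelism from the list above can be sketched in a few lines of Python. This is a minimal illustration, not taken from the text: the prime-counting work function and the chunk sizes are invented for the example, and each chunk runs in its own process with essentially no communication until the results are collected.

```python
# A minimal sketch of coarse-grained parallelism: independent chunks of
# work are farmed out to separate processes that rarely communicate.
from multiprocessing import Pool

def count_primes(limit):
    """Count primes below `limit` by trial division (deliberately simple)."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    chunks = [5000, 5000, 5000, 5000]            # four independent sub-problems
    with Pool(processes=4) as pool:
        results = pool.map(count_primes, chunks)  # one chunk per worker
    print(sum(results))
```

Because the workers only exchange data at the start and end, the communication-to-computation ratio stays low, which is exactly what makes a workload coarse-grained.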
FLYNN’S TAXONOMY
                  Single instruction    Multiple instruction
Single data       SISD                  MISD
Multiple data     SIMD                  MIMD
1. SISD : Single Instruction-Single Data
This is the simplest kind of architecture, equivalent to an entirely sequential program; it employs hardly any parallelism.
2. MISD : Multiple Instruction-Single Data
No significant applications apart from systolic arrays have been devised for this kind of architecture and therefore this classification is rarely used.
3. SIMD : Single Instruction-Multiple Data
... model cannot be extended beyond 32 processors.
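Although the SIMD discussion above is truncated, its core idea (one instruction applied to many data elements at once) can be sketched by contrasting a scalar loop with a vectorized operation. This example uses NumPy's array arithmetic as a software stand-in for hardware vector instructions; the array contents are arbitrary.

```python
# Contrast SISD (a scalar loop, one data element per step) with
# SIMD-style data parallelism (one operation over all elements at once).
import numpy as np

a = np.arange(8)          # [0, 1, ..., 7]
b = np.arange(8) * 10

# SISD view: one instruction, one data element per step.
sisd = np.empty(8, dtype=a.dtype)
for i in range(8):
    sisd[i] = a[i] + b[i]

# SIMD view: the same "add" applied to every element in one expression.
simd = a + b

print(np.array_equal(sisd, simd))  # True
```

The results are identical; the difference is that the vectorized form exposes the whole operation to the hardware at once, which is what SIMD units exploit.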
Parallel Computing in Future
In my opinion, the major potential in parallel computing lies in the software part. Hardware architectures have been constantly evolving for the last 40 years, and sooner or later they may begin to saturate; the number of transistors cannot keep increasing forever. Even though software has evolved, it is still not up to pace. There is a dearth of programmers trained to design and program parallel systems. Intel recently launched its Parallel Computing Center program, whose stated purpose is "keeping the parallel software in sync with the parallel hardware". The international community needs to develop parallel programming skills to keep pace with the new processors being created. As this realization spreads, the parallel architectural landscape will reach even greater heights than expected.
If you are one of the people who are not convinced by multi-core processors and are adamant that no program needs more than two cores, then you should stop reading right about now. However, if you are one who embraces technology, be it beneficial now or in the future, 2010 has to be one of the best years in CPU technology in a long time. AMD and Intel have both introduced six-core CPUs, and both have been met with some excitement, rightfully so, because six cores really are better than four.
...he internet and listening to music and doing other humble tasks at the same time, because one task will go to one of the processors and the music task will go to the other processor, unless the program is coded to use multithreading.
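The multithreading mentioned above can be sketched with Python's standard threading module. The two task names ("browsing" and "music") are stand-ins for the examples in the text; on a multi-core CPU the operating system may schedule the two threads on different cores.

```python
# Two unrelated tasks running concurrently inside one program.
import threading
import time

done = []  # records the order in which tasks finish

def task(name, steps):
    for _ in range(steps):
        time.sleep(0.01)   # pretend to do a slice of work
    done.append(name)

browse = threading.Thread(target=task, args=("browsing", 3))
music = threading.Thread(target=task, args=("music", 3))
browse.start()
music.start()              # both threads are now running concurrently
browse.join()
music.join()
print(sorted(done))
```

Without explicit threads like these, a single-threaded program can only ever occupy one core, no matter how many the machine has.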
The history of computers is an amazing story filled with interesting statistics. “The first computer was invented by a man named Konrad Zuse. He was a German construction engineer, and he used the machine mainly for mathematic calculations and repetition” (Bellis, Inventors of Modern Computer). The invention shocked the world; it inspired people to start the development of computers. Soon after,
“After the integrated circuits the only place to go was down—in size, that is. Large scale integration (LSI) could fit hundreds of components onto one chip. By the 1980’s, very large scale integration (VLSI) squeezed hundreds of thousands of components onto a chip. Ultra-large scale integration (ULSI) increased that number into millions. The ability to fit so much onto an area about half the size of ...
GPUs and CPUs are used in a variety of computer systems. They can be used even to view the heavens. They are what enable us to send messages halfway across the world in a matter of milliseconds. They are the reason why science is as advanced as it is today. In modern society, teenagers rely on the CPU for the internet; it is a source of entertainment, social networking, homework help, and sometimes even friendships. Many adults use the GPU and CPU to write documents, send email, manage paychecks and social security records, store important documents, and even play Solitaire.
A dual-core processor has two separate cores on the same processor, each with its own cache. It essentially is two microprocessors in one. In a dual-core processor, each core handles arriving data strings simultaneously to improve efficiency.
Microprocessors and Angelic Self-possession: The microprocessors of today's computers are integrated circuits which contain the CPU on a single chip. The latest developments, with variable clock speeds now often exceeding 200 MHz, include Intel's Pentium chip, the IBM/Apple/Motorola PowerPC chip, as well as chips from Cyrix and AMD. The CPU chip is the heart of the computer; only memory and input-output devices have to be added. A small fan might be added on top of the fastest chips to cool them down, but in the chip itself there are no moving parts, no complex gaps between the movement being imparted and that which imparts the movement.
A port is a point at which you can attach leads from devices to the computer.
... SoC, such as processors, memories, accelerators, and peripherals. This architectural model is often referred to as the parallel architecture model.
The Von Neumann bottleneck is a limitation on throughput caused by the standard personal-computer architecture. Earlier computers were fed programs and data for processing while they were running. Von Neumann created the idea behind the stored-program computer, our current standard model. In the Von Neumann architecture, programs and data are held in memory; the processor and memory are separate, and consequently data moves between the two. In that configuration, latency is unavoidable. In recent years, processor speeds have increased considerably. Memory enhancements, in contrast, have mostly been in size or volume, the ability to store more data in less space, rather than in transfer rates. As processor speeds have increased, processors now spend an increasing amount of time idle, waiting for data to be fetched from memory. All in all, no matter how fast or powerful a...
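The idle-processor effect described above can be modeled with a toy cycle counter. The latency numbers below are invented purely for illustration; real fetch and ALU latencies vary widely by machine. The point is only that when memory is an order of magnitude slower than the processor, the processor spends most of its cycles waiting.

```python
# A toy model of the Von Neumann bottleneck: a fast processor stalls
# while a slow memory bus delivers each word. Cycle counts are made up.
MEM_LATENCY = 10   # cycles to fetch one word from memory
ALU_LATENCY = 1    # cycles to process one word

def run(words):
    busy = idle = 0
    for _ in range(words):
        idle += MEM_LATENCY    # CPU waits for the fetch to complete
        busy += ALU_LATENCY    # then does the actual work
    return busy, idle

busy, idle = run(100)
print(f"CPU utilization: {busy / (busy + idle):.0%}")  # prints "CPU utilization: 9%"
```

With these made-up numbers the processor is doing useful work only about 9% of the time, which is why caches and wider memory buses exist: they attack the idle term, not the busy one.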
Both computing types involve multitenancy and multitasking. This means many customers can perform different tasks by accessing a single instance or multiple instances of resources. Sharing resources helps in reducing peak load capacity [32].
A processor is the chip inside a computer which carries out the functions of the computer at various speeds. There are many processors on the market today. The two most well-known companies that make processors are Intel and AMD. Intel produces the Pentium chip, with the most recent version being the Pentium 3. Intel also produces the Celeron processor (Intel processors). AMD produces the Athlon processor and the Duron processor (AMD presents).
Its prime role is to process data with speed once it has received instructions. A microprocessor is generally advertised by its speed in gigahertz. Some of the most popular chips are known as the Pentium or Intel Core. When purchasing a computer, the microprocessor is one of the main essentials to review before selecting your machine: the faster the microprocessor, the faster your data will be processed when navigating through the software.
Software, such as programming languages and operating systems, makes the details of the hardware architecture invisible to the user. For example, computers that use the C programming language or a UNIX operating system may appear the same from the user's viewpoint, although they use different hardware architectures. When a computer carries out an instruction, it proceeds through five steps. First, the control unit retrieves the instruction from memory—for example, an instruction to add two numbers. Second, the control unit decodes the instruction into electronic signals that control the computer.
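The five-step instruction cycle that the paragraph above begins to describe (fetch, decode, fetch operands, execute, store the result) can be sketched as a toy simulation. The instruction format, register names, and single ADD opcode below are all invented for illustration; real control units work on binary encodings, not tuples.

```python
# A toy sketch of the five-step instruction cycle:
# fetch, decode, fetch operands, execute, store result.
memory = {"instr": ("ADD", "r1", "r2", "r0")}
registers = {"r0": 0, "r1": 2, "r2": 3}

# 1. Fetch: the control unit retrieves the instruction from memory.
instruction = memory["instr"]

# 2. Decode: break the instruction into an opcode and operand names.
opcode, src1, src2, dst = instruction

# 3. Fetch operands: read the source registers.
a, b = registers[src1], registers[src2]

# 4. Execute: perform the operation (only ADD is modeled here).
result = a + b if opcode == "ADD" else None

# 5. Store: write the result back to the destination register.
registers[dst] = result
print(registers["r0"])  # prints 5
```

Real processors pipeline these five steps, so that while one instruction is executing, the next is already being decoded and a third is being fetched.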
The computer has progressed in many ways, but the most important improvement is in speed and operating capabilities. It was only around 6 years ago that a 386 DX2 processor was the fastest and most powerful CPU on the market. This processor could do a plethora of small tasks and still not be working too hard. Around 2-3 years ago, the Pentium came out, paving the way for new and faster computers. Intel was the most proficient in this area and came out with a range of processors from 66 MHz to 166 MHz. These processors are now also starting to become obsolete. Today's computers come equipped with 400-600 MHz processors that can multitask at an alarming rate. Intel has just started the release phase of its new Pentium III 800 MHz processor. Glenn Henry is