Parallel Computing


PARALLEL ARCHITECTURAL LANDSCAPE

Parallel computing, in its basic sense, means carrying out multiple operations simultaneously: a problem is divided into sub-problems that can be solved concurrently. Throughout computing history, attempts have been made, many of them successful, to increase the degree of parallelism as much as possible. Along the way, many restrictions were encountered, and possible solutions were suggested by the brightest minds. Parallelism can be classified in several ways:

1. Fine-Grained Parallelism – when the processors must communicate with one another many times per second.

Coarse-Grained Parallelism – when the processors communicate with one another only once every few seconds.

2. Bit-Level Parallelism – when the number of operations to be performed is reduced by increasing the word size. The first processor launched by Intel in the 1970s was 4-bit, while the systems we work on today are mostly 64-bit. Increasing word size was the main source of speed-up until the mid-1980s.
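The effect of word size on operation count can be illustrated with a toy sketch (my own example, not from the essay): adding two 64-bit values using narrow words requires one addition per word plus carry handling, so widening the word shrinks the number of operations needed.

```python
def add_with_word_size(a, b, word_bits):
    """Add two non-negative integers word by word, returning the sum
    and the number of word-sized additions performed."""
    mask = (1 << word_bits) - 1
    result = carry = shift = ops = 0
    while a or b or carry:
        s = (a & mask) + (b & mask) + carry   # one word-sized addition
        ops += 1
        carry = s >> word_bits                # carry into the next word
        result |= (s & mask) << shift
        a >>= word_bits
        b >>= word_bits
        shift += word_bits
    return result, ops

x, y = 2**60 + 12345, 2**59 + 67890
print(add_with_word_size(x, y, 8))   # 8-bit words: 8 additions
print(add_with_word_size(x, y, 64))  # 64-bit words: 1 addition
```

With 8-bit words, these 64-bit operands take eight additions; with a 64-bit word, the same sum takes one, which is the speed-up bit-level parallelism provided.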

Instruction-Level Parallelism – when instructions are grouped and then executed in parallel. Modern processors use pipelining, in which each stage works on a different instruction during the same cycle.
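Pipelining's benefit can be sketched with a toy model (my own illustration; the 5-stage depth is an assumption, not from the text): with k stages, n instructions finish in n + k - 1 cycles instead of n * k, because each stage holds a different instruction in the same cycle.

```python
def cycles_sequential(n_instructions, n_stages):
    # no overlap: every instruction occupies the whole pipeline alone
    return n_instructions * n_stages

def cycles_pipelined(n_instructions, n_stages):
    # the first instruction fills the pipeline, then one completes per cycle
    return n_stages + (n_instructions - 1)

def pipeline_snapshot(cycle, n_stages):
    # which instruction index occupies each stage at a given cycle,
    # assuming one instruction enters the pipeline per cycle from cycle 0
    return [cycle - s if cycle - s >= 0 else None for s in range(n_stages)]

print(cycles_sequential(100, 5))   # 500 cycles without pipelining
print(cycles_pipelined(100, 5))    # 104 cycles with a 5-stage pipeline
print(pipeline_snapshot(4, 5))     # five different instructions in flight
```

The snapshot at cycle 4 shows instructions 4, 3, 2, 1, and 0 occupying the five stages simultaneously, which is exactly the overlap described above.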

FLYNN’S TAXONOMY

                 Single instruction    Multiple instruction
Single data      SISD                  MISD
Multiple data    SIMD                  MIMD

1. SISD : Single Instruction-Single Data

This is the simplest kind of architecture. It is equivalent to an entirely sequential program and employs hardly any parallelism.
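The SISD model can be contrasted with the SIMD model covered below in a toy sketch (my own example, not from the essay; the 4-wide vector lane is an assumption): SISD issues one instruction per data element, while SIMD issues one instruction over a whole vector of elements.

```python
LANES = 4  # assumed vector width for the illustration

def sisd_double(data):
    # SISD style: one multiply instruction per element, sequentially
    out, instructions = [], 0
    for x in data:
        out.append(x * 2)
        instructions += 1
    return out, instructions

def simd_double(data):
    # SIMD style: one (logical) vector multiply per group of LANES elements
    out, instructions = [], 0
    for i in range(0, len(data), LANES):
        out.extend(x * 2 for x in data[i:i + LANES])
        instructions += 1
    return out, instructions

data = list(range(8))
print(sisd_double(data))  # same result, 8 instructions
print(simd_double(data))  # same result, 2 instructions
```

Both produce identical output, but the SIMD version needs a quarter of the instructions, which is where its speed-up comes from.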

2. MISD : Multiple Instruction-Single Data

No significant applications apart from systolic arrays have been devised for this kind of architecture and therefore this classification is rarely used.

3. SIMD : Single Instruction-Multip...

... middle of paper ...

... model cannot be extended beyond 32 processors.

PARALLEL COMPUTING IN THE FUTURE

In my opinion, the major potential in parallel computing lies on the software side. Hardware architectures have been evolving constantly for the last 40 years, and sooner or later saturation may set in; the number of transistors cannot keep increasing forever. Software has evolved too, but it is still not up to pace, and there is a dearth of programmers trained to design and program parallel systems. Intel recently launched its Parallel Computing Center program with the main purpose of "keeping the parallel software in sync with the parallel hardware". The international community needs to develop parallel programming skills to keep pace with the new processors being created. As this realization spreads, the parallel architectural landscape will reach even greater heights than expected.
