Evolution Of GPU Computing

1. Introduction
When it comes to raw computational throughput, the GPU provides a significant edge over the CPU, which has made GPU computing one of the most active areas of modern industrial research and development.
A GPU, or graphics processing unit, is the component that lets a personal computer render the high-definition graphics that present-day computing demands. Like the central processing unit (CPU), it is a single-chip processor. However, a current CPU has at most four or eight cores, compared with the hundreds of cores in a GPU. GPUs were originally built for graphics: because graphics calculations place a heavy burden on the CPU, offloading them to a dedicated GPU lets the computer run more efficiently. Although the GPU came into existence for graphical work, it now serves additional purposes in general computing, where its precision and performance matter.
Over the years, the evolution of the GPU has been driven toward better floating-point performance. NVIDIA introduced its parallel architecture, the Compute Unified Device Architecture (CUDA), in 2006-2007 and changed the outlook of GPU computing. A CUDA-capable GPU contains a large number of processor cores that work together to churn through the data set an application provides. GPU computing, or general-purpose GPU (GPGPU) computing, is the use of a GPU to perform general-purpose scientific and engineering computation. The model for GPU computing is to use the CPU and the GPU together in a heterogeneous co-processing model: the computationally intensive part of the application is accelerated by the GPU, while the sequential part runs on the CPU.
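
A minimal sketch of this co-processing model, assuming a simple element-wise vector addition (the kernel name, array size, and launch parameters below are illustrative, not taken from the text):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Device code: each GPU thread handles one element (the data-parallel part).
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;                 // one million elements (illustrative)
    size_t bytes = n * sizeof(float);

    // Host (CPU) code: ordinary sequential C/C++.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Copy inputs to the GPU, launch the kernel, copy the result back.
    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", h_c[0]);         // expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

Everything outside the kernel and its launch is ordinary sequential CPU code, which is exactly the division of labour the co-processing model describes.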

... middle of paper ...

... unified source code encompassing both host and device code; the host code is straightforward C code, while the device (GPU) code is written with CUDA extensions that label data-parallel functions and their associated data structures. When no GPU device is available, the device code can still run, but the CPU executes it instead, and execution is far slower than when the GPU runs that part of the code. This fallback is made possible by emulation features provided in the CUDA software development kit.
Another advantage of CUDA is that you do not have to write the whole program with it. You can simply insert kernel calls that invoke CUDA functions wherever you need faster processing of large mathematical computations, as in the sketch below.
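
As a sketch of that mixed style (the kernel and function names here are illustrative), an otherwise ordinary C routine can hand off a single heavy, data-parallel loop to a CUDA kernel and then continue on the CPU:

```cuda
#include <cuda_runtime.h>

// Device code: only this heavy, data-parallel step runs on the GPU.
__global__ void scaleKernel(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

// Ordinary host function: everything except the kernel call is plain C.
void scaleOnGpu(float *host_data, float factor, int n) {
    float *d_data;
    size_t bytes = n * sizeof(float);

    cudaMalloc((void **)&d_data, bytes);
    cudaMemcpy(d_data, host_data, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scaleKernel<<<blocks, threads>>>(d_data, factor, n);   // the offloaded step

    cudaMemcpy(host_data, d_data, bytes, cudaMemcpyDeviceToHost);
    cudaFree(d_data);
    // The caller carries on with normal sequential CPU code from here.
}
```

The rest of the program never needs to know that this one routine used the GPU, which is what makes it easy to adopt CUDA incrementally.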
