Microprocessor Case Study


CHAPTER 1 INTRODUCTION

1.1 Motivation

The performance of single-core processors has hit a wall because of power requirements and heat dissipation, so the hardware industry began building multicore CPUs. Although these can execute millions of instructions per second, some computational problems are so complex that even a powerful microprocessor would need years to solve them, and building ever more powerful microprocessors requires an expensive and intensive production process. Partly because of these factors, programmers sometimes use a different approach called parallel processing, in which two or more processors handle separate parts of an overall task concurrently.

Therefore the resulting performance increases are generally smaller in magnitude, and even multicore processors lag behind GPUs in floating-point throughput. This is because a CPU devotes a large portion of its die space to control logic and cache memory, whereas a GPU allocates most of its die space to arithmetic units; thus GPUs are able to perform floating-point operations faster.

1.2 Accelerated Computing on GPU

A GPU is a specialized electronic circuit that helps compute graphical and non-graphical workloads faster. It acts as a co-processor to a conventional CPU to speed up computation: for programs or tasks with a large number of iterations, a CPU takes more computation time than a GPU because of the large number of processor cores present in the GPU. Modern GPUs are very effective at image processing and manipulating computer graphics, and their parallel architecture makes them more effective than general-purpose CPUs for algorithms that require parallel processing of large blocks of data. In a personal computer, a GPU can be present on the motherboard, on a video card, or integrated on the CPU die.

With CUDA, we can send C, C++, and Fortran code straight to the GPU; no assembly language is required. Using these high-level languages, GPU-accelerated applications run the sequential part of the workload on the CPU, which is optimized for single-threaded performance, while accelerating the parallel part on the GPU. This is called "GPU computing."

1.4 Objectives

Convolution is one of the most important mathematical operations used in signal-processing-heavy applications. In computer graphics and image processing, we usually work with discrete functions (e.g. an image) and apply a discrete form of convolution to remove high-frequency noise, sharpen details, detect edges, or otherwise modulate the frequency content of the image. In this project, an efficient convolution filter has been implemented on the GPU using the CUDA architecture and compared with a convolution implementation on the CPU.

CHAPTER 2 LITERATURE
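To make the CPU baseline of the comparison concrete, the following is a minimal sketch in C of the discrete 1-D convolution described above, with "same"-size output and zero padding at the borders. The function name and signature are illustrative, not taken from the project's actual source. The outer loop runs sequentially on the CPU; in the CUDA version each output element would instead be computed by one GPU thread, since every out[i] is independent.

```c
#include <assert.h>
#include <stddef.h>

/* Discrete 1-D convolution: out[i] = sum_j kernel[j] * in[i - j + r],
 * where r = k/2 is the kernel radius. Indices that fall outside the
 * input are treated as zero ("same" output size, zero padding). */
static void convolve1d(const float *in, size_t n,
                       const float *kernel, size_t k,
                       float *out)
{
    long r = (long)k / 2;                 /* kernel radius */
    for (size_t i = 0; i < n; ++i) {      /* sequential here; one thread per i on a GPU */
        float acc = 0.0f;
        for (size_t j = 0; j < k; ++j) {
            long idx = (long)i - (long)j + r;
            if (idx >= 0 && idx < (long)n)
                acc += kernel[j] * in[idx];
        }
        out[i] = acc;
    }
}
```

For example, convolving the signal {1, 2, 3, 4} with the box kernel {1, 1, 1} yields {3, 6, 9, 7}; the border outputs are smaller because out-of-range samples are taken as zero.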

