CHAPTER 3
Parallel Computing
Traditionally, a program is written and executed on a single computer with a single processor, but a sufficiently large problem can take an impractically long time to solve this way.
As an extension of this single-computation approach, a parallel approach is therefore used, in which the same problem is solved on different processors of the same computer, on different computers, or on a combination of both.
The principle behind parallel computing is that a large problem is subdivided into smaller ones, and these sub-problems are solved at the same time, either on the same processor or on different processors [6]. The main purpose is to reduce the time required to solve the problem.
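As a concrete (and purely illustrative) sketch of this principle, the following Python fragment, written for this discussion and not taken from the original work, subdivides one large summation into smaller chunks and solves the sub-problems at the same time on several worker processes using the standard multiprocessing module.

    # Minimal sketch: divide a large problem (summing a list of numbers)
    # into sub-problems and solve them in parallel on several processors.
    from multiprocessing import Pool

    def partial_sum(chunk):
        # Each worker solves one sub-problem sequentially.
        return sum(chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))      # the "large problem"
        n_workers = 4
        chunk_size = len(data) // n_workers
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

        with Pool(processes=n_workers) as pool:
            partials = pool.map(partial_sum, chunks)   # sub-problems solved at the same time

        total = sum(partials)              # combine the partial results
        print(total)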
The main reasons to move to parallel computing are:
Save time: Parallelization takes less time to solve a problem than serial computing, so parallel computing is useful whenever problems must be solved quickly.
Solve large problems: Many problems are so large or complex that solving them on a single computer is impractical or impossible, especially when the available memory is limited. In such situations, parallel computing is used.
Provide concurrency: A single computer executes one task at a time, so when multiple tasks must be carried out together, parallel computing is needed to execute them at the same time.
3.1 Flynn’s Taxonomy
According to Flynn's Taxonomy, parallel computer architectures are divided into four types, based on the number of instruction streams (SI = single instruction, MI = multiple instruction) and data streams (SD = single data, MD = multiple data):

              SD      MD
      SI     SISD    SIMD
      MI     MISD    MIMD

SISD: Single Instruction, Single Data
SIMD: Single Instruction, Multiple Data
MISD: Multiple Instruction, Single Data
MIMD: Multiple Instruction, Multiple Data
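To make the distinction between the SISD and SIMD styles more tangible, the short Python sketch below, added here as an illustration, contrasts processing one data element per instruction with applying a single operation to a whole array; NumPy vectorization is used only as an analogy for hardware SIMD instructions.

    # Illustrative contrast between SISD-style and SIMD-style processing.
    # NumPy vectorization stands in for hardware SIMD here; it is an analogy only.
    import numpy as np

    a = np.arange(8, dtype=np.float64)
    b = np.arange(8, dtype=np.float64)

    # SISD style: one instruction operates on one pair of data elements at a time.
    c_sisd = np.empty_like(a)
    for i in range(len(a)):
        c_sisd[i] = a[i] + b[i]

    # SIMD style: a single (vector) operation is applied to many data elements at once.
    c_simd = a + b

    assert np.array_equal(c_sisd, c_simd)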
3.3.1 Master-Slave Model
The master-slave model is the most popular variant of the parallel GA and is used to extend the computing power of the simple GA. It is also known as global parallelization or distributed fitness evaluation. The algorithm behind this model uses a single population: the population is processed much as in the sequential GA, while the calculation of fitness (and, in some variants, the application of the genetic operators) is carried out in parallel. Selection and mating are performed globally, so each individual can compete and mate with any other.
As the name suggests, one node becomes the master and all others are slaves. The master stores the whole population and sends its individuals to the different slaves, which calculate the fitness (or apply the genetic operators) and send the results back to the master. This makes use of the computing power of several processors. Finally, the master node performs the selection of the optimal individuals.
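A minimal sketch of this master-slave structure is given below; it is a hypothetical Python illustration built on the standard multiprocessing module, and the fitness function, population size and selection rule are placeholders chosen only for demonstration. The master holds the single population, the slaves evaluate fitness in parallel, and the master then performs the global selection.

    # Sketch of master-slave (global) parallelization of a GA:
    # the master keeps the single population, slaves evaluate fitness in parallel.
    import random
    from multiprocessing import Pool

    def fitness(individual):
        # Placeholder fitness: maximize the sum of the genes.
        return sum(individual)

    def select(population, scores, k):
        # Master performs global selection over the whole population.
        ranked = sorted(zip(scores, population), reverse=True)
        return [ind for _, ind in ranked[:k]]

    if __name__ == "__main__":
        population = [[random.randint(0, 1) for _ in range(20)] for _ in range(40)]

        with Pool(processes=4) as slaves:
            # Individuals are sent to the slaves; each slave returns a fitness value.
            scores = slaves.map(fitness, population)

        parents = select(population, scores, k=10)   # master selects globally
        print(max(scores), parents[0])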
Let us now consider the quality of the individuals in the population over time. At the start of the algorithm the individuals are of low quality; as time goes by, the individuals in the population become of higher quality and approach the peaks of the local and global optima. The figure below illustrates these stages of the algorithm.
Over the years, computer science kept evolving, leading to the emergence of what has become a standard in modern software development: multitasking. Whether logical or physical, it has become a requirement for today's programs, and making it possible required establishing the notions of concurrency and scheduling. In this essay, concurrency will be discussed along with two types of scheduling, pre-emptive scheduling as used with threads and cooperative scheduling as used with agents, together with their similarities and differences.
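To preview the two scheduling styles in code, the following Python sketch, provided here as an illustration rather than a definitive implementation, contrasts pre-emptively scheduled threads (the runtime may interleave them at any point) with cooperatively scheduled coroutines, which hand over control only at explicit yield points.

    # Sketch: pre-emptive scheduling (threads, interleaved by the runtime/OS)
    # versus cooperative scheduling (coroutines that yield control explicitly).
    import asyncio
    import threading

    def worker(name):
        # With threads, the scheduler may pre-empt this function at any point.
        for i in range(3):
            print(f"thread {name}: step {i}")

    async def agent(name):
        # With coroutines, control is handed over only at an explicit await.
        for i in range(3):
            print(f"agent {name}: step {i}")
            await asyncio.sleep(0)   # cooperative yield point

    async def cooperative_demo():
        await asyncio.gather(agent("A"), agent("B"))

    if __name__ == "__main__":
        threads = [threading.Thread(target=worker, args=(n,)) for n in ("A", "B")]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        asyncio.run(cooperative_demo())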
As its name suggests, an operating system (OS) is a collection of programs that operate the personal computer (PC). Its primary purpose is to support the programs that actually do the work the user is interested in, and to allow competing programs to share the resources of the computer. However, the OS also controls the inner workings of the computer, acting as a traffic manager that controls the flow of data through the system and starts and stops processes, and as the means through which software accesses the hardware and system software. In addition, it provides routines for device control, provides for the management, scheduling and interaction of tasks, and maintains system integrity. It also provides the user interface, through which commands are issued to the system software, and utilities for managing files and documents created by users, developing programs and software, communicating with users on other computer systems, and managing user requirements for programs, storage space and priority.

There are a number of different types of operating systems with varying degrees of complexity. A system such as DOS can be relatively simple and minimalistic, while others, like UNIX, can be somewhat more complicated. Some systems run only a single process at a time (DOS), while others run multiple processes at once (UNIX). In reality, it is not possible for a single processor to run multiple processes simultaneously: the processor runs one process for a short period of time, then switches to the next process, and so on. Because the processor executes millions of instructions per second, this gives the appearance of many processes running at the same time.
Multithreaded GUI (Graphical User Interface) programs are able to respond to the user while performing other tasks.
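A minimal sketch of this idea, using a plain loop to stand in for a GUI event loop (a simplifying assumption made here to keep the example self-contained), shows a long-running task moved to a background thread so the user-facing loop can keep responding.

    # Sketch: a long-running task runs on a background thread so the
    # (simulated) user-facing loop can keep responding while the work proceeds.
    import threading
    import time

    def long_task(done):
        time.sleep(2)            # stands in for an expensive computation
        done.set()

    if __name__ == "__main__":
        done = threading.Event()
        threading.Thread(target=long_task, args=(done,), daemon=True).start()

        while not done.is_set():
            print("interface still responsive...")   # keeps handling the user
            time.sleep(0.5)
        print("background task finished")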
Computer technologies have brought about an unprecedented revolution in every field; today not a single field is beyond their reach. These technologies have not only satisfied existing needs but have in turn created even greater desires, which makes the area all the more challenging, as one has to stay at the forefront to keep pace with the changes. The field of computer science is the one that entices me the most, and with the knowledge I have acquired I would like to contribute innovative ideas that improve efficiency in every sphere of life. Having acquired the fundamental concepts securely and adequately, I now wish to pursue graduate education at your illustrious and internationally accredited university. By studying at the University of California, Berkeley, I would like to contribute to, as well as further my knowledge in, the fields of Artificial Intelligence and Parallel Computing.
Cormen, T. H., Leiserson, C. E., Rivest, R. L., and Stein, C. (2001). Introduction to Algorithms, 2nd edition, MIT Press and McGraw-Hill, ISBN 0-262-03293-7, Section 2.3: Designing algorithms, pp. 27–37.
MapReduce (Dean and Ghemawat 2004) is a programming model aimed at processing large volumes of data, in which the user specifies the application as a sequence of MapReduce operations. The tasks of parallelism, fault tolerance, data distribution and load balancing are handled by the MapReduce system, which simplifies the development process. From the standpoint of distributed systems, MapReduce offers transparency of replication, distribution and synchronization.
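The following in-memory Python sketch, an illustration written for this text rather than the original MapReduce implementation, shows the shape of the programming model: the user supplies a map function and a reduce function, while grouping by key (and, in a real system, distribution and fault tolerance) is handled by the framework, here simulated in a single process.

    # In-memory sketch of the MapReduce programming model: the user writes
    # the map and reduce steps; distribution and fault tolerance belong to the
    # system (here simply simulated in one process).
    from collections import defaultdict

    def map_phase(document):
        # Emit (word, 1) pairs for every word in the document.
        return [(word, 1) for word in document.split()]

    def reduce_phase(word, counts):
        # Combine all counts emitted for the same word.
        return word, sum(counts)

    if __name__ == "__main__":
        documents = ["map reduce model", "map large data", "reduce data volume"]

        # Shuffle step: group intermediate pairs by key.
        grouped = defaultdict(list)
        for doc in documents:
            for word, count in map_phase(doc):
                grouped[word].append(count)

        result = dict(reduce_phase(w, c) for w, c in grouped.items())
        print(result)   # e.g. {'map': 2, 'reduce': 2, 'data': 2, ...}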
As discussed in Section 1.3, there are many scheduling algorithms, each with its own parameters, so selecting an algorithm can be difficult. The first problem is defining the criteria to be used in selecting an algorithm. The criteria are often defined in terms of CPU utilization, waiting time, response time, or throughput, and to select an algorithm we must first define the relative importance of these measures; our criteria may well include several of them.
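As a small worked example of one such criterion, the Python sketch below computes the average waiting time of a first-come, first-served schedule; the burst times are made-up values used only for illustration.

    # Sketch: computing one common selection criterion (average waiting time)
    # for a first-come-first-served schedule; the burst times are hypothetical.
    def fcfs_waiting_times(burst_times):
        waiting, elapsed = [], 0
        for burst in burst_times:
            waiting.append(elapsed)   # each process waits for all earlier ones
            elapsed += burst
        return waiting

    if __name__ == "__main__":
        bursts = [24, 3, 3]                     # hypothetical CPU bursts (ms)
        waits = fcfs_waiting_times(bursts)
        print(waits, sum(waits) / len(waits))   # [0, 24, 27] -> average 17.0 ms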
Murakami, K., Inoue, K., and Miyajima, H. (1997). "Parallel Processing RAM (PPRAM)," Japan-Germany Forum on Information Technology, Nov. 1997.
When we think of modern-day technology, such as computers or two-way pagers, we know that it is all an effort to save time. No longer do we have to go to the library for a small amount of information; now we can just log on to the internet. No longer do we have to waste time going to the store to buy products; we can just log on to the internet and buy them there. No longer do we have to pick up the telephone to call numerous people to convey a message; we just e-mail everyone. So you see, the computer is meant to be a time saver, a device that allows you to execute tasks more efficiently and more quickly.
Graham, J. R. (2008). "Comparing parallel programming models," Journal of Computing Sciences in Colleges, 23(6):65–71.
The computer has changed modern society, making calculations much quicker than any person could. It is used in almost every business because of its efficiency in holding substantial amounts of information.
When an executable file is loaded into memory, it is called a process. A process is an instance of a program in execution. It contains its current activity, such as its program code, and also the contents of the processor's registers. It generally includes the process stack, which contains temporary data, and a data section, which contains global variables. During runtime, it may include a heap, or dynamically allocated memory. In contrast with a program, a process is "an active entity, with a program counter specifying the next instruction to execute and a set of associated resources" (Operating System Concepts 106). A process that executes a single instance of a thread is single-threaded; multiple threads can exist within a process, allowing more than one task to be performed at a time. The threads of a multithreaded process may share resources such as the code, data, and file sections, but they do not share resources such as registers and the stack.
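A short Python sketch can illustrate this sharing: the threads below all update one module-level counter (shared data of the process), while each thread still has its own stack and register state; the counter value and thread count are arbitrary choices for the example.

    # Sketch: threads of one process share the data section (here, a module-level
    # counter), while each thread keeps its own stack and register state.
    import threading

    counter = 0                      # shared data: visible to every thread
    lock = threading.Lock()

    def increment(times):
        global counter
        for _ in range(times):
            with lock:               # protect the shared variable from races
                counter += 1

    if __name__ == "__main__":
        threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(counter)               # 40000: all threads updated the same data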
Computer engineers design and create computer systems and other innovative devices. They use their extensive knowledge of hardware and software design and computer programming to make platforms and computing applications more efficient and effective, and they are improving the ability of computers to "see" and "think". By incorporating the latest inventions, computer engineers can develop various kinds of computer hardware, design and program useful applications, and increase the capability and efficiency of networks and communication systems.
Previous work has been done to facilitate real-time computing in heterogeneous systems. Huh et al. proposed a solution to the dynamic resource management problem in real-time heterogeneous systems. Group communication services (GCS) have become important as building blocks for fault-tolerant distributed systems: such services enable processors located in a fault-prone network to operate collectively as a group, using the services to multicast messages to the group. Our distributed problem has an analogous counterpart in the shared-memory model of computation, called the collect problem [4].