2.1 LITERATURE REVIEW ON MULTIPLIERS
The multiplier is a fundamental component of many high-performance systems such as FIR filters, microprocessors, digital signal processors, and multimedia applications. It plays a major role in overall system performance [3] [4] [5] because it is generally the slowest element in the system; furthermore, it occupies a large silicon area. Optimizing speed and area are therefore major design issues, although area and speed are conflicting performance constraints.
Here we are concerned with digital multipliers, which multiply two binary numbers. In the new age, where digital systems are displacing analog systems, data processing and many other requirements create a huge demand for multipliers that are fast, small, and low-power. Because there is no direct hardware primitive for multiplication, multipliers are built from adders, which makes the multiplier the slowest and most area-consuming part of any processor or ALU.
Early work began with these basic algorithms, which were later refined into the array multiplier and the parallel multiplier; these were enhanced further still, and other algorithms followed. On the basis of how they process data, multipliers are classified as serial, parallel, and serial-parallel. Parallel multipliers fall into two main classes: array multipliers and tree multipliers. C. S. Wallace proposed a tree multiplier architecture that performs high-speed multiplication. The Baugh-Wooley multiplier [6, 18] is also an array multiplier but can perfo...
...mpared to Booth encoded radix-4 multiplier [7].
First, they extended GF-ACG to describe any GF based on a normal basis (NB) in addition to the polynomial basis (PB). They then presented a formal design of Massey-Omura parallel multipliers using the extended GF-ACG and showed that the verification time was greatly reduced compared with conventional methods; for example, a multiplier over GF(2^64) was verified within 7 minutes. As a further application, they designed NB-based exponentiation circuits and evaluated their performance against the corresponding PB-based circuits. The proposed method is applicable to both binary and multiple-valued implementations, since the GF-ACG description is technology-independent except for the lowest-level description. The formal design of GF arithmetic circuits based on both PB and NB remains future work [9].
For over thirty years, since the beginning of the computing age, Gordon Moore's prediction that the number of transistors on a chip doubles every eighteen months has held true (Leyden). By its very nature, however, this trend cannot continue indefinitely. Although the size of the transistor has drastically decreased in the past fifty years, it cannot get much smaller, and therefore a computer cannot get much faster. The limits of the transistor are becoming more and more apparent in the processor speeds of Intel and AMD silicon chips (Moore's Law). One reason chips now run slower than they could is the internal clock of the computer. The clock organizes all of the operation processing and the memory speeds so that information arrives at the same time and the processor completes its tasks uniformly. The faster a chip runs (in MHz), the faster this clock must tick; with a 1.0 GHz chip, the clock ticks a billion times a second (Ball). This becomes wasted energy, and the internal clock limits the processor. These two problems in modern computing will lead to the eventual end of Moore's Law. But are there any new areas of chip design engineering beyond the conventional silicon chip? In fact, two such designs that could revolutionize the computer industry are multi-threading (Copeland) and asynchronous chip design (Old Tricks). The modern silicon processor cannot keep up with the demands placed on it today. With the limit of transistor size approaching and the clock-speed bottleneck increasing, these two new chip designs could completely scrap the old computer industry and recreate it anew.
The history of computers is an amazing story filled with interesting statistics. “The first computer was invented by a man named Konrad Zuse. He was a German construction engineer, and he used the machine mainly for mathematic calculations and repetition” (Bellis, Inventors of Modern Computer). The invention shocked the world; it inspired people to start the development of computers. Soon after,
This would multiply the value stored in %eax by the operand of mul, which in this case would be 10 * 10. The result is then implicitly stored in EDX:EAX. The result is stored over a span of two registers because it has the potential to be considerably larger than the operands, possibly exceeding the capacity of a single register (as an interesting side note, this is also how floating-point values are stored in some cases).
In today’s world, most real-time data-processing and prototyping applications use FPGAs. Demand for FPGAs is increasing because of their performance and reprogrammability. The basic building block of an FPGA is the logic block (the Configurable Logic Block, or Versatile block, depending on the vendor). Many applications and vendors quote FPGA utilization, or FPGA density, in terms of gate counts.
Not only was the speed revolutionary, but it also had the capability of multitasking, meaning that it could calculate data for several applications at once. Before the 286, multitasking was possible only in the most advanced processors at very slow speeds.
Let me start off with some background information on the ALU. The Arithmetic Logic Unit (ALU) is a digital circuit which performs arithmetic and logic operations. It does basic arithmetic such as addition, subtraction, multiplication, and division. The ALU also has the ability to do logic operations, such as OR, AND, NOT, and many others. The ALU is what does most of the operations that a Central Processing Unit (CPU) performs. Because of its ability to do these tasks, the ALU is considered the cornerstone of the CPU. Now that we have gone over the background information on the ALU, let me go into describing the processing and interdependencies of the ALU.
Recently, methodologies for implementing the lifting-based DWT have been proposed, because the lifting-based DWT has many advantages over the convolution-based one [3-5]. The lifting structure largely reduces the number of multiplications and accumulations, and filter-bank architectures can take advantage of many low-power constant-multiplication algorithms. FPGAs are generally used in these systems due to their low cost and high computing speed, together with their reprogrammability.
The time structure of a computer is described as this: “the central processor of the computer contains within it an electronic clock, whose extremely rapid pulses determine when one operation has ended and another is to begin” (J.D. Bolter). This speed is measured in megahertz, or more recently in gigahertz; the faster the clock, the more tasks can be executed in less time.
In the past few decades, one field of engineering in particular has stood out in terms of development and commercialisation: electronics and computation. In 1965, when Moore’s Law was first established (Gordon E. Moore, 1965: "Cramming more components onto integrated circuits"), it stated that the number of transistors (the electronic component by which the processing and memory capabilities of a microchip are measured) would double every 2 years. This prediction held true even as man ushered in the new millennium. We have gone from computers that could perform one calculation in one second to a super-computer (the one at Oak Ridge National Lab) that can perform 1 quadrillion (10^15) mathematical calculations per second. Thus, it is only obvious that this field would also have s...
The computer has changed modern society, making calculations much quicker than any person could. It is used in almost every business because of its efficiency in holding substantial amounts of information.
What is math? If you had asked me that question at the beginning of the semester, then my answer would have been something like: “math is about numbers, letters, and equations.” Now, however, thirteen weeks later, I have come to realize a new definition of what math is. Math includes numbers, letters, and equations, but it is also so much more than that—math is a way of thinking, a method of solving problems and explaining arguments, a foundation upon which modern society is built, a structure that nature is patterned by…and math is everywhere.
"programming" rules that the user must memorize, all ordinary arithmetic operations can be performed (Soma, 14). The next innovation in computers took place in 1642, when Blaise Pascal invented the first “digital calculating machine”. It could only add numbers, which had to be entered by turning dials. It was designed to help Pascal’s father, who
Why do we really need to learn about you every single day for almost a whole hour? Math is important but not to the point where we need to learn how to find the square root of 625 and then subtract X to get how many apples are in a barrel. We will probably never use you in our lives.
The history of the computer dates back all the way to prehistoric times. The first step towards the development of the computer, the abacus, was developed in Babylonia in 500 B.C. and functioned as a simple counting tool. It was not until thousands of years later that the first calculator was produced. In 1623, the first mechanical calculator was invented by Wilhelm Schickard; the “Calculating Clock,” as it was often called, “performed its operations by wheels, which worked similar to a car’s odometer” (Evolution, 1). Still, there had not yet been anything invented that could even be characterized as a computer. Finally, in 1625 the slide rule was created, becoming “the first analog computer of the modern ages” (Evolution, 1). One of the biggest breakthroughs came from Blaise Pascal in 1642, who invented a mechanical calculator whose main function was adding and subtracting numbers. Years later, Gottfried Leibniz improved Pascal’s model by allowing it to also perform operations such as multiplying, dividing, and taking square roots.
Supercomputing was founded in the 1960s by Seymour Roger Cray at Control Data Corporation, and supercomputers have been used for science and design ever since. The supercomputer is the fastest class of computer, faster than embedded computers, personal computers, servers, and mainframes. Supercomputers have high speed and a large number of processors, and they are used by large companies and corporations.