A Comparison of Sorting Algorithms and their Efficiency
Introduction
Sorting algorithms are used every day to perform a wide range of tasks, yet their importance is often not realised. This report explores four sorting algorithms: the insertion sort, the double insertion sort, the recursive insertion sort and the advanced bucket sort. It discusses the experimental work carried out on these four algorithms, describes the process each algorithm uses to sort a list of objects, and examines the strengths, weaknesses and efficiency of each. The report is relevant because sorting algorithms are used in many programs, and it is important to know which algorithm to use in a given situation. Moreover, as BSc Information Science students it is important to understand sorting algorithms, as they may be required in the work environment.
Related work
Basic insertionSort
The following section investigates the basic insertionSort. This sorting algorithm sorts a list of items by repeatedly inserting an element into its correct position in the list until the whole list is sorted. Let us look at it in detail.
The basic idea of the insertionSort is to divide the list into two parts: a sorted portion and an unsorted portion. At each step of the algorithm, starting at the first element in the list, an element is moved from the unsorted portion to the sorted portion, until eventually the whole list is sorted. Sedgewick and Wayne (2011) state that, when implementing the algorithm on a computer, space must first be made for the current element by moving the larger elements one position to the right, so that the current element can be inserted into its correct position.
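As an illustration (not taken from the report itself), the process described above can be sketched in Python. The function name and variable names are our own; the shifting of larger elements to the right follows the description attributed to Sedgewick and Wayne (2011):

```python
def insertion_sort(items):
    """Sort a list in place using the basic insertion sort.

    At each step, items[0:i] is the sorted portion and items[i:]
    is the unsorted portion.
    """
    for i in range(1, len(items)):
        current = items[i]
        j = i
        # Make space by moving larger elements one position to the right.
        while j > 0 and items[j - 1] > current:
            items[j] = items[j - 1]
            j -= 1
        # Insert the current element into its correct position.
        items[j] = current
    return items
```

For example, `insertion_sort([5, 2, 4, 1, 3])` returns `[1, 2, 3, 4, 5]`. Each pass grows the sorted portion by one element, which is why the algorithm performs well on lists that are already nearly sorted.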
In conclusion, the experiment and report successfully described the performance of the sorting algorithms and how they work, and also provoked critical thinking when contradictions were encountered.
Works Cited
Preiss, B. R., 1997. Example - Bucket Sort. http://www.brpreiss.com/books/opus4/html/page74.html.
Carrano, F. M., 2012. Data Structures and Abstractions with Java. 3rd ed. New Jersey: Prentice Hall.
Chavey, D. P., 2010. Double Sorting: Testing Their Sorting Skills. Schedule of Conferences and Symposia, Volume 1, pp. 382-384.
Hirschberg, D. S., 1978. Fast parallel sorting algorithms. Communications of the ACM, 21(8).
Sedgewick, R. and Wayne, K., 2011. Algorithms. 4th ed. New Jersey: Pearson Education.
Stephens, R., 2013. Essential Algorithms: A Practical Approach to Computer Algorithms. 1st ed. Indianapolis: John Wiley & Sons.