Analysis of Sorting Algorithms


Introduction

  • This page looks at benchmarks I have conducted of my own implementations of various sorting algorithms.
  • On each chart, the x-axis measures the input size of the list and the y-axis measures time in nanoseconds.
  • All the algorithms were tested on arrays.
  • The graph above shows the running times of the algorithms in the average case.
  • You cannot see merge sort and heap sort as they are obscured by quicksort on the chart.
  • In addition, optimised bubble sort refers to an implementation of bubble sort where the algorithm exits early once the list is sorted (a minimal sketch follows this list).
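To make the optimisation concrete, here is a minimal sketch of such an optimised bubble sort in Java (illustrative only, and not necessarily line-for-line the implementation I benchmarked):

```java
// Optimised bubble sort: exits early once a full pass makes no swaps,
// i.e. the list is already sorted.
static void optimisedBubbleSort(int[] a) {
    for (int end = a.length - 1; end > 0; end--) {
        boolean swapped = false;
        for (int i = 0; i < end; i++) {
            if (a[i] > a[i + 1]) {
                int tmp = a[i];
                a[i] = a[i + 1];
                a[i + 1] = tmp;
                swapped = true;
            }
        }
        if (!swapped) {
            return; // no swaps in this pass, so the array is sorted
        }
    }
}
```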



$\Theta(n^2)$ Average Case Algorithms

  • As you can see, bubble sort and optimised bubble sort grow at the fastest rate, and the optimised version does little to improve performance in the average case.
  • Although bubble sort, insertion sort and selection sort grow asymptotically at the same rate, there is a significant constant factor between bubble sort and the other two, whereas insertion sort is only marginally better than selection sort.
  • These experiments support other findings that insertion sort beats both bubble sort and selection sort in practice.
  • It is interesting that insertion sort beats bubble sort, as they both perform the same number of swaps for the same array of size $n$.
  • This is because both perform a swap for every inversion in the list. The number of inversions is the number of pairs of elements $(x, y)$ in the input list where $x > y$ and yet $x$ appears before $y$ in the list. In any list there are at most $\binom{n}{2}$ inversions, hence in the average list there are $\frac{1}{2}\binom{n}{2}$, i.e. $\frac{n(n-1)}{4}$, inversions.
  • The difference between bubble sort and insertion sort lies in the number of comparisons. Bubble sort makes $\frac{n(n-1)}{2}$ comparisons, whereas insertion sort, when placing each element, makes one comparison per swap, plus at most one extra comparison to determine that no further swaps are needed. This is in contrast to bubble sort, which makes many comparisons that do not result in a swap.
  • Selection sort, in contrast, makes the same number of comparisons as bubble sort, but only $n - 1$ swaps. Since selection sort works by storing the index of the smallest element seen so far, these assignments also need to be taken into account. The question is: when trying to find the minimum of $n$ items, scanning from the beginning, how many times on average will a new minimum be found? I'm actually quite proud that I figured this out by myself, and proved the result by induction, before looking it up.
  • The result is actually quite intuitive. The base case is a single item, in which case there will certainly be an assignment, since no minimum exists yet, so the expected number of assignments is $E[X_1] = 1$. The inductive case is when there are $k + 1$ items: the expected number of assignments is the expected number for $k$ items, plus the probability that the last item is the smallest so far, which is $\frac{1}{k+1}$. So $E[X_{k+1}] = E[X_k] + \frac{1}{k+1}$. This gives the series $E[X_n] = \frac{1}{1} + \frac{1}{2} + \dots + \frac{1}{n}$. I discovered that the numbers in this series are known as the harmonic numbers, $H_n$.
  • One can therefore reason that the total number of assignments, summed over selection sort's passes, which find the minimum of $n, n-1, \dots, 2$ items in turn, is $\sum_{k=2}^{n} H_k$, where there are $n$ items in the list. The harmonic numbers actually tend towards $\ln n + \gamma$, where $\gamma$ is a constant. Clearly an upper bound for the number of assignments is $O(n \ln n)$. A lower bound can be found by integrating $\int_{1}^{n-2} (\ln x + \gamma)\,dx$ (I chose a lower limit of 1, rather than 0, because $\ln 0$ is undefined, and I'm trying to find a lower bound anyway). This gives $\Omega(n \ln n)$ assignments, thus a tight bound of $\Theta(n \ln n)$, and, by the change of base rule, $\Theta(n \lg n)$ assignments.
  • It should be emphasised that I am merely bounding the number of assignments, excluding swaps, for pure academic enjoyment; the average-case running time of selection sort is still an unfortunate $\Theta(n^2)$ (a small empirical check of the harmonic-number result follows this list).
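For anyone who wants to check the harmonic-number result empirically, here is a small hypothetical program (it is not part of my benchmark utility) that counts how often the running minimum is reassigned when scanning random data, and compares the average with $H_n$:

```java
import java.util.Random;

// Empirical check: the expected number of times a new minimum is found
// while scanning n random items should be close to H_n.
public class HarmonicCheck {
    public static void main(String[] args) {
        int n = 1000, trials = 10000;
        Random rng = new Random();
        long updates = 0;
        for (int t = 0; t < trials; t++) {
            double min = Double.POSITIVE_INFINITY;
            for (int i = 0; i < n; i++) {
                double x = rng.nextDouble();
                if (x < min) { // new minimum found: one assignment
                    min = x;
                    updates++;
                }
            }
        }
        double hn = 0;
        for (int k = 1; k <= n; k++) hn += 1.0 / k;
        System.out.printf("average updates: %.4f, H_n: %.4f%n",
                (double) updates / trials, hn);
    }
}
```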



$\Theta(n \lg n)$ Average Case Algorithms

  • In this benchmark, I added Java's native sorting method, Arrays.sort(). As of my version of Java (1.8), Arrays.sort() uses two different sorting algorithms: an implementation of dual-pivot quicksort for primitives, which is what will have been used in these benchmarks, and an implementation of Timsort for objects.
  • As you can see from the graphs, all of these algorithms are highly efficient, although in practice quicksort appears slightly faster than merge sort, and merge sort faster than heap sort. It should also be noted that Java's dual-pivot quicksort is slightly faster than my implementation of quicksort.
  • It is nice to know that my quicksort implementation, which came straight from CLRS and uses Nico Lomuto's partitioning scheme, comes very close to the performance of Java's native sorting algorithm (a sketch of the scheme follows this list).
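For reference, Lomuto's partitioning scheme, as presented in CLRS, looks roughly like this (a sketch, not a verbatim copy of my implementation):

```java
// Lomuto partitioning: the last element is the pivot; elements less
// than or equal to the pivot are swapped into the left portion.
static int partition(int[] a, int lo, int hi) {
    int pivot = a[hi];
    int i = lo - 1;
    for (int j = lo; j < hi; j++) {
        if (a[j] <= pivot) {
            i++;
            int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
        }
    }
    int tmp = a[i + 1]; a[i + 1] = a[hi]; a[hi] = tmp;
    return i + 1; // final position of the pivot
}

static void quicksort(int[] a, int lo, int hi) {
    if (lo < hi) {
        int p = partition(a, lo, hi);
        quicksort(a, lo, p - 1);
        quicksort(a, p + 1, hi);
    }
}
```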



Running Time with Input Already in Order

  • In the first graph above, optimised bubble sort, insertion sort and heap sort are obscured by merge sort.
  • In the previous benchmarks, we saw that optimised bubble sort did little to improve on bubble sort in the average case, although it does appear to help when the input is nearly sorted.
  • When the input is completely sorted, optimised bubble sort achieves its best-case time complexity of $\Theta(n)$.
  • Insertion sort is another algorithm bounded by $\Theta(n^2)$ in the average case that has a best-case time complexity of $\Theta(n)$.
  • In fact, in these cases insertion sort and optimised bubble sort beat heap sort and merge sort, which are both $\Theta(n \lg n)$ even in the best case.
  • The opposite happens with quicksort. Whereas for some algorithms an already sorted input elicits the best case, here it elicits the worst case. This is because the list is partitioned in such a way that all the elements are less than or equal to the pivot in each invocation of quicksort, which results in a recursion tree of depth $\Theta(n)$.
  • In fact, when I originally ran the benchmarks, a stack overflow error occurred, because the depth of the recursion tree resulted in $\Theta(n)$ stack frames.
  • I changed my algorithm to use iteration in place of the second invocation of quicksort, and to recurse only on the smaller part of the list after partitioning. This means that with every stack frame at least half of the remaining list is eliminated, leading to $\Theta(\lg n)$ stack frames and therefore a space complexity of $\Theta(\lg n)$ (a sketch of this change follows this list).
  • Unlike my implementation of quicksort, Java's implementation of dual-pivot quicksort was extremely efficient and did not suffer from the same problems.
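A sketch of that change, assuming the same partition routine sketched earlier:

```java
// Quicksort with the second recursive call replaced by iteration.
// Recursing only on the smaller side keeps the stack depth at Θ(lg n).
static void quicksortSmallFirst(int[] a, int lo, int hi) {
    while (lo < hi) {
        int p = partition(a, lo, hi); // Lomuto partition, as above
        if (p - lo < hi - p) {
            quicksortSmallFirst(a, lo, p - 1); // left side is smaller
            lo = p + 1;                        // iterate on the right
        } else {
            quicksortSmallFirst(a, p + 1, hi); // right side is smaller
            hi = p - 1;                        // iterate on the left
        }
    }
}
```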



Running Time with Input in Reverse Order

  • In the first graph above, heap sort is obscured by merge sort.
  • As when the input is completely sorted, quicksort's worst-case time complexity of $\Theta(n^2)$ is elicited.
  • Unlike when the input is completely in order, the time complexity of insertion sort and optimised bubble sort is the same as in the average case.
  • Despite the fact that quicksort's worst-case behaviour is elicited, Java's dual-pivot quicksort manages running times similar to those when the input is already sorted.

Other Considerations

  • Of course, time complexity is not the only consideration when designing and evaluating algorithms. Correctness is an important property, and the benchmarks I have provided would not be very helpful if the algorithms were not correct. In the Git repository containing my implementations, there is a utility program to test the correctness of my sorting algorithms. Of course, a successful test on a list of even a trillion items does not constitute a cast-iron proof, but a series of successful tests does inspire confidence.
  • One property I particularly like is that insertion sort only requires serial access to its input, whereas other sorting algorithms require random access. This means it can operate on streams of data, e.g. data could be sorted as it is being received over a network. It also means it works easily on structures such as linked lists, although it would still be asymptotically faster to convert them to arrays to be sorted (a sketch of the streaming idea follows this list).
  • Another property that has already been touched upon is space complexity. Bubble sort, insertion sort, selection sort and heap sort have constant space complexity, $\Theta(1)$. One could be forgiven for thinking quicksort also has constant space complexity, but as already discussed, a good implementation will have a space complexity of $\Theta(\lg n)$ due to space on the stack. However, an implementation of quicksort that does not avoid calling itself on the larger subarray resulting from the partition can have $\Theta(n)$ space complexity.
  • Merge sort has $\Theta(n)$ space complexity, used to merge the two subarrays, although the space is not in the form of stack frames, so a stack overflow error won't occur.
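To illustrate the serial-access point, here is a hypothetical sketch (not from my repository) of insertion sort consuming its input through an Iterator, so the elements could just as well be arriving over a network:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Insertion sort over a stream: each arriving element is inserted into
// its correct place among the elements received so far.
static List<Integer> sortStream(Iterator<Integer> stream) {
    List<Integer> sorted = new ArrayList<>();
    while (stream.hasNext()) {
        int x = stream.next();
        int i = sorted.size();
        while (i > 0 && sorted.get(i - 1) > x) {
            i--; // scan back past larger elements
        }
        sorted.add(i, x); // insert, shifting larger elements right
    }
    return sorted;
}
```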

Methodology

  • The benchmarks were run on an Intel Core i5 processor.
  • The implementations are from my algorithms-java Git repository.
  • My benchmark utility, also written in Java, is in the same repository; it generates CSV files of the results of the various benchmarks I conducted.
  • From these, I created the line charts in Excel. I found Excel's ability to save a chart's formatting as a template and reuse it very useful.

Concluding Remarks

  • Overall, in practice, when the input is random, quicksort seems to be the best sorting algorithm. There exists a version of quicksort where the input is randomly shuffled before being sorted, so that a user cannot forcefully elicit the worst-case behaviour by maliciously constructing a list for that purpose (a sketch follows at the end of this page).
  • I have come to have particular admiration for Java's dual-pivot quicksort, which not only beats my own implementation of quicksort in the average case, albeit by a slight margin, but also, I think it is fair to say, destroys the standard quicksort algorithm when the list is sorted or in reverse order.
  • In the future, if I were to perform this exercise again, I would like to use a method of constructing test data with a specific number of inversions, e.g. testing the algorithms on input lists with $\frac{3}{4}$ of the possible inversions. I would also like to try other variations of the algorithms, e.g. different ways of partitioning in quicksort. I would also like to benchmark radix sort against, say, quicksort for a list of $n$ integers with a specific number of digits.
  • I should also like to commend Introduction to Algorithms by Cormen, Leiserson, Rivest and Stein. Though I am only part of the way through it (at the time of writing), it has given me much to contemplate.
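For completeness, the randomised variant I have in mind simply shuffles the input (here with a standard Fisher-Yates shuffle) before handing it to the quicksort sketched earlier:

```java
import java.util.Random;

// Shuffling first makes every ordering of the input equally likely, so
// no fixed, maliciously constructed list reliably triggers the
// quadratic worst case.
static void shuffle(int[] a) {
    Random rng = new Random();
    for (int i = a.length - 1; i > 0; i--) {
        int j = rng.nextInt(i + 1); // uniform index in [0, i]
        int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
    }
}

static void randomisedQuicksort(int[] a) {
    shuffle(a);
    quicksort(a, 0, a.length - 1); // quicksort as sketched earlier
}
```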