Name            Average       Worst         Memory    Stable  Method
Bubble sort     O(n²)         O(n²)         O(1)      Yes     Exchanging
Selection sort  O(n²)         O(n²)         O(1)      No      Selection
Insertion sort  O(n²)         O(n²)         O(1)      Yes     Insertion
Merge sort      O(n log n)    O(n log n)    O(n)      Yes     Merging
Quicksort       O(n log n)    O(n²)         O(log n)  No      Partitioning
Heapsort        O(n log n)    O(n log n)    O(1)      No      Selection
Bubble Sort
Bubble sort is a simple sorting algorithm. It works by repeatedly stepping through the list to be sorted, comparing each pair of adjacent items and swapping them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which indicates that the list is sorted. Because it only uses comparisons to operate on elements, it is a comparison sort.

Step-by-Step Example

Assume we have the array "5 1 4 2 8" and we want to sort it from the lowest number to the greatest number using bubble sort. In each step, adjacent elements are compared and swapped if out of order.

First pass:
( 5 1 4 2 8 ) → ( 1 5 4 2 8 )   Compare 5 and 1; since 5 > 1, swap them.
( 1 5 4 2 8 ) → ( 1 4 5 2 8 )   Swap 5 and 4.
( 1 4 5 2 8 ) → ( 1 4 2 5 8 )   Swap 5 and 2.
( 1 4 2 5 8 ) → ( 1 4 2 5 8 )   Since 5 < 8, no swap is needed.

Second pass:
( 1 4 2 5 8 ) → ( 1 4 2 5 8 )
( 1 4 2 5 8 ) → ( 1 2 4 5 8 )   Swap 4 and 2.
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )
( 1 2 4 5 8 ) → ( 1 2 4 5 8 )

The array is now sorted, but the algorithm does not know that until it completes one whole pass without swapping.

Third pass: no pair is out of order, so no swaps occur and the algorithm terminates.

Pseudocode Implementation

procedure bubbleSort( A : list of sortable items ) defined as:
  repeat
    swapped := false
    for each i in 0 to length(A) - 2 inclusive do:
      if A[i] > A[i+1] then
        swap( A[i], A[i+1] )
        swapped := true
      end if
    end for
  while swapped
end procedure

An Improved Alternative Implementation

procedure bubbleSort( A : list of sortable items ) defined as:
  n := length( A )
  repeat
    swapped := false
    for each i in 0 to n - 2 inclusive do:
      if A[ i ] > A[ i + 1 ] then
        swap( A[ i ], A[ i + 1 ] )
        swapped := true
      end if
    end for
    n := n - 1
  while swapped
end procedure

Here, instead of doing n(n-1) comparisons, we reduce it to (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons.
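For reference, the improved pseudocode translates directly into Java; the following is an illustrative sketch (the method name bubbleSort and the use of an int array are just for this example):

```java
// Bubble sort with the shrinking-bound optimization: after each pass,
// the largest unsorted element has bubbled to the end, so the next
// pass can stop one position earlier.
static void bubbleSort(int[] a) {
  int n = a.length;
  boolean swapped;
  do {
    swapped = false;
    for (int i = 0; i < n - 1; i++) {
      if (a[i] > a[i + 1]) {
        int tmp = a[i];          // swap the out-of-order pair
        a[i] = a[i + 1];
        a[i + 1] = tmp;
        swapped = true;
      }
    }
    n--;                         // a[n-1] is now in its final position
  } while (swapped);
}
```

A pass that performs no swaps ends the loop immediately, which is what makes the best case linear.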


  • Worst case performance: O(n²)
  • Best case performance: O(n)
  • Average case performance: O(n²)
  • Worst case space complexity: O(n) total, O(1) auxiliary

Bubble sort is not a practical sorting algorithm when n is large.


Selection Sort
Selection sort is an in-place comparison sort. It has O(n²) complexity, making it inefficient on large lists, and generally performs worse than the similar insertion sort. Selection sort is noted for its simplicity, and also has performance advantages over more complicated algorithms in certain situations.


  1. Find the minimum value in the list
  2. Swap it with the value in the first position
  3. Repeat the steps above for the remainder of the list (starting at the second position and advancing each time)

Effectively, we divide the list into two parts: the sublist of items already sorted and the sublist of items remaining to be sorted.

Example: consider sorting "64 25 12 22 11". Each pass swaps the minimum of the unsorted remainder into place:

64 25 12 22 11 → 11 25 12 22 64   (swap 11 with 64)
11 25 12 22 64 → 11 12 25 22 64   (swap 12 with 25)
11 12 25 22 64 → 11 12 22 25 64   (swap 22 with 25)
11 12 22 25 64                    (the remaining elements are already in place)

Java Implementation of the Algorithm

void selectionSort(int[] a) {
  for (int i = 0; i < a.length - 1; i++) {
    // Find the index of the smallest element in a[i..a.length-1].
    int min = i;
    for (int j = i + 1; j < a.length; j++) {
      if (a[j] < a[min]) {
        min = j;
      }
    }
    // Swap the minimum into position i.
    if (i != min) {
      int swap = a[i];
      a[i] = a[min];
      a[min] = swap;
    }
  }
}


  • Worst case performance: O(n²)
  • Best case performance: O(n²)
  • Average case performance: O(n²)
  • Worst case space complexity: O(n) total, O(1) auxiliary
Insertion Sort
Insertion sort is a comparison sort in which the sorted array (or list) is built one entry at a time. It is much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort. However, insertion sort provides several advantages:
  • Simple implementation
  • Efficient for small data sets
  • Adaptive, i.e., efficient for data sets that are already substantially sorted: the time complexity is O(n+d), where d is the number of inversions
  • More efficient in practice than most other simple quadratic algorithms such as selection sort or bubble sort: the average running time is n²/4, and the running time is linear in the best case
  • Stable, i.e., does not change the relative order of elements with equal keys
  • In-place, i.e., only requires a constant amount O(1) of additional memory space
  • Online, i.e., can sort a list as it receives it


Every iteration of insertion sort removes an element from the input data, inserting it into the correct position in the already-sorted list, until no input elements remain. The choice of which element to remove from the input is arbitrary, and can be made using almost any choice algorithm.

Sorting is typically done in-place. After k iterations, the resulting array has the property that its first k+1 entries are sorted. In each iteration, the first remaining entry of the input is removed and inserted into the result at the correct position, thus extending the result.

Example: consider sorting "64 25 12 22 11".

Pseudocode Implementation:

insertionSort(array A)
  for i := 1 to length[A] - 1 do
    value := A[i];
    j := i - 1;
    while j >= 0 and A[j] > value do
      A[j + 1] := A[j];
      j := j - 1;
    A[j + 1] := value;
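The pseudocode above maps almost line for line onto Java; this is an illustrative sketch (the method name insertionSort is just for this example):

```java
// Insertion sort: grow a sorted prefix one element at a time by
// shifting larger elements right and dropping the value into its slot.
static void insertionSort(int[] a) {
  for (int i = 1; i < a.length; i++) {
    int value = a[i];
    int j = i - 1;
    while (j >= 0 && a[j] > value) {
      a[j + 1] = a[j];   // shift right to open a slot
      j--;
    }
    a[j + 1] = value;
  }
}
```

On an already-sorted array the inner while loop never runs, which is why the best case is linear.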


  • Worst case performance: O(n²)
  • Best case performance: O(n)
  • Average case performance: O(n²)
  • Worst case space complexity: O(n) total, O(1) auxiliary
Merge Sort
Merge sort is an O(n log n) comparison-based sorting algorithm. It is an example of the divide and conquer algorithmic paradigm.


Conceptually, a merge sort works as follows:

  1. If the list is of length 0 or 1, then it is already sorted. Otherwise:
  2. Divide the unsorted list into two sublists of about half the size.
  3. Sort each sublist recursively by re-applying merge sort.
  4. Merge the two sublists back into one sorted list.

Merge sort incorporates two main ideas to improve its runtime:

  1. A small list will take fewer steps to sort than a large list.
  2. Fewer steps are required to construct a sorted list from two sorted lists than two unsorted lists. For example, you only have to traverse each list once if they're already sorted.
Algorithm MergeSort(A, 0, n-1)
  MergeSort(A, 0, n/2)
  MergeSort(A, n/2 + 1, n-1)
  MergeTogether(2 arrays above)
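A minimal Java sketch of this scheme, using extra arrays for the two halves and a single-pass merge (the names mergeSort and merged are just for this example):

```java
// Top-down merge sort: split in half, sort each half recursively,
// then merge the two sorted halves, traversing each only once.
static int[] mergeSort(int[] a) {
  if (a.length <= 1) return a;   // base case: length 0 or 1 is sorted
  int mid = a.length / 2;
  int[] left = mergeSort(java.util.Arrays.copyOfRange(a, 0, mid));
  int[] right = mergeSort(java.util.Arrays.copyOfRange(a, mid, a.length));

  // Merge: repeatedly take the smaller front element of the two halves.
  int[] merged = new int[a.length];
  int i = 0, j = 0, k = 0;
  while (i < left.length && j < right.length)
    merged[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
  while (i < left.length)  merged[k++] = left[i++];   // drain leftovers
  while (j < right.length) merged[k++] = right[j++];
  return merged;
}
```

Taking left[i] when the keys are equal is what makes this version stable.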

Example: sort the following numbers 35, 62, 33, 20, 5, 72, 48, 50.


  • Worst case performance: O(n log n)
  • Best case performance: O(n log n) typical, O(n) for natural variants
  • Average case performance: O(n log n)
  • Worst case space complexity: O(n) total, O(n) auxiliary
Quicksort
Quicksort is a well-known sorting algorithm that, on average, makes O(n log n) comparisons to sort n items. However, in the worst case, it makes O(n²) comparisons. Typically, quicksort is significantly faster than other O(n log n) algorithms, because its inner loop can be implemented efficiently on most architectures, and for most real-world data it is possible to make design choices that minimize the probability of requiring quadratic time.

Quicksort is a comparison sort and, in efficient implementations, is not a stable sort.


Quicksort sorts by employing a divide and conquer strategy to divide a list into two sub-lists.

The steps are:

  1. Pick an element, called the pivot, from the list.
  2. Reorder the list so that all elements which are less than the pivot come before the pivot and so that all elements greater than the pivot come after it (equal values can go either way). After this partitioning, the pivot is in its final position. This is called the partition operation.
  3. Recursively sort the sub-list of lesser elements and the sub-list of greater elements. The base cases of the recursion are lists of size zero or one, which are always sorted.
function quicksort(array)
    var list less, greater
    if length(array) <= 1
      return array
    select and remove a pivot from array
    for each x in array
      if x <= pivot then append x to less
      else append x to greater
    return concatenate(quicksort(less), pivot, quicksort(greater))
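The simple version above can be sketched in Java with explicit lists for the two partitions (illustrative only; as discussed below, this form needs O(n) extra storage):

```java
// Simple quicksort matching the pseudocode: take a pivot, split the
// rest into "less" and "greater" lists, sort each recursively, and
// concatenate. Not in-place and not the efficient form.
static java.util.List<Integer> quicksort(java.util.List<Integer> array) {
  if (array.size() <= 1) return array;       // base case: already sorted
  int pivot = array.get(0);                  // pivot choice is arbitrary here
  java.util.List<Integer> less = new java.util.ArrayList<>();
  java.util.List<Integer> greater = new java.util.ArrayList<>();
  for (int k = 1; k < array.size(); k++) {
    int x = array.get(k);
    if (x <= pivot) less.add(x); else greater.add(x);
  }
  // concatenate(quicksort(less), pivot, quicksort(greater))
  java.util.List<Integer> result = new java.util.ArrayList<>(quicksort(less));
  result.add(pivot);
  result.addAll(quicksort(greater));
  return result;
}
```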

Quicksort is similar to merge sort in many ways. It divides the elements to be sorted into two groups, sorts the two groups by recursive calls, and combines the two sorted groups into a single array of sorted values. However, the method for dividing the array in half is much more sophisticated than the simple method we used for merge sort. On the other hand, the method for combining these two groups of sorted elements is trivial compared to the method used in merge sort.

The correctness of the partition algorithm is based on the following two arguments:

  • At each iteration, all the elements processed so far are in the desired position: before the pivot if less than or equal to the pivot's value, after the pivot otherwise.
  • Each iteration leaves one fewer element to be processed.

The disadvantage of the simple version above is that it requires O(n) extra storage space, which is as bad as merge sort. The additional memory allocations required can also drastically impact speed and cache performance in practical implementations. There is a more complex version which uses an in-place partition algorithm and uses much less space.

The partition method, with the numbered steps fleshed out into Java:

private static int partition(int[] data, int first, int n) {
  // 1. Initialize values.
  int pivot = data[first];
  int tooBigIndex = first + 1;          // index of element after pivot
  int tooSmallIndex = first + n - 1;    // index of last element

  // 2. Repeat until the two indices cross each other.
  while (tooBigIndex <= tooSmallIndex) {
    // 2a. Advance tooBigIndex past elements <= the pivot, without moving
    //     beyond the part of the array we are partitioning.
    while (tooBigIndex < first + n && data[tooBigIndex] <= pivot)
      tooBigIndex++;
    // 2b. Move tooSmallIndex down past elements greater than the pivot.
    while (data[tooSmallIndex] > pivot)
      tooSmallIndex--;
    // 2c. If the indices have not crossed, swap the out-of-place pair.
    if (tooBigIndex < tooSmallIndex) {
      int temp = data[tooBigIndex];
      data[tooBigIndex] = data[tooSmallIndex];
      data[tooSmallIndex] = temp;
    }
  }

  // 3. Move the pivot to its correct position and return that index.
  data[first] = data[tooSmallIndex];
  data[tooSmallIndex] = pivot;
  return tooSmallIndex;
}

Choosing a Good Pivot Element

The choice of a good pivot element is critical to the efficiency of the quicksort algorithm. If we can ensure that the pivot element is near the median of the array values, then quicksort is very efficient. One technique that is often used to increase the likelihood of choosing a good pivot element is to randomly choose three values from the array and then use the middle of these three values as the pivot element.
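A common deterministic variant of this idea samples the first, middle, and last elements and uses their median as the pivot index; the helper name medianOfThree below is hypothetical, chosen for this sketch:

```java
// Median-of-three pivot selection: return the index (first, mid, or last)
// holding the middle of the three sampled values. This makes a near-median
// pivot far more likely than always taking the first element.
static int medianOfThree(int[] data, int first, int last) {
  int mid = first + (last - first) / 2;
  int a = data[first], b = data[mid], c = data[last];
  if ((a <= b && b <= c) || (c <= b && b <= a)) return mid;
  if ((b <= a && a <= c) || (c <= a && a <= b)) return first;
  return last;
}
```

A quicksort would swap the element at the returned index into data[first] before calling partition.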

Let's try the quicksort algorithm with the following array: 40, 20, 10, 80, 60, 50, 7, 30, 100, 90, and 70.