# Sorting Algorithms

The problem of sorting is an ideal problem to introduce many concepts of fundamental algorithm design. Some good reasons to study sorting are:

• sorting is used by many applications,

• sorting is the (first) step of many algorithms,

• many techniques can be illustrated by studying sorting.

For us the last reason is most important. In particular, sorting allows us to study the algorithmic techniques that we have seen for the searching problem on a more complex algorithmic problem. In the following I give an overview of the sorting algorithms discussed in the course. Before I do so, let me define the swap routine that is used by many of the sorting algorithms.

In [1]:
def swap(A, i, j):
A[i], A[j] = A[j], A[i]


## 1. Selection Sort

Selection sort is possibly the conceptually simplest sorting algorithm. It finds the smallest element and places it at the first position. Then it finds the second smallest and places it at the second position, and so on.

In [9]:
def selection_sort(A):
    n = len(A)
    for i in range(n):
        smallest = i
        for j in range(i+1, n):
            if A[j] < A[smallest]: smallest = j
        swap(A, smallest, i)

A = [5,2,8,6]
selection_sort(A)
print(A)

[2, 5, 6, 8]


The correctness of selection_sort can be shown by first analyzing the inner for-loop and then the outer for-loop. For both steps a loop-invariant proof can be used. The running time of the inner loop is $O(n-i)$. The overall running time is therefore $$O(1) + \sum_{i=0}^{n-1} O(n-i) = O(n^2)\;.$$
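The loop invariant of the outer loop can also be checked mechanically. The following instrumented variant (a sketch of mine, not part of the course code) asserts after every outer iteration that the prefix $A[0..i]$ is sorted and contains the $i+1$ smallest elements:

```python
def selection_sort_checked(A):
    n = len(A)
    for i in range(n):
        smallest = i
        for j in range(i+1, n):
            if A[j] < A[smallest]:
                smallest = j
        A[i], A[smallest] = A[smallest], A[i]
        # loop invariant: A[0..i] is sorted and holds the i+1 smallest elements
        assert A[:i+1] == sorted(A)[:i+1]

A = [5, 2, 8, 6]
selection_sort_checked(A)
print(A)  # [2, 5, 6, 8]
```

The assertion itself costs $O(n \log n)$ per iteration, so this version is only meant for checking the proof idea, not for use.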

## 2. Insertion Sort

Insertion sort is an incremental algorithm. That means that it incrementally computes the solution for an increasing subset of the input. Specifically, it sorts the subarray $A[0..j-1]$ for increasing $j$. While Insertion Sort is slow asymptotically, it is fast on small inputs.

In [1]:
def insertion_sort(A):
    n = len(A)
    # at the beginning of the j-th iteration the subarray A[0..j-1] is sorted
    for j in range(1,n):
        key = A[j]
        i = j - 1
        while i >= 0 and A[i] > key:
            A[i+1] = A[i]
            i = i - 1
        A[i+1] = key

A = [5,2,8,6]
insertion_sort(A)
print(A)

[2, 5, 6, 8]


Also for insertion_sort the correctness can be shown by first analyzing the inner while-loop and then the outer for-loop. For both steps a loop-invariant proof can be used. The running time of the inner loop is $O(j)$. (Note that it is faster in the best case: on an already sorted input the inner loop exits immediately and the total running time is $O(n)$.) The overall running time is therefore $$O(1) + \sum_{j=1}^{n-1} O(j) = O(n^2)\;.$$
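The gap between the best and the worst case is easy to observe by counting the element shifts the inner loop performs. The following variant (my instrumentation, not part of the course code) returns that count:

```python
def insertion_sort_counting(A):
    """Insertion sort that also returns the number of element shifts."""
    shifts = 0
    for j in range(1, len(A)):
        key = A[j]
        i = j - 1
        while i >= 0 and A[i] > key:
            A[i+1] = A[i]   # shift A[i] one position to the right
            i = i - 1
            shifts += 1
        A[i+1] = key
    return shifts

print(insertion_sort_counting(list(range(10))))        # 0  (sorted input: best case)
print(insertion_sort_counting(list(range(10, 0, -1)))) # 45 (reversed input: worst case)
```

On a sorted input no shifts happen at all, while a reversed input of length $n$ triggers $\binom{n}{2}$ shifts, matching the $O(n)$ and $O(n^2)$ bounds above.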

## 3. Merge Sort

Merge Sort is an example of a recursive divide&conquer algorithm. A divide&conquer algorithm divides the problem into smaller subproblems, solves the subproblems recursively, and then combines the solutions to the subproblems.

In [4]:
def merge_sort(A):
    n = len(A)
    if n>1:
        mid = n//2
        A1 = A[:mid]
        A2 = A[mid:]
        # recursive calls:
        merge_sort(A1)
        merge_sort(A2)
        # merge solutions
        i=0
        j=0
        while i<mid and mid+j<n:
            if A1[i]<A2[j]:
                A[i+j]=A1[i]
                i+=1
            else:
                A[i+j]=A2[j]
                j+=1

        while i<mid:
            A[i+j]=A1[i]
            i+=1

        while mid+j<n:
            A[i+j]=A2[j]
            j+=1

A = [5,2,8,6]
merge_sort(A)
print(A)

[2, 5, 6, 8]


Merge Sort is a recursive algorithm and we can prove its correctness by mathematical induction. The base case is $n=1$. An array of length 1 is always sorted. In the induction step, we can assume (by the induction hypothesis) that the recursive calls work correctly. Thus, we mostly need to prove that the merge step (the three while-loops) works correctly. The recurrence of the running time is $T(n)= 2 T(n/2) + O(n)$. This recurrence solves to $T(n) = O(n \log n)$ (e.g., using the Master Theorem).
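The $O(n \log n)$ bound can be made tangible by counting the comparisons the merge step performs. The following experiment (my own sketch, not from the course) is the same algorithm with a comparison counter threaded through the recursion:

```python
import random

def merge_sort_count(A):
    """Merge sort A in place; return the number of element comparisons."""
    n = len(A)
    if n <= 1:
        return 0
    mid = n // 2
    A1, A2 = A[:mid], A[mid:]
    count = merge_sort_count(A1) + merge_sort_count(A2)
    i = j = 0
    while i < mid and mid + j < n:
        count += 1                  # one comparison per merge step
        if A1[i] < A2[j]:
            A[i+j] = A1[i]
            i += 1
        else:
            A[i+j] = A2[j]
            j += 1
    while i < mid:
        A[i+j] = A1[i]
        i += 1
    while mid + j < n:
        A[i+j] = A2[j]
        j += 1
    return count

A = random.sample(range(1000), 1000)
c = merge_sort_count(A)
print(c, A == list(range(1000)))   # fewer than n*log2(n) ≈ 9966 comparisons, True
```

For $n = 1000$ the count stays below $n \lceil \log_2 n \rceil = 10000$, in contrast to the roughly $n^2/4 = 250000$ comparisons Insertion Sort would need on a random input.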

## 4. Quick Sort

Quick Sort is another example of a recursive divide&conquer algorithm. In contrast to Merge Sort, the main work of the algorithm happens before the recursive calls, in the partition routine.

In [3]:
def partition(A, p, r):
    x = A[r]
    i = p-1
    for j in range(p,r):
        if A[j] <= x:
            i+=1
            swap(A, i, j)
    swap(A, i+1, r)
    return i+1

def quick_sort(A, p=0, r=None):
    if r is None: r = len(A)-1
    if p<r:
        q = partition(A, p, r)
        quick_sort(A, p, q-1)
        quick_sort(A, q+1, r)

A = [5,2,8,6]
quick_sort(A)
print(A)

[2, 5, 6, 8]


Quick Sort is a recursive algorithm and as for Merge Sort we can prove its correctness by mathematical induction. The crucial part is to prove that partition works correctly. The worst-case running time of Quick Sort is quadratic, but in practice it is much faster, and if the pivot is picked at random the expected running time is $O(n \log n)$. Quick Sort is a fast and simple in-place algorithm.
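Picking the pivot at random only changes the partition routine: before partitioning, a uniformly random element is swapped into the pivot position. A self-contained sketch of this randomized variant (written out here by me, not course code):

```python
import random

def randomized_partition(A, p, r):
    # choose a uniformly random pivot and move it to position r,
    # then partition exactly as before
    k = random.randint(p, r)
    A[k], A[r] = A[r], A[k]
    x = A[r]
    i = p - 1
    for j in range(p, r):
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i+1], A[r] = A[r], A[i+1]
    return i + 1

def randomized_quick_sort(A, p=0, r=None):
    if r is None: r = len(A) - 1
    if p < r:
        q = randomized_partition(A, p, r)
        randomized_quick_sort(A, p, q-1)
        randomized_quick_sort(A, q+1, r)

A = [5, 2, 8, 6]
randomized_quick_sort(A)
print(A)  # [2, 5, 6, 8]
```

With a random pivot no fixed input is bad anymore: the expected running time is $O(n \log n)$ for every input, although an already sorted input would still be a worst case for the deterministic version above.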

## 5. Counting Sort

There is an $\Omega (n \log n)$ lower bound for comparison-based sorting. By making assumptions on the input and the model of computation, we can obtain faster algorithms. One such algorithm is Counting Sort. Counting Sort is stable, which means that if two elements have the same key $a_i = a_j$, then the sorting algorithm does not change their order.

In [18]:
def counting_sort(A, k):
    B = [0] * len(A)
    C = [0] * (k+1)
    for x in A:
        C[x] += 1
    for i in range(1,k+1):
        C[i] += C[i-1]
    for j in reversed(range(len(A))):
        B[C[A[j]]-1] = A[j]
        C[A[j]] -= 1
    return B

A = [5,2,8,6]
print(counting_sort(A, 8))

[2, 5, 6, 8]


The running time of Counting Sort is $O(n+k)$.
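Stability only becomes visible when the elements carry more data than just their key. The following adaptation (mine, not course code) sorts (key, value) pairs by key; because the last loop walks through the input backwards, elements with equal keys keep their input order:

```python
def counting_sort_pairs(A, k):
    """Stably sort a list of (key, value) pairs by key, keys in 0..k."""
    B = [None] * len(A)
    C = [0] * (k+1)
    for key, _ in A:
        C[key] += 1
    for i in range(1, k+1):
        C[i] += C[i-1]
    # backwards pass: equal keys end up in their original relative order
    for j in reversed(range(len(A))):
        key = A[j][0]
        B[C[key]-1] = A[j]
        C[key] -= 1
    return B

A = [(2, 'a'), (1, 'b'), (2, 'c'), (1, 'd')]
print(counting_sort_pairs(A, 2))  # [(1, 'b'), (1, 'd'), (2, 'a'), (2, 'c')]
```

Note that 'b' stays before 'd' and 'a' stays before 'c'. This property is exactly what Radix Sort below relies on.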

The running time of Counting Sort heavily depends on $k$. Radix Sort reduces this dependency by sorting the numbers in the input digit by digit, from the least significant to the most significant digit, using a stable sort (here Counting Sort with $k = base - 1$) for each digit.

In [38]:
def counting_sort_digit(A, base, div):
    k = base-1
    B = [0] * len(A)
    C = [0] * (k+1)
    for x in A:
        C[(x//div)%base] += 1
    for i in range(1,k+1):
        C[i] += C[i-1]
    for j in reversed(range(len(A))):
        index = (A[j]//div)%base
        B[C[index]-1] = A[j]
        C[index] -= 1
    return B


The running time of Radix Sort is $O(d(n+base))$, where $d$ is the number of digits of the largest input number.
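Putting the pieces together, Radix Sort is just a loop over the digits. A minimal driver (my own sketch; the digit sort from above is repeated so the snippet is self-contained):

```python
def counting_sort_digit(A, base, div):
    # stable sort of A by the digit (x // div) % base, as above
    B = [0] * len(A)
    C = [0] * base
    for x in A:
        C[(x // div) % base] += 1
    for i in range(1, base):
        C[i] += C[i-1]
    for j in reversed(range(len(A))):
        d = (A[j] // div) % base
        B[C[d]-1] = A[j]
        C[d] -= 1
    return B

def radix_sort(A, base=10):
    """Sort non-negative integers digit by digit, least significant first."""
    div = 1
    while div <= max(A):
        A = counting_sort_digit(A, base, div)
        div *= base
    return A

print(radix_sort([26, 15, 18, 16]))  # [15, 16, 18, 26]
```

The correctness hinges on the stability of the digit sort: after processing digit $d$, numbers that agree on digits $0..d$ remain ordered by their lower digits, so after the last pass the whole list is sorted.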