In this guide, we’re going to explore the different ways we can sort a collection.

How can we get from [3,1,4,2,6,5] to [1,2,3,4,5,6] efficiently?

Following along is simple. Just download Scala, open your command prompt, type scala to open your REPL, and enter the code line by line (resist the urge to copy and paste). That’s it!

Why is this important?

It’s true that most languages have libraries that allow application programmers to abstract away the process of sorting a collection, but learning sorting algorithms is still a great way to understand how algorithms work in general (mergesort, for example, is a phenomenal example of divide-and-conquer, and quicksort’s partition step is often handy in non-sorting algorithms).

If you must roll your own sorting algorithm, knowing quicksort is enough for most use cases. Variants of it power many standard library sorts: C’s qsort is typically implemented as a quicksort, and Java sorts arrays of primitives with a dual-pivot quicksort (it uses the merge-based Timsort for arrays of objects).

So is quicksort the best sorting algorithm? Not necessarily. What sorting algorithm to use depends strongly on the use case, though it tends to be some variant of quicksort or mergesort. There is no perfect sorting algorithm.

What language are we using?

Scala! (Pronounced Scah-laa)

Like Python and Ruby, it supports both imperative and functional programming and exposes a great REPL. Unlike those languages, it’s compiled, statically typed, supports pattern matching, runs on the JVM, and interacts seamlessly with Java code.

If you’d like to learn Scala, I recommend Scala creator Martin Odersky’s Scala By Example. For a reference to quickly glance into if you just want to know something specific, I recommend Alvin Alexander’s Scala Cookbook.

Translating these algorithms into a language like Python should be trivial. I’ve chosen Scala for its expressiveness, and am reasonably confident that code of the same clarity would take more lines in any other language.

Before we begin

I assume the reader understands recursion and simple complexity theory, knows basic data structures, is good at using an object-oriented language, and knows a few basics of functional programming (e.g. what a map does).

The easiest way to test your sorting algorithms in Scala is to shuffle a range of numbers, then check if the sorting algorithm returns that original range. We can test if Scala’s built-in sorting method works properly like so.

val range = List.range(1, 1000000)
val shuffled = util.Random.shuffle(range)
assert(shuffled.sorted == range)

For algorithms that sort an array in-place, we can test like so. (Note that == on arrays compares references, so we compare with sameElements instead.)

val range = Array.range(1, 1000000)
val shuffled = util.Random.shuffle(range.toList).toArray
// After calling an in-place sort on shuffled, check:
// assert(shuffled.sameElements(range))



Bubblesort

We’ll start with a fun, naive sorting algorithm that you’ve possibly seen in an intro CS course.

Bubblesort works by comparing each pair of elements in the array, then swapping them if the one on the right is larger. This “bubbles” the largest elements to the end of the array.

For example, array [3,1,4,2,6,5] will take two passes through the array to sort. After the first pass, we end up with [1,3,2,4,5,6], before one more pass gets us to [1,2,3,4,5,6].

def bubblesort(a: Array[Int]): Unit = {
    for (i <- 0 to a.size - 1)
        for (j <- 0 to a.size - i - 2)
            if (a(j) > a(j + 1)) {
                val t = a(j)
                a(j) = a(j + 1)
                a(j + 1) = t
            }
}

Bubblesort is a cool introduction to sorting because all it takes is a basic understanding of three simple concepts — arrays, loops and variable swapping. Personally, it was my first introduction to an algorithm that was actually useful. Or so I thought. With a time complexity of O(n^2) (notice that we traverse the array once for every element), it’s not useful outside introducing newbs to algorithms.
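As an aside, a common refinement (not used above) is to stop as soon as a full pass makes no swaps. That gives a best case of O(n) on already-sorted input, though the average and worst cases stay at O(n^2). A sketch, with the hypothetical name bubblesortEarlyExit:

```scala
// Bubblesort with an early-exit flag: stop as soon as a full pass makes no swaps.
def bubblesortEarlyExit(a: Array[Int]): Unit = {
    var swapped = true
    var passes = 0
    while (swapped) {
        swapped = false
        // Each pass bubbles one more element into place, so shrink the range.
        for (j <- 0 to a.size - passes - 2) {
            if (a(j) > a(j + 1)) {
                val t = a(j); a(j) = a(j + 1); a(j + 1) = t
                swapped = true
            }
        }
        passes += 1
    }
}

val a = Array(1, 2, 3, 4, 5, 6)
bubblesortEarlyExit(a) // A single pass confirms the array is sorted, then we stop.
```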


Mergesort

A divide-and-conquer algorithm is one where we divide our problem into recursive sub-problems whose solutions can then be combined to yield the final solution. Mergesort belongs to this class.

The key insights to understanding mergesort are that a singleton list (e.g. [1]) is already sorted, and that we can divide the problem of sorting a list into recursively sorting its sublists, then merging the sorted sublists into the sorted list. We shall see that merging sorted sublists is a task that can be accomplished in linear time.

Here’s an example of mergesort in action. Note that our recursive function calls produce a binary tree.

First, we recursively halve our list until we reach sorted singleton lists.

[3,1,4,2,6,5]
[3,1,4] [2,6,5]
[3] [1,4] [2] [6,5]

[3] [1] [4] [2] [6] [5]

Then the function goes back up the call stack, merging as it goes along.

[3] [1,4] [2] [5,6]
[1,3,4] [2,5,6]
[1,2,3,4,5,6]

Now let’s code this up. Let’s start by defining a merge function that takes two sorted lists and returns their combined sorted list.

When combining two sorted lists, we need only ever examine the head of each array. If one is less than the other, we remove the head and add that to the new list. At some point, one list will empty, so we simply add all elements of the remaining list to the final list, and we’re done.

def merge(x: List[Int], y: List[Int]): List[Int] = {
    if (x.isEmpty) { y }
    else if (y.isEmpty) { x }
    else if (x.head <= y.head) { x.head :: merge(x.tail, y) }
    else { y.head :: merge(x, y.tail) }
}

When combining two very large sorted lists, merge will run in linear time, but because our recursion is structured so that each merge call can’t return until future merge calls do, the space complexity is linear. We allocate a stack frame for every element in both lists, meaning larger lists are doomed to trigger a stack overflow.
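You can watch this happen in the REPL. The exact threshold depends on your JVM’s stack size, but two lists of a million elements each comfortably exceed it:

```scala
// The non-tail-recursive merge from above.
def merge(x: List[Int], y: List[Int]): List[Int] = {
    if (x.isEmpty) { y }
    else if (y.isEmpty) { x }
    else if (x.head <= y.head) { x.head :: merge(x.tail, y) }
    else { y.head :: merge(x, y.tail) }
}

// A million elements per list means up to two million nested calls,
// far deeper than a default JVM stack allows.
val big = List.range(0, 1000000)
val overflowed =
    try { merge(big, big); false }
    catch { case _: StackOverflowError => true }
println(if (overflowed) "stack overflow!" else "no overflow")
```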

We could make merge a messy iterative function, but our recursive solution is so elegant. Must we really sacrifice clarity on the altar of efficiency? Why, no! Luckily for us, the Scala compiler performs tail-call optimization: a self-recursive call in tail position is compiled into a loop, so no new stack frame is allocated for it. We simply convert merge to use tail recursion with an accumulator (the @annotation.tailrec annotation makes the compiler verify this for us).

@annotation.tailrec
def merge(acc: List[Int], x: List[Int], y: List[Int]): List[Int] = {
    if (x.isEmpty) { acc.reverse ++ y }
    else if (y.isEmpty) { acc.reverse ++ x }
    else if (x.head <= y.head) { merge(x.head :: acc, x.tail, y) }
    else { merge(y.head :: acc, x, y.tail) }
}

Now let’s implement mergesort, following the example we discussed.

def msort(l: List[Int]): List[Int] = {
    l.size match {
        case 0 | 1 => l
        case x => merge(List(), msort(l.slice(0, x/2)), msort(l.slice(x/2, x)))
    }
}
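To make sure our mergesort actually works, we can reuse the shuffled-range test from the beginning (this snippet repeats the tail-recursive merge so it runs on its own):

```scala
// Tail-recursive merge of two sorted lists, with an accumulator.
@annotation.tailrec
def merge(acc: List[Int], x: List[Int], y: List[Int]): List[Int] = {
    if (x.isEmpty) { acc.reverse ++ y }
    else if (y.isEmpty) { acc.reverse ++ x }
    else if (x.head <= y.head) { merge(x.head :: acc, x.tail, y) }
    else { merge(y.head :: acc, x, y.tail) }
}

// Recursive mergesort: halve, sort each half, merge.
def msort(l: List[Int]): List[Int] = {
    l.size match {
        case 0 | 1 => l
        case x => merge(List(), msort(l.slice(0, x/2)), msort(l.slice(x/2, x)))
    }
}

// Sorting a shuffled range must give back the original range.
val range = List.range(1, 100000)
assert(msort(util.Random.shuffle(range)) == range)
```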

We can also implement mergesort iteratively with the help of a queue.

def msort(l: List[Int]): List[Int] = {
    if (l.isEmpty) return l
    val q = new collection.mutable.Queue[List[Int]]
    q ++= l.map(List(_)) // A queue of singleton lists.
    while (q.size > 1) { q.enqueue(merge(List(), q.dequeue, q.dequeue)) }
    q.dequeue
}

Mergesort is one of several sorting algorithms with a worst-case run time of O(n log n) (recursively halving the list forms a function call binary tree of depth log(n), and we execute the O(n) merge at each level), which is optimal for comparison-based sorting. (Check out radix sort for a non-comparison-based algorithm that can beat this bound when the keys are, for example, fixed-width integers.)


Quicksort

Quicksort was developed in 1959 by Tony Hoare. Like mergesort, it operates by recursively sorting and combining sublists.

However, in quicksort, we choose an element as the pivot, then partition the list into two lists: one of elements no greater than the pivot, another of elements greater than the pivot. An empty list simply returns itself. We recursively apply this algorithm and concatenate the results until we’ve sorted the list.

Here’s an example of quicksort in action, where we choose the head of the list as the pivot (in practice, we’ll generally want to choose the pivot at random).

Our first quicksort recursive function call looks like this —

quicksort([3,1,4,2,6,5])
quicksort([1,2]) ++ [3] ++ quicksort([4,6,5])

Now, following this example, let’s implement quicksort.

def qsort(l: List[Int]): List[Int] = {
    l match {
        case Nil => List()
        case x :: xs => qsort(xs filter (x >= _)) ++ List(x) ++ qsort(xs filter (x < _))
    }
}

While the function looks elegant (expressing our algorithm in two lines is always great), just test it. The overhead of creating and operating on a new singleton list for every element cripples our performance, and unlike mergesort, there’s no benefit to using a list. (Lists are best when a function relies heavily on recursively operating on the heads and tails of lists, like mergesort’s merge.)
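Here’s a rough benchmark sketch against the built-in sorted (the time helper is our own, and absolute numbers will vary by machine and JVM warm-up, so treat it as an illustration):

```scala
// List-based quicksort from above.
def qsort(l: List[Int]): List[Int] = l match {
    case Nil => List()
    case x :: xs => qsort(xs filter (x >= _)) ++ List(x) ++ qsort(xs filter (x < _))
}

// Crude timing helper: run a block and print elapsed milliseconds.
def time[A](label: String)(body: => A): A = {
    val start = System.nanoTime
    val result = body
    println(s"$label: ${(System.nanoTime - start) / 1000000} ms")
    result
}

val shuffled = util.Random.shuffle(List.range(1, 100000))
val ours = time("list quicksort")(qsort(shuffled))
val builtin = time("built-in sorted")(shuffled.sorted)
assert(ours == builtin) // Both must agree on the result.
```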

Remarkably, we can perform quicksort in-place! All we need to do is switch to sorting an array and swap array elements in our partition instead of recursively allocating new lists. (Lists offer poor random-access time, so arrays are a better choice when we don’t need to change the size of the data structure.)

Let’s implement a partition function that takes an array and the indices l and r denoting the left and right ends of the subarray we want to partition. It partitions the subarray around a pivot chosen from its middle, then returns a pair (i, j), where i is the start of the right sub-array to partition next and j is the end of the left sub-array to partition next.

def partition(a: Array[Int], l: Int, r: Int): (Int, Int) = {
    def swap(i: Int, j: Int) = { val t = a(i); a(i) = a(j); a(j) = t }
    val pivot = a((l + r)/2)
    var (i, j) = (l, r)
    while (i <= j) {
        while (a(i) < pivot) i += 1
        while (a(j) > pivot) j -= 1
        if (i <= j) {
            swap(i, j)
            i += 1
            j -= 1
        }
    }
    (i, j)
}

The function above is a little hairy, so I recommend walking through it yourself on paper. Try analyzing the cases [3,4,1,2,6,5], [3,1,6,2,4,5], and [6,5,4,3,2,1] with l = 0 and r = 5. Note that partition is linear-time, and serves a similar function to mergesort’s merge.
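If you’d like to check your paper trace in the REPL, here’s a self-contained run of the second case (it repeats partition so you can paste it on its own); my own trace gives (5,4) and leaves the array as [3,1,5,2,4,6], but do verify by hand:

```scala
// Hoare-style partition around the middle element, as defined above.
def partition(a: Array[Int], l: Int, r: Int): (Int, Int) = {
    def swap(i: Int, j: Int) = { val t = a(i); a(i) = a(j); a(j) = t }
    val pivot = a((l + r)/2)
    var (i, j) = (l, r)
    while (i <= j) {
        while (a(i) < pivot) i += 1
        while (a(j) > pivot) j -= 1
        if (i <= j) { swap(i, j); i += 1; j -= 1 }
    }
    (i, j)
}

val a = Array(3, 1, 6, 2, 4, 5)
val (i, j) = partition(a, 0, 5) // pivot is a(2) = 6
println((i, j))                 // (5,4): next left sub-array is 0..4, right is 5..5
println(a.mkString(","))        // 3,1,5,2,4,6
```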

Now, let’s implement an in-place quicksort recursively using our partition function. We can only partition a non-empty subarray.

def qsort(a: Array[Int], l: Int, r: Int): Unit = {
    if (r > l) {
        val (i, j) = partition(a, l, r)
        qsort(a, l, j)
        qsort(a, i, r)
    }
}

Our recursive quicksort traverses a binary tree of function calls: each call partitions its sub-array first and then recurses into the two halves, so the partitioning work happens in pre-order, and the calls return in post-order. Can you see it? Try tracking the function calls and the sub-arrays they operate on with paper and pencil.

When we run the function, we pass in the entire array as the sub-array to recursively partition like so.

val a = Array(3,1,4,2,6,5)
qsort(a, 0, a.size - 1)

Thanks to the Scala compiler’s tail-call optimization, the second recursive call in our quicksort (which is in tail position) compiles to a jump, so our recursive definition performs well. But this isn’t the case in all languages, stack frames are much more expensive than heap memory, and very unbalanced partitions can still recurse deeply, so let’s go ahead and implement quicksort iteratively with a stack. This is also a nice illustration of how recursive algorithms can be made iterative.

def qsort(a: Array[Int], l: Int, r: Int) = {
    val subArraysToPartition = collection.mutable.Stack[(Int, Int)]((l, r))
    while (!subArraysToPartition.isEmpty) {
        val (arrStart, arrEnd) = subArraysToPartition.pop
        if (arrEnd > arrStart) {
            val (subArrRightStart, subArrLeftEnd) = partition(a, arrStart, arrEnd)
            subArraysToPartition.push((arrStart, subArrLeftEnd))
            subArraysToPartition.push((subArrRightStart, arrEnd))
        }
    }
}


Bogosort

We’ll end our discussion with the joke of sorting algorithms, Bogosort.

We shuffle our given list. Is it sorted? If yes, we’re done! If not, shuffle again. We keep randomly permuting the array until we stumble across the correct ordering.

def bogosort(l: List[Int]): List[Int] = {
    val shuffled = util.Random.shuffle(l)
    if ((shuffled, shuffled.tail).zipped.forall(_ <= _)) { shuffled }
    else { bogosort(shuffled) }
}

This algorithm is so terrible that it doesn’t even have a bounded worst-case running time (its expected running time is O(n · n!): about n! shuffles of n elements each). On any non-singleton input, it could potentially run forever, even on a supercomputer powered by all the resources of the universe! Bogosort proves that no matter how fast computers become, a sufficiently stupid algorithm can still ruin everything.
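That said, on a tiny list bogosort does finish quickly in practice, since three distinct elements have only 3! = 6 orderings, so we expect about six shuffles (this snippet repeats the definition, with zip standing in for the deprecated zipped):

```scala
// Keep shuffling until the list happens to be sorted.
def bogosort(l: List[Int]): List[Int] = {
    val shuffled = util.Random.shuffle(l)
    if (shuffled.zip(shuffled.tail).forall { case (a, b) => a <= b }) shuffled
    else bogosort(shuffled)
}

// Three distinct elements: roughly six shuffles expected before success.
assert(bogosort(List(3, 1, 2)) == List(1, 2, 3))
```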

Created on November 24 2016, last modified on October 1 2017.