The Sorting Application
The examples in this document are all part of the Java Application
Sorting.
Download the source archive
Sorting.zip
and install it as a
Java Application with Existing Sources. See the
Using NetBeans
document for details.
Like others, the
Sorting project has multiple Main Classes
intended to illustrate various independent features. The simplest way to
run the various classes
is to right-click the file and select
Run File either in the
file itself or from the listing in the
Projects window.
Java sorting
The quickSort and mergeSort algorithms are both fast, although
quickSort is generally considered the faster of the two. The mergeSort
has the added advantage of being stable.
Java uses both of them
for its sorting methodology:

For sorting primitive types, where stability is not a concern,
a variant of QuickSort is used:
DualPivotQuicksort. This modified algorithm
has the advantage of being faster as well as avoiding
the worst-case quadratic behavior of regular quickSort.

For sorting object (generic) types, where stability may be a concern,
Java uses a variant of MergeSort called ComparableTimSort,
itself a variation of the TimSort algorithm.
These algorithm variations attempt to gain speed in sorting real-world
situations where the data typically has many
"sequential runs" as opposed to being purely random.
Compare Quicksort and Java Sort
The closer an array is to being random, the better Quicksort does, and
the closer to being already sorted, the better Java sort does.
So the idea is to take a sorted array and make a "perturbation" by
replacing a range with random entries.
Sort the array both with Quicksort (from this document) and Java sort,
and compare the times.
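A minimal sketch of the experiment follows; the class and method names are illustrative, not taken from the Sorting project, and the project's quickSort would be called where indicated:

```java
import java.util.Arrays;
import java.util.Random;

public class PerturbTiming {
    // Build a sorted array, then "perturb" it by overwriting the range
    // [from, to) with random entries.
    public static int[] perturbedArray(int size, int from, int to) {
        int[] a = new int[size];
        for (int i = 0; i < size; i++) a[i] = i;    // sorted
        Random r = new Random();
        for (int i = from; i < to; i++) a[i] = r.nextInt(size);
        return a;
    }

    public static void main(String[] args) {
        int[] a = perturbedArray(200_000, 90_000, 100_000);
        int[] b = a.clone();

        long t0 = System.nanoTime();
        Arrays.sort(a);      // Java sort
        long t1 = System.nanoTime();
        Arrays.sort(b);      // stand-in: call the document's quickSort here
        long t2 = System.nanoTime();

        System.out.printf("Java sort: %.1f ms, quickSort (stand-in): %.1f ms%n",
                (t1 - t0) / 1e6, (t2 - t1) / 1e6);
    }
}
```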
Compare Mergesort and Java Sort
Same idea as the previous example, but using Mergesort instead of Quicksort.
QuickSort
The idea behind the quickSort algorithm is to choose, among the array
elements, one which will serve as a
pivot element used to split
the array into two parts: the elements less than or equal to the pivot
on the left, and the elements greater than or equal to the pivot on the right.
The second phase calls quicksort recursively on the two parts.
We must ensure that the
i index satisfies
0 < i < n,
so that both parts are nonempty; otherwise,
quickSort would fall into infinite recursion!
The essence of quickSort's speed is to get the partitioning split
to be as close as possible
to the middle.
Quicksort partitioning algorithm
A key step is how the pivot is chosen. We do so by
sorting the elements at three positions:
a[from], a[center], a[to-1]   ( center = (from+to)/2 )
The
pivot is set to
a[center]
and the starting indices are i = from and j = to-1.
Having sorted the three key elements, we know before the
loop starts that
a[i] ≤ pivot and
pivot ≤ a[j].
Here is the partition algorithm pseudocode:
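The following reconstruction is a sketch based on the surrounding description (details such as the exact loop form are an assumption):

```
partition(a, from, to):
    center = (from + to) / 2
    sort a[from], a[center], a[to-1]       // medianOf3
    pivot = a[center]
    i = from;  j = to - 1
    loop:
        do i = i + 1 while a[i] < pivot    // stops at an element >= pivot
        do j = j - 1 while a[j] > pivot    // stops at an element <= pivot
        if i >= j: break
        swap a[i], a[j]
    return i
```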
In the QuickSort splitting code, the indices
i and
j control the
splitting and the
i index ends up being the correct marker so that:
 for k < i: a[k] ≤ pivot
 for k ≥ i: a[k] ≥ pivot
Key points are that:

the while loops always move i up and j down
by at least one every time.

i and j indices stop whenever the element seen
is equal to the pivot. This seems unnecessary and inefficient;
however, in reality, it makes it more likely that the splitting
will be closer to optimal (equal halves).
After the partitioning is complete, we call the algorithm recursively
on both sides:
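With i the split index produced by the partitioning step, the recursive calls are (a sketch):

```
quickSort(a, from, i)     // elements <= pivot
quickSort(a, i, to)       // elements >= pivot
```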
The key point of the partitioning which ensures that
this actually works is that, after completion:
from < i < to
Without this guarantee, one of the recursive calls would be empty
and the other would be to the entire array (possibly altered).
This is the primary importance of the seemingly inefficient
"<" comparison when comparing against the
pivot element.
Median of 3
The usage of the
medianOf3 helper function attempts to determine
the best possible choice of
pivot in constant time.
The closer these two parts are to being the same size, the
faster the algorithm will be.
The
medianOf3 function sorts the left, center, and right elements
and then
pivot is chosen to be the value of the center element.
In particular, in a common order pattern of being "nearly sorted,"
medianOf3 is likely to choose a good pivot element.
CUTOFF value
The quickSort splitting method,
in its usage of the
medianOf3 operation,
cannot be used for arrays of size 1 or 2.
In fact, it turns out to be best to avoid the recursive calls
on an array which is "too small."
Optimized quickSort implementations impose a
CUTOFF value
whereby all arrays of size less than or equal to this
will be sorted by an alternative method,
InsertionSort being a good choice.
The optimal CUTOFF value can be empirically
determined to be 9 or 10
by executing a main function that counts operations over a range of possible values.
Quicksort Code
My version represents a slight deviation from the textbook's version.
My tests indicate that my version is slightly less costly than the
textbook's version, which does two extra swaps in order to make the
right side have one less element.
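The listing itself is not reproduced here; the following is a hedged, self-contained sketch consistent with the description in this document (CUTOFF fallback to insertionSort, median-of-3 pivot, scans that stop on elements equal to the pivot). It is an illustration, not the Sorting project's actual source:

```java
import java.util.Arrays;
import java.util.Random;

public class QuickSortSketch {
    static int CUTOFF = 10;

    public static void quickSort(int[] a) {
        quickSort(a, 0, a.length);
    }

    private static void quickSort(int[] a, int from, int to) {
        if (to - from <= CUTOFF) {        // small range: use insertion sort
            insertionSort(a, from, to);
            return;
        }
        int center = (from + to) / 2;
        // medianOf3: sort a[from], a[center], a[to-1]; pivot = a[center]
        if (a[center] < a[from])   swap(a, from, center);
        if (a[to - 1] < a[from])   swap(a, from, to - 1);
        if (a[to - 1] < a[center]) swap(a, center, to - 1);
        int pivot = a[center];
        int i = from, j = to - 1;         // a[from] <= pivot <= a[to-1] act as sentinels
        while (true) {
            while (a[++i] < pivot) { }    // stop on elements equal to the pivot
            while (a[--j] > pivot) { }
            if (i >= j) break;
            swap(a, i, j);
        }
        quickSort(a, from, i);            // from < i < to keeps both calls nonempty
        quickSort(a, i, to);
    }

    private static void insertionSort(int[] a, int from, int to) {
        for (int p = from + 1; p < to; p++) {
            int tmp = a[p];
            int k = p;
            while (k > from && tmp < a[k - 1]) { a[k] = a[k - 1]; k--; }
            a[k] = tmp;
        }
    }

    private static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    public static void main(String[] args) {
        Random r = new Random();
        int[] a = new int[1000];
        for (int i = 0; i < a.length; i++) a[i] = r.nextInt(2000);
        int[] expected = a.clone();
        Arrays.sort(expected);
        quickSort(a);
        System.out.println(Arrays.equals(a, expected));
    }
}
```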
Demonstrations
The
ShowSorts program is useful for giving demonstrations
of quickSort; simply set:
choice = "quick"
When quickSort is used, the cutoff value is shown. You can easily
modify the value by using the setter method prior to the last
statement:
ShowAlgorithms.setQuicksortCutoff(6);
You can demonstrate the effectiveness of the quickSort splitting method
by altering the generated values in the test array, e.g., try these:
public static int[] getIntArray(int size) {
    ...
    A[i] = 1 + r.nextInt(2 * size);   // random
    // A[i] = 10;                     // all equal
    // A[i] = 10 + i;                 // sorted
    // A[i] = 2 * size - i;           // reverse sorted
    ...
}
Here is an example with 11 elements;
from=0 and
to=11.
Set the quicksort cutoff to 6 so that partitioning is done only once.
You can create this by inserting the overwriting code:
// choose the array of certain size
int[] A;
int size = 15;
A = getIntArray(size);
// insert overwriting code
A = new int[]{1, 4, 6, 8, 5, 7, 10, 4, 1, 6, 6};
ShowAlgorithms.setQuicksortCutoff(6);
The run starts like this:
=> quickSort(0,11)
     0  1  2  3  4  5  6   7  8  9  10
   [ 1, 4, 6, 8, 5, 7, 10, 4, 1, 6, 6 ]    medianOf3 sorts a[0], a[5], a[10]
   [ 1, 4, 6, 8, 5, 6, 10, 4, 1, 6, 7 ]    pivot = 6
   [ 1, 4, 6, 8, 5, 6, 10, 4, 1, 6, 7 ]    i = 2, j = 9 (both stop on a 6; the equal elements are swapped)
   [ 1, 4, 6, 8, 5, 6, 10, 4, 1, 6, 7 ]    i = 3, j = 8 (swap 8 and 1)
   [ 1, 4, 6, 1, 5, 6, 10, 4, 8, 6, 7 ]    i = 5, j = 7 (swap 6 and 4)
   [ 1, 4, 6, 1, 5, 4, 10, 6, 8, 6, 7 ]    i = 6, j = 5 (indices crossed: partition done)
=> quickSort(0,6) = insertionSort
     0  1  2  3  4  5
   [ 1, 4, 6, 1, 5, 4 ]
   [ 1, 1, 4, 4, 5, 6 ]
=> quickSort(6,11) = insertionSort
     6   7  8  9  10
   [ 10, 6, 8, 6, 7 ]
   [ 6, 6, 7, 8, 10 ]
   [ 1, 1, 4, 4, 5, 6, 6, 6, 7, 8, 10 ]
Best Time
Quicksort best case time is
O(n*log n).
The best case occurs when the partitioning always
separates the array into two "equal" halves. In this case the timing
function (counting comparisons only)
would satisfy a recurrence equation like this:
T(n) = n            // compare each element against the pivot
     + 2 * T(n/2)   // make 2 recursive calls, one on each half
We have solved this type of recurrence equation in the initial part of the
course using induction. It is straightforward to prove by induction that:
T(n) = O(n * log(n))
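For example, unrolling the recurrence with T(1) = 0 and n = 2^k:

```
T(n) = n + 2*T(n/2)
     = n + 2*(n/2 + 2*T(n/4))  =  2*n + 4*T(n/4)
     = ...
     = k*n + 2^k * T(1)  =  n*log2(n)  =  O(n*log n)
```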
Despite their differences, these are all best cases:
 sorted array: the best of the best, no swaps necessary
 reverse sorted array: the first partition call needs to effectively reverse the array,
but after that, it's like sorted

all equals: every partition call will swap equal elements from the ends down to the center
The getIntArray method allows you to test each one. Set SIZE=16 and use this to let the
partitioning go a bit further:
ShowAlgorithms.setQuicksortCutoff(5);
Average Time
The average time of quicksort is also
O(n*log n).
The math is a bit more involved (see the textbook).
An important thing to point out is that the assumption
made by the proof
is that the partition split index,
i, is equally likely
to be
any position
1 < i < n-1.
This is not true for our optimized
quickSort algorithm
using the median-of-three
pivot. In fact,
the median-of-three does better; the split will occur closer to the middle
on the average. The mathematical analysis becomes quite involved!
Worst Time
QuickSort worst case time is quadratic,
i.e.,
Ω(n²),
which occurs when the splitting consistently leaves at most, say,
K elements in one portion or the other.
In that case the timing would satisfy something like this:
T(n) ≥ n        // compare every element to the pivot
     + T(n-K)   // sort the remaining part with at least n-K elements
Expanding the recurrence, we get:
T(n) ≥ n + (n-K) + (n-2*K) + ...
It's not hard to prove that:
T(n) = Ω(n²)
Here is an example which you can test in
ShowSorts.
Add this code in the "variations" section, after the
getIntArray
call (thus overriding it):
A = new int[] { 1, 1, 2, 3, 4, 5, 6, 7, 3, 5, 2, 6, 4, 7 };
ShowAlgorithms.setQuicksortCutoff(2);
Sorting this array consistently leaves the 2 maximum equal elements in
the right-side split all the way down.
General example which has quadratic time
Arguing that Quicksort has a worstcase quadratic time means that we
have to be able to generate a sequence of examples, parameterized
by some variable
m,
each of which has quadratic time behavior. We will illustrate this idea
with the following program:
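The program itself is not reproduced here; the generator below is a reconstruction inferred from the m = 5 example that follows (the project's actual program may differ):

```java
import java.util.Arrays;

public class BadArrays {
    // Builds the "bad" array of size n = 4*m - 1:
    // m copies of 10, the run 11..10+m, then (v, 10) pairs for
    // v = 11..9+m, and finally 10+m.
    public static int[] badArray(int m) {
        int[] a = new int[4 * m - 1];
        int k = 0;
        for (int i = 0; i < m; i++) a[k++] = 10;
        for (int v = 11; v <= 10 + m; v++) a[k++] = v;
        for (int v = 11; v <= 9 + m; v++) { a[k++] = v; a[k++] = 10; }
        a[k] = 10 + m;
        return a;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(badArray(5)));
    }
}
```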
The program generates arrays based on the variable
m.
Setting
m = 5
creates the following array of size 19 to sort:
[10, 10, 10, 10, 10, 11, 12, 13, 14, 15, 11, 10, 12, 10, 13, 10, 14, 10, 15]
The partition sequence goes like this:
[10, 10, 10, 10, 10, 11, 12, 13, 14, 10, 11, 10, 12, 10, 13, 10, 14][15, 15]
[10, 10, 10, 10, 10, 11, 12, 13, 10, 10, 11, 10, 12, 10, 13][14, 14]
[10, 10, 10, 10, 10, 11, 12, 10, 10, 10, 11, 10, 12][13, 13]
[10, 10, 10, 10, 10, 11, 10, 10, 10, 10, 11][12, 12]
[10, 10, 10, 10, 10, 10, 10, 10, 10][11, 11]
The parameter
m represents roughly 1/4 the array size, and the first
m partitions have 2 elements on the right.
We can generalize this into a sequence of arrays of length
n
and give the following inequality
T(k) ≥ k + T(k-2),   (3*n)/4 ≤ k ≤ n
And so, expanding the recurrence (k decreases by 2 at each step), and
using the lower bound for
k, we get:
T(n) ≥ 3*n/4 + 3*n/4 + ... + 3*n/4,   for n/8 terms
i.e.,
T(n) ≥ 3*n/4 * n/8 = 3/32 * n²
namely,
T(n) = Ω(n²)
The following program generates these bad arrays and computes run times
for Quicksort using them.
It takes a parameter
m, determines the array size,
n,
computes both a random array of size
n and a "bad" array of size
n.
From these it runs the sorting algorithms
and compares 4 timing aspects:
 The time to do Quicksort on a random array of size n
 The time to do Java sort on a random array of size n
 The time to do Quicksort on the bad array of size n
 The time to do Java sort on the bad array of size n
Why we don't advance the i and j indices when the array element
equals the pivot
Let's take a closer look at our partition algorithm:
It seems inefficient to stop expanding, say, the left area if
a[++i] == pivot.
Why don't we write:
while (a[++i] <= pivot) { }
Consider the "all equals" array. Making this change would make the
i index run off the right end of the array. Even if we modified
the code to stop
i before it reached the right end, the other
issue is that partition wouldn't create
equal halves;
the left side would be almost the entire array. In other words, in the
case of "all equals" this
best case would now become a
worst case.
Call on smaller subarrays first to minimize stack height
Consider our worst case example of an array with 15 entries. Assume
first that the recursive calls are left-to-right:
The call stack
behavior would look like this (showing the stack growing
down):
quickSort(12,14)
quickSort(0,12)

quickSort(12,14)
quickSort(10,12)
quickSort(0,10)

quickSort(12,14)
quickSort(10,12)
quickSort(8,10)
quickSort(6,8)
quickSort(4,6)
quickSort(2,4)
quickSort(0,2)
The implication is that we need
O(n) extra memory to store the stack.
But this is just an inefficient usage problem.
In order to keep the recursion stack height minimal, you have to
make the recursive call on the
smaller subarray first.
Thus the recursive calls should look like this:
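A sketch of the reordered calls (in practice, implementations often go further and replace the call on the larger part with a loop, i.e., manual tail-call elimination, to actually bound the stack):

```
if (i - from) <= (to - i):
    quickSort(a, from, i)    // smaller part first
    quickSort(a, i, to)
else:
    quickSort(a, i, to)      // smaller part first
    quickSort(a, from, i)
```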
With this modification, we would have the following behavior in which
the calls to the 2element arrays complete immediately and we would
use
O(1) extra memory.
quickSort(0,12)
quickSort(12,14)

quickSort(0,10)
quickSort(10,12)
With this modification, the extra space usage (i.e., stack depth)
is
O(log(n)), which occurs in the best-case
time scenario in which partition splits equally.
Consider this example of size 64:
quickSort(32,64)
quickSort(0,32)

quickSort(32,64)
quickSort(16,32)
quickSort(0,16)

quickSort(32,64)
quickSort(16,32)
quickSort(8,16)
quickSort(0,8)
In our QuickSort demonstrations, we will ignore this code change.
It has no effect on the total running time because
all recursive calls must be made, regardless of the order
in which they are done.
MergeSort
The idea behind MergeSort is a simple application of recursion:
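In outline (a sketch):

```
mergeSort(A):
    if size(A) <= 1: return          // already sorted
    mergeSort(left half of A)
    mergeSort(right half of A)
    merge the two sorted halves
```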
MergeSort vs QuickSort recursion
MergeSort exhibits
end order or
bottom up
recursion in that all the
significant actions take place
after the recursive calls.
This means that the array is split all the way down to singletons
(of size 1) and that all actions take place on the way out of recursion.
In contrast, QuickSort is exactly the opposite. It uses
top down
recursion, in the sense that the main action of partitioning is done
before the recursive calls.
Analysis
Let
T(n) be the worst-case number of comparisons and
data movements for an array of size
n. For
n > 1,
T(n) is made up of these counts:
 recursive call on the two halves of A: 2 * T(n/2)
 merge the halves of
A (in a stable manner): O(n)
and conclude that:
T(n) = 2 * T(n/2) + O(n)
The merge operation is like this:
     0                     n/2                   n
A: [ x1, x2, x3, ... ] [ y1, y2, y3, ... ]   ⇒   B: [ ... ]
     i                     j                     k
We proceed via a loop, terminating when both the
i and
j
indices have reached the end, i.e.:
if (i == n/2 && j == n) {
// you're done
}
Proceeding through the loop, we choose which element to copy into
the target array like this:
if (A[i] <= A[j]) {
B[k++] = A[i++];
}
else /* if (A[i] > A[j]) */ {
B[k++] = A[j++];
}
There are also boundary conditions which must be tested prior to these:
if (i == n/2) { // only left side pointer reached the end
B[k++] = A[j++];
}
else if (j == n) { // only rightside pointer reached the end
B[k++] = A[i++];
}
The arrays "halves" merged are either of equal size or the left side has one less:
n even: left and right both have n/2 elements
n odd: left has n/2 and right has (n/2)+1 elements
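Putting the loop pieces above together, a self-contained sketch of a single merge step (an illustration, not the project's code):

```java
import java.util.Arrays;

public class MergeSketch {
    // Merges the two sorted halves A[0..n/2) and A[n/2..n) into B, stably.
    public static int[] merge(int[] A) {
        int n = A.length, mid = n / 2;
        int[] B = new int[n];
        int i = 0, j = mid, k = 0;
        while (k < n) {
            if (i == mid)          B[k++] = A[j++]; // left side exhausted
            else if (j == n)       B[k++] = A[i++]; // right side exhausted
            else if (A[i] <= A[j]) B[k++] = A[i++]; // "<=" keeps the merge stable
            else                   B[k++] = A[j++];
        }
        return B;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(merge(new int[]{1, 3, 5, 2, 4, 6})));
    }
}
```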
The fewest comparisons occur if every element in the left side
is ≤ every element in the right side, in which case only
n/2
comparisons are needed. The most comparisons occur when the two
sides are "interleaved" like this, with n = 6:
[ 1, 3, 5 ] [ 2, 4, 6 ]
In this case the merge proceeds like this:
[ 1, 3, 5 ] [ 2, 4, 6 ]                            compare 1 and 2
[ 1, 3, 5 ] [ 2, 4, 6 ]  ⇒  [ 1 ]                  compare 3 and 2
[ 1, 3, 5 ] [ 2, 4, 6 ]  ⇒  [ 1, 2 ]               compare 3 and 4
[ 1, 3, 5 ] [ 2, 4, 6 ]  ⇒  [ 1, 2, 3 ]            compare 5 and 4
[ 1, 3, 5 ] [ 2, 4, 6 ]  ⇒  [ 1, 2, 3, 4 ]         compare 5 and 6
[ 1, 3, 5 ] [ 2, 4, 6 ]  ⇒  [ 1, 2, 3, 4, 5 ]      left side exhausted: no comparison
[ 1, 3, 5 ] [ 2, 4, 6 ]  ⇒  [ 1, 2, 3, 4, 5, 6 ]
All but the last step required a comparison. The most comparisons
required is
n-1. We conclude that:
T(n) ≤ 2*T(n/2) + n - 1
We have seen this recurrence many times before and know the solution is
O(n*log n).
Significance of MergeSort
Thus the significance of MergeSort is that:
 its worst case time is
O(n*log n).
 it is stable
The only possible downside of MergeSort is that it requires a
spare array in order to do the merge operations.
Efficient usage of merge target
Our first, inefficient approach is to think of
B as always
being the
merge target.
 If the size of A is 1,
copy A into B.
 If the size is greater than 1:
  (a) call mergeSort recursively to sort the left half of A, A1,
      into the left half of B, B1; same with the right
      half, A2, into B2.
  (b) copy B back into A; A now consists
      of two sorted halves.
  (c) merge the sorted halves (A1, A2) into B.
At the completion of the algorithm, we would have to copy
B back to
A.
An efficient implementation of this algorithm does fewer data
movements by having
A and
B switch "roles" where one is the merge source and
the other the merge target.
In this way we completely avoid the copy step (b).
In any case, we want the final
merge target to be
A.
As we go into the recursion, the merge target alternates:
A ← B ← A ← B ← ... ← (base of recursion: A or B)
This requirement means that when we get to the base case
with arrays of size 1:
 if B is the merge target: copy (the single element)
from A to B.
 if A is the merge target: do nothing; the element is already in
A.
Example with an array of size 8:

A = [17 16 15 14 13 12 11 10]   (base of recursion)
      ↓ (copy elements)
B = [17 16 15 14 13 12 11 10]
      ↓ (merge 1's to 2's)
A = [16 17 14 15 12 13 10 11]   (sorted 2's)
      ↓ (merge 2's to 4's)
B = [14 15 16 17 10 11 12 13]   (sorted 4's)
      ↓ (merge 4's to 8's)
A = [10 11 12 13 14 15 16 17]   THE GOAL (sorted 8's)
Example with an array of size 16:

A = [25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10]   (base of recursion: do nothing)
      ↓ (merge 1's to 2's)
B = [24 25 22 23 20 21 18 19 16 17 14 15 12 13 10 11]
      ↓ (merge 2's to 4's)
A = [22 23 24 25 18 19 20 21 14 15 16 17 10 11 12 13]
      ↓ (merge 4's to 8's)
B = [18 19 20 21 22 23 24 25 10 11 12 13 14 15 16 17]
      ↓ (merge 8's to 16's)
A = [10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25]   THE GOAL
The MergeSort Code
In order to manage the merge target array, a
boolean variable
target_is_a is passed into
the recursive calls. This boolean indicates what to do
at the merge step:
target_is_a = true: A is the merge target, B is the source
= false: B is the merge target, A is the source
The initial call assigns
target_is_a to
true,
indicating that we want the final merge to go into the
A array.
For each recursive step the boolean is
negated,
toggling between
true and
false
all the way down to the base case of a single element.
When we get to the base case, a singleton array,
if
target_is_a is
true, i.e., the element must end up in
A, then there is nothing to do because it is already there;
otherwise the element must be copied from
A into
B.
Here is the actual code used:
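The actual listing is not reproduced here; the sketch below follows the description of target_is_a above (an illustration; the project's code may differ in details):

```java
import java.util.Arrays;

public class MergeSortSketch {
    public static void mergeSort(int[] a) {
        int[] b = new int[a.length];
        mergeSort(a, b, 0, a.length, true);   // final merge target is a
    }

    // Sorts the range [from, to). If target_is_a is true the sorted result
    // ends up in a, otherwise in b.
    private static void mergeSort(int[] a, int[] b, int from, int to,
                                  boolean target_is_a) {
        if (to - from <= 1) {
            // base case: copy the single element only if b is the target
            if (!target_is_a && to > from) b[from] = a[from];
            return;
        }
        int mid = (from + to) / 2;
        // the recursive calls negate the boolean: source and target alternate
        mergeSort(a, b, from, mid, !target_is_a);
        mergeSort(a, b, mid, to, !target_is_a);
        int[] src = target_is_a ? b : a;
        int[] dst = target_is_a ? a : b;
        int i = from, j = mid, k = from;
        while (k < to) {
            if (i == mid)              dst[k++] = src[j++];
            else if (j == to)          dst[k++] = src[i++];
            else if (src[i] <= src[j]) dst[k++] = src[i++];  // <= for stability
            else                       dst[k++] = src[j++];
        }
    }

    public static void main(String[] args) {
        int[] x = {5, 3, 8, 1, 9, 2, 7};
        mergeSort(x);
        System.out.println(Arrays.toString(x));
    }
}
```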
Stability
In order to achieve stability, care must be taken in the merge operation.
All equal elements in the sorted halves will be grouped together in correct order
(this is an inductive assumption). If
i and
j are pointing to equal elements,
we ensure that the leftside element, i.e., the one with index
i,
goes into the target array first:
Best/worst case number of comparisons
MergeSort will always have the same number of data moves for
arrays of equal sizes because the data is simply being
moved between source and target arrays for each of the "steps" of merge.
In terms of comparisons, the best case
for a single merge is when the largest in one half is less
than or equal to the smallest in the other half, like this array of size 16:

[18 19 20 21 22 23 24 25]  [10 11 12 13 14 15 16 17]
      ↓
[10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25]
With
8 comparisons, the "smaller" half (of size 8) goes
first and then the "larger" half fills out the array
with no additional comparisons necessary.
Count roughly
n/2 comparisons for merging the
halves of an array of size
n
The worst case is when the array elements are shuffled like this:
[10 12 14 16 18 20 22 24]  [11 13 15 17 19 21 23 25]
The shuffled structure will force a comparison between the halves
at 15 of the 16 merge steps.
Count roughly
n-1 comparisons to merge the halves of
an array of size
n,
approximately twice as many comparisons as the best case.
Working our way down with this shuffling strategy,
a worst case array of size 16 would be:
[10 18 14 22 12 20 16 24 11 19 15 23 13 21 17 25]
Then the sort proceeds:
[10 18 14 22 12 20 16 24 11 19 15 23 13 21 17 25]   merge 1's to 2's: 8 * 1 = 8 comparisons (but no changes)
[10 14 18 22 12 16 20 24 11 15 19 23 13 17 21 25]   merge 2's to 4's: 4 * 3 = 12 comparisons
[10 12 14 16 18 20 22 24 11 13 15 17 19 21 23 25]   merge 4's to 8's: 2 * 7 = 14 comparisons
[10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25]   merge 8's to 16's: 15 comparisons
