Algorithm Analysis

The Analysis Application

The examples in this document are all part of the Java Application Analysis. Download the source archive Analysis.zip and install it as a Java Application with Existing Sources. See the Using NetBeans document for details.

Like the other example applications, the Analysis project has multiple Main Classes intended to illustrate various independent features. The simplest way to run a class is to right-click the file, either in the editor or in the Projects window listing, and select Run File.

Linear Search

For example, consider a simple linear search of an integer array for a given integer key. A class implementing a simple linear search of randomly generated data might look like this:
search.LinearSearch1  

Coding

In order to create a general version which could be reused, we should consult what Java does in the Arrays class:
Arrays
There is no linear search algorithm, but there is a binary search. Here are two relevant static functions:
int binarySearch(int[] a, int key)
int binarySearch(int[] a, int fromIndex, int toIndex, int key)
The "toIndex" argument name is misleading since the search range is really fromIndex through toIndex - 1, inclusive:
fromIndex, fromIndex+1, ..., toIndex-1    (toIndex itself is excluded)
Thus these two calls are equivalent:
java.util.Arrays.binarySearch(A, key);
java.util.Arrays.binarySearch(A, 0, A.length, key);
The int return value is supposed to be the position at which the key is found. If not found, a negative value is returned.

These algorithm specifications suggest a more general way to write linear search, namely using our own package, util, and our own class MyArrays with two versions:
util.MyArrays.linearSearch(int[] a, int key)
util.MyArrays.linearSearch(int[] a, int from, int to, int key)
We should also consider what to do if the latter of these calls is provided with invalid arguments such as these:
linearSearch(A, -3, 2, 15);
linearSearch(A, 3, 2, 15);
In these cases, we want to throw an Exception. A few tests of Arrays.binarySearch lead us to add this starter code:
if (from < 0 || from > to) {   // implies that to >= 0
  throw new IllegalArgumentException();
}
A general linear search algorithm for integers follows:

util.MyArrays
package util;
 
public class MyArrays {
 
  public static int linearSearch(int[] A, int from, int to, int key) {
    if (from < 0 || from > to) {
      throw new IllegalArgumentException();
    }
 
    int i;
    for (i = from; i < to; ++i) {
      if (key == A[i]) {
        return i;
      }
    }
    return -1;
  }
 
  public static int linearSearch(int[] A, int key) {
    return linearSearch(A, 0, A.length, key);
  }
}
Here, we apply this search algorithm in the following main class:
search.LinearSearch2  
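The actual class is in the source archive; a minimal sketch of what such a main class might look like (our own illustration, not the archive's code) is:

package search;

import java.util.Arrays;
import java.util.Random;

public class LinearSearch2 {
  public static void main(String[] args) {
    Random rand = new Random();
    int[] A = new int[20];
    for (int i = 0; i < A.length; ++i) {
      A[i] = rand.nextInt(A.length * 2);   // values in 0..39
    }
    int key = rand.nextInt(A.length * 2);

    System.out.println("A = " + Arrays.toString(A));
    System.out.println("key = " + key);
    // search the whole array, then a subrange
    System.out.println("pos = " + util.MyArrays.linearSearch(A, key));
    System.out.println("pos in [5,15) = " + util.MyArrays.linearSearch(A, 5, 15, key));
  }
}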

Other Java types

Consider expanding the linear search algorithm to other types such as double or String. These two types are very different: double, like int, is a primitive type, whereas String is an Object type. With respect to double, or any of the other primitive types, we have to write a separate function for each, even though they all effectively do the same thing.
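For example, a double version (a sketch; the archive's MyArrays may differ in details) is a copy of the int version with only the element type changed:

public static int linearSearch(double[] A, int from, int to, double key) {
  if (from < 0 || from > to) {
    throw new IllegalArgumentException();
  }

  int i;
  for (i = from; i < to; ++i) {
    if (key == A[i]) {   // exact equality, as in the int version
      return i;
    }
  }
  return -1;
}

public static int linearSearch(double[] A, double key) {
  return linearSearch(A, 0, A.length, key);
}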

Regarding Object types, such as String, we would want a version like this:
package util;
 
public class MyArrays {
 
  // ...
 
  public static int linearSearch(Object[] A, int from, int to, Object key) {
    if (from < 0 || from > to) {
      throw new IllegalArgumentException();
    }
 
    int i;
    for (i = from; i < to; ++i) {
      if (key.equals(A[i])) {
        return i;
      }
    }
    return -1;
  }
 
  public static int linearSearch(Object[] A, Object key) {
    return linearSearch(A, 0, A.length, key);
  }
}
The problem with creating an example is how to "meaningfully" generate an array of Strings and a search key. A simple example (not terribly meaningful) is this:
LinearSearch3  

null values

We have implicitly assumed that the search key is not null; otherwise, this would cause an exception:
key.equals(A[i])
We could reverse it as:
A[i].equals(key)
but then the implicit assumption is that the array has no null elements. A complete solution might be to treat key == null as a special case:
int i;
if (key == null) {
  for (i = from; i < to; ++i) {
    if (A[i] == null) {
      return i;
    }
  }
} 
else {
  for (i = from; i < to; ++i) {
    if (key.equals(A[i])) {
      return i;
    }
  }
}
return -1;
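As an aside, since Java 7 the library method java.util.Objects.equals(a, b) performs exactly this null-safe comparison (true when both are null, false when only one is), so the two loops could be collapsed into one; a sketch:

int i;
for (i = from; i < to; ++i) {
  if (java.util.Objects.equals(key, A[i])) {  // handles null key and null elements
    return i;
  }
}
return -1;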

Abstract time for linear search

Regarding the notion of "abstract time," we usually say to "count the number of comparisons" to get a measure of the actual time. There are several deviations from reality: comparisons are not the only operations performed, and different operations take different amounts of real time. Nevertheless, simplifications such as this are useful for algorithm analysis because we can more easily go on and describe the behavior for arbitrarily large arrays.

Analysis of Linear Search

The worst case is when the element is not found, or is found at the end of the search; in either case there are n comparisons. The best case is when the key is in the first slot, so there is only one comparison. We say the worst case cost is n and the best case cost is 1.

Average case for linear search

A simple guess might be to take the average of the best and worst, getting (n+1)/2. This answer turns out to be correct, but we must derive it more methodically.

First of all, we need to make the assumption that the search is successful. Why? Consider an array of size 10, holding arbitrary integers and the key is an arbitrary integer. Given that there are about 4 billion 32-bit integers, the probability that our key is one of these 10 is effectively zero! In addition, we will assume that the key is equally likely to be found at each of the n positions.

Using this notion, the average cost is:
( total cost for finds at all positions )  /  number of positions
The cost for finding at position i is (i+1), for values i = 0 ... n-1. Therefore we derive:
average cost = (1 + 2 + ⋅⋅⋅ + n) / n = (n*(n+1)/2) / n = (n+1)/2
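This result is easy to confirm empirically by counting the comparisons for a successful find at every position (a small sketch of ours; the class is not part of the Analysis sources):

package search;

public class AverageLinear {
  public static void main(String[] args) {
    int n = 1000;
    int[] A = new int[n];
    for (int i = 0; i < n; ++i) {
      A[i] = i;                 // distinct values so every search succeeds
    }
    long total = 0;
    for (int key = 0; key < n; ++key) {
      for (int i = 0; i < n; ++i) {
        ++total;                // one comparison of key against A[i]
        if (A[i] == key) {
          break;                // found at position key, costing key+1
        }
      }
    }
    // expect (n+1)/2 = 500.5
    System.out.printf("average comparisons = %.1f\n", (double) total / n);
  }
}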

Language of Asymptotics

Asymptotics has to do with describing the behavior of functions, specifically algorithmic timing functions, for arbitrarily large problem sizes. Often a problem size can be characterized by a single parameter n, e.g., the size of an array.

Big-O Notation

We would really like to present information about this algorithm in a way that ignores the constants. The "big-O notation" and other complexity terminology allow us to do precisely that. We say:
T(n) = O(f(n))   or   T(n) is O(f(n))
if there are positive constants C and k such that
T(n) ≤ C * f(n), for all n ≥ k
The big-O notation is simply a convenience for expressing the relation between an unknown "timing function", T(n), and a known reference function, f(n). In reality O(f(n)) is a class of functions, of which T(n) is a member. Nevertheless, it is convenient to be able to make statements like this
"the worst case time for such-and-such-algorithm is O(n*log(n))"
and have a rigorous basis for what you're saying.

The idea of "n ≥ k" in the definition means eventually, i.e., if we ignore some initial finite portion. For example, suppose
T(n) = n + 100
We can say:
T(n) ≤ 101 * n, for all n ≥ 1 
but we can also say
T(n) ≤ 2 * n, for all n ≥ 100
getting a "better" asymptotic constant, C, in the sense that it is smaller (we can always make it larger). In either case we have proved that
n + 100 = O(n)
by finding constants C and k which make the definition statement work.

Linear Search in big-O terms

Going back to linear search we observed that when counting comparisons:
the best case is 1
the worst case is n
the average case is (n+1)/2 = ½ n + ½
In big-O terminology, we would say this about linear search:
the best case time is O(1)
the worst case time is O(n)
the average case time is O(n)

Binary Search

Binary search searches a sorted array in the most efficient way possible. This algorithm employs a simple example of a divide-and-conquer strategy, in which we subdivide the problem into roughly equal-sized "sub-problems". The idea is simple: compare the key to the "middle" element; if not equal, search either the left or the right portion, based on whether the key is less than or greater than the middle element.

First Binary Search Implementation

This algorithm expresses itself most naturally in a recursive manner, based on the way we describe it:
a search of the whole array invokes a search of one half or the other.
In order to express the recursive nature, the parameters of the algorithm must allow arbitrary beginning and end positions. Our initial coding might look something like this:
int binarySearch(int[] A, int from, int to, int key) {
  if (from == to) { 
    return -1; 
  }
 
  int mid = (from + to) / 2;
  if (key == A[mid]) {
    return mid;
  }
  // else
  if (key < A[mid]) {
    return binarySearch(A, from, mid, key);
  }
  // else  (key > A[mid])
  return binarySearch(A, mid+1, to, key);
}
The else keywords are optional because of the return statements. One has to keep in mind that the division used to compute mid is integer division (i.e., fractional truncation).

In diagrams, searching for key in the range [from, to): check A[mid], where mid = (from+to)/2. If key == A[mid], return mid. If key < A[mid], search the left half [from, mid), excluding mid. If key > A[mid], search the right half [mid+1, to).

As is the case with linearSearch, the second range parameter, toIndex, is not included in the search range. Our textbook makes, in my opinion, the unfortunate decision to make the second position parameter of its array-based algorithms the rightmost index, so beware.

Binary Search Visitation tree

Lay out the array positions that the algorithm would visit in a binary tree, where the root is the middle of the array and the left and right subtrees are generated by searches of the left and right subarrays, respectively. Here are the binary search visitation trees for arrays of size 7 and 10, respectively (figures omitted; the program below regenerates them).
Note the "left-leaning" aspect of the latter tree. This reflects the fact that our algorithm, when it cannot split the array exactly in half, leaves one more element on the left side than on the right side.
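The omitted tree figures can be regenerated with a short program of our own (not part of the Analysis sources; String.repeat requires Java 11+) that prints each visitation tree sideways, right subtree above the root:

package search;

public class VisitTree {
  // print the positions visited in [from, to) as a sideways tree:
  // right subtree on top, each node indented by its level
  static void print(int from, int to, int level) {
    if (from == to) {
      return;
    }
    int mid = (from + to) / 2;
    print(mid + 1, to, level + 1);   // right subarray
    System.out.println("    ".repeat(level) + mid);
    print(from, mid, level + 1);     // left subarray
  }

  public static void main(String[] args) {
    System.out.println("n = 7:");
    print(0, 7, 0);
    System.out.println("n = 10:");
    print(0, 10, 0);
  }
}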

Algorithmic correctness by induction

The proof uses a form of induction called strong induction, whereby we assume the thing we want to prove to be true for all values (within a suitable range) up to that point. Consider the execution of:
int pos = binarySearch(A, from, to, key) 
We want to say that pos is the position of key within the range if it is present, and -1 if it is not. The proof is by induction on the array size, len = to - from. It is a good idea to run a few examples by hand. Again, keep in mind that computer integer division is truncated. In mathematical terms, the integer quotient equals the floor of the real quotient:
a/b = floor(a/b)
where floor truncates any fractional part.

The base case is when len = 0. This means that to == from and the array is empty. The key cannot be present in the array and the algorithm indicates failure.

Inductively, suppose len ≥ 1 and the claim has been proved for all sizes less than len. Because the array is sorted, the algorithm will work so long as each recursive call searches a strictly smaller range, so that the inductive hypothesis applies. The argument uses regular (non-truncated) division, and so we have to consider even and odd values of len separately. Write:
to = from + len
and compute:
mid = (from + to)/2 = (2*from + len)/2
    = from + len/2,      if len is even
    = from + (len-1)/2,  if len is odd
The left side length is
mid - from = 
    = len/2,      if len is even
    = (len-1)/2,  if len is odd
In both cases
mid - from < len

The right side length is
to - (mid + 1) 
   = (from + len) - mid - 1 
   = len - (mid - from) - 1

if len is even:
   = len - len/2 - 1 = len/2 - 1 
   
if len is odd:
   = len - (len-1)/2 - 1 
   = (len-1) - (len-1)/2 = (len-1)/2
Again, in both cases
to - (mid + 1) < len

Summarizing, we see that both the left and right recursive calls operate on ranges strictly smaller than len, and so the inductive hypothesis applies to each, completing the proof.

Logarithms in analysis

Logarithms, particularly base-2 logarithms, are important because they represent the number of times n things can be halved. In particular, there are roughly log2(n) terms in this sequence:
n, n/2, n/4, ..., 1
The definition of logarithm to the base b is:
x = logb(n)    means   b^x = n, i.e., b^logb(n) = n
The most common bases are 2, e, and 10. Logs of all bases differ by a constant factor:
b^logb(2) = 2
(b^logb(2))^x = b^(logb(2)*x) = 2^x
Using x = log2(n),
b^(logb(2) * log2(n)) = 2^log2(n) = n
This means, according to the definition of log:
logb(n) = logb(2) * log2(n) (think of the 2's as "canceling out"), or, rearranging,
log2(n) = logb(n) / logb(2)
Often the base-e logarithm is written as ln(n) to clarify the base. In computer language library functions, log(n) often means the natural logarithm and base-2 logarithm is usually written with an explicit base. For example, in Java, the function Math.log is the natural logarithm, but we can easily write the base-2 logarithm:
static double log_2(double x) { 
  return Math.log(x) / Math.log(2); 
}
For our purposes, since the base-2 is preeminent, we'll drop the 2 and assume for simplicity that:
log(n) = log2(n)

Integer logarithms

In computational settings in which logarithms appear, they always appear in one of these integer forms:
flr(log(n))  = the largest integer power such that 2^power ≤ n
ceil(log(n)) = the smallest integer power such that n ≤ 2^power
For example,
flr(log(8)) = 3, ceil(log(8)) = 3
flr(log(10)) = 3, ceil(log(10)) = 4
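For int arguments these can be computed without floating point; a sketch using the standard library's Integer.numberOfLeadingZeros (the function names are ours):

// flr(log(n)) for n >= 1: index of the highest set bit of n
static int flrLog(int n) {
  return 31 - Integer.numberOfLeadingZeros(n);
}

// ceil(log(n)) for n >= 1
static int ceilLog(int n) {
  return (n == 1) ? 0 : 32 - Integer.numberOfLeadingZeros(n - 1);
}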

Binary Search Worst Case

Let T(n) = worst case number of comparisons in binary search of an array of size n

From an analysis perspective, when binary search is used with an array of size n > 1, the sizes of the two halves are:
n even => left side n/2, right side n/2 - 1 
n odd  => both sides have size (n-1)/2 = n/2
In the worst case, we would end up consistently going to the left side with n/2 elements. Counting 1 for the middle-element comparison, we get this recurrence equation for the worst-case number of comparisons:
T(1) = 1
T(n) = 1 + T( n/2 ), n > 1
Our claim is that T(n) = O(log(n)). In fact we want to prove:
T(n) ≤ 2 * log(n), n ≥ 2
Look at a comparison of some computed values of T(n) and log(n):
T(2) = 2,  log(2) = 1
T(3) = 2,  log(3) ≈ 1.585
T(4) = 3,  log(4) = 2
T(5) = 3,  log(5) ≈ 2.322
Observe that the multiplier 2 satisfies the inequality we claim for each value.
Proof by induction.
We use so-called strong induction. We have verified that for base cases n = 2, 3, 4, 5 that:
T(n) ≤ 2 * log(n), n ≥ 2
Assume the claim valid for all values up to (but not including) n. Since n/2 < n, this means that we can make the inductive assumption and take this to be true:
T(n/2) ≤ 2 * log(n/2)
Then the proof goes like this:
T(n) = 1 + T(n/2)             (the recurrence) 
     ≤ 1 + 2 * log( n/2 )     (substitute from inductive assumption)
     = 1 + 2 * (log(n) – 1)   (properties of log)
     = 2 * log(n) – 1         (simple algebra)
     ≤ 2 * log(n)             (becoming larger)
The key algebraic step relies on the property:
log(a/b) = log(a) – log(b)
which we are using like this:
log(n/2) = log(n) – log(2) = log(n) – 1
However, because n/2 is truncated division, this last statement is not technically correct when n is odd. For simplicity, we'll ignore this technicality.

Demo program

It's useful to see a programmatic comparison of T(n) and log(n) as can be done with the following program:

TimeBS
package search;
 
public class TimeBS {
  static int T(int n) {
    if (n == 1 ) {
      return 1;
    }
    return 1 + T( n/2 );
  }
 
  // base-2 logarithm
  static double log(double x) {
    return Math.log(x) / Math.log(2);
  }
 
  public static void main(String[] args) {
    for (int n = 2; n <= 32; ++n) {
      int t_val = T(n);
      double log_val = log(n);
      System.out.printf("T(%s)/log(%s)\t%.3f\n", n, n, t_val/log_val);
    }
  }
}
A run yields values like this which indicate that T(n) and log(n) get closer for larger values of n, although not in a uniform sense in that powers of 2 "stand out" in the table.
T(2)/log(2)	2.000
T(3)/log(3)	1.262
T(4)/log(4)	1.500
T(5)/log(5)	1.292
T(6)/log(6)	1.161
T(7)/log(7)	1.069
T(8)/log(8)	1.333
T(9)/log(9)	1.262
T(10)/log(10)	1.204
T(11)/log(11)	1.156
T(12)/log(12)	1.116
T(13)/log(13)	1.081
T(14)/log(14)	1.051
T(15)/log(15)	1.024
T(16)/log(16)	1.250
T(17)/log(17)	1.223
T(18)/log(18)	1.199
T(19)/log(19)	1.177
T(20)/log(20)	1.157
T(21)/log(21)	1.138
T(22)/log(22)	1.121
T(23)/log(23)	1.105
T(24)/log(24)	1.091
T(25)/log(25)	1.077
T(26)/log(26)	1.064
T(27)/log(27)	1.052
T(28)/log(28)	1.040
T(29)/log(29)	1.029
T(30)/log(30)	1.019
T(31)/log(31)	1.009
T(32)/log(32)	1.200

Improvement on the worst-case order constant

In the above sections we proved that
T(n) ≤ 2 * log(n),  n ≥ 2
Can we do better? It is not true that T(n) ≤ log(n) per se, but the proof above can easily be adapted to prove this inequality:
T(n) ≤ log(n) + 1,  n ≥ 2
Thus, up to the extra "+1" term, the constant multiplier is effectively 1, an improvement over the multiplier 2 indicated above.
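For the adapted proof, with base cases such as T(2) = 2 ≤ log(2) + 1 = 2 checked directly, the inductive step would go like this:
T(n) = 1 + T(n/2)             (the recurrence)
     ≤ 1 + log(n/2) + 1       (inductive assumption with the new bound)
     = 1 + log(n) - 1 + 1     (properties of log)
     = log(n) + 1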

Order class hierarchy

The functions which characterize algorithm timing tend to fall into a few common order classes:
O(1)            — constant time
O(log n)        — logarithmic time
O(n)            — linear time
O(n * log n)
O(n^2)          — quadratic time
O(n^2 * log n)
O(b^n)          — each base b generates a distinct order class
These order classes are upwardly inclusive, i.e., if T(n) = O(log(n)), then of course T(n) = O(n). We're usually interested in the "best fit" in the sense of finding the smallest order class to which T(n) belongs. In order to characterize the "best fit" of an order class, we need two other notions:

Lower bound: Ω

We say:
T(n) = Ω(f(n))  (or T(n) is Ω(f(n)))
if there are positive constants C and k, such that
T(n) ≥ C * f(n),  for all n ≥ k

Exact bound: Θ

We say:
T(n) = Θ(f(n))  (or T(n) is Θ(f(n)))
if
T(n) = O(f(n))  and  T(n) = Ω(f(n))
This means that there are positive constants C1, C2 and k, such that
C1 * f(n)  ≤  T(n)  ≤  C2 * f(n),  for all n ≥ k
The Θ concept gives the precise sense to the notion of "order class" because it completely characterizes the behavior of a timing function relative to a reference function up to a constant multiple.

Order Summary

Officially there are three notions: the upper bound O, the lower bound Ω, and the exact bound Θ. Unofficially, the big-O terminology dominates the discussion in algorithmic analysis. Authors commonly use O even when they really mean Θ. If the exact order class is not known, it means that a complete mathematical understanding of the run-time behavior is lacking.

Other algorithmic terminology

Asymptotic dominance of one function by another is expressed by the little-o notation:
T(n) = o(f(n))
This means that for every c (no matter how small), there is a k such that
T(n) ≤ c * f(n), n ≥ k
For the most part this means the following:
lim (n → ∞) T(n)/f(n) = 0
Asymptotic dominance expresses the relationship of the reference functions in the order class hierarchy above:
1      = o(log(n))
log(n) = o(n)
n      = o(n^2)
n^2    = o(2^n)
2^n    = o(3^n)
...
The second of these relations, log(n) = o(n), is proved using L'Hôpital's rule from calculus, substituting a continuous variable x for the integer n:
lim (x → ∞) log2(x)/x 
= lim (x → ∞) log2(e) * ln(x)/x 
= lim (x → ∞) log2(e) * (1/x) / 1      (L'Hôpital)
= lim (x → ∞) log2(e)/x  =  0
Asymptotic equality is written this way:
T(n) ≈ f(n)
and it means:
lim (n → ∞) T(n)/f(n) = 1
The "wavy" equal sign suggests that these two functions are essentially the same for large values. In particular, for a polynomial function we can effectively ignore all but the highest-order term. For example, if
T(n) = 100 * n + 200 * n^2 + 3 * n^3
then
T(n) ≈ 3 * n^3
Unfortunately the Weiss textbook does not define this relation. Asymptotic equality is, in some sense, similar to the exact bound Θ, except that it gives a precise order constant, which is often of interest when you want to compare two timing functions within the same order class. For example, let
W(n) = worst case time for linear search
A(n) = average case time for linear search
Both functions are exactly linear time and we would write:
W(n) = Θ(n)
A(n) = Θ(n)
However, the order constants are different, and this is expressed using ≈:
W(n) ≈ n
A(n) ≈ ½ n

Binary Search average case

The average case timing is more complicated. As in the case of linear search, we assume a successful search and that each of the array positions is equally likely to hold the search key. We are mostly interested in getting some sense of how much better the average case might be, and in proving that it is still logarithmic. Technically we want to prove the lower bound:
Average binary search time(n) = Ω(log(n))   
Combined with the fact that the average time can only be better than the worst-case time, which is O(log(n)), we can then conclude that
Average binary search time(n) = Θ(log(n))   
Additionally we want to get some idea about what the order constant might be.

Counting the total number of comparisons

In order to compute the average number of comparisons, we need a way to compute the total number of comparisons over all nodes in the positional visitation tree. The level of a node is its distance from the root. The root, at level 0, counts for 1 comparison. Both of its children count for 2 comparisons each, etc. Thus,
Total comparisons = Σ over levels i of (number of nodes at level i) * (i + 1)
In general, at level i, if the level is full, there will be 2^i nodes, each contributing (i+1) comparisons.

A binary tree is perfect if every level is full. In general, the binary search position visitation tree will not be perfect, but it can be argued that every level except possibly the last is full. We reproduce the depictions of the binary search visitation trees for arrays of size 7 and 10 (figures omitted): the first, with last level flr(log(7)) = 2, is perfect; the second, with last level flr(log(10)) = 3, has a partially filled last level.

Average number of comparisons for perfect trees

We rely on a known formula which gives the total comparison count for a perfect tree with L levels:
Comparisons(L) 
  = 1*2^0 + 2*2^1 + 3*2^2 + ... + L*2^(L-1)
  = ( L - 1 ) * 2^L + 1 
A perfect visitation tree occurs for the values n = 1, 3, 7, 15, ... which are powers of 2 minus 1. In these cases, the visitation tree has exactly L = log(n+1) levels. Using the above formula, we compute the total number of comparisons as:
  = ( log(n+1) - 1 ) * 2^log(n+1) + 1 
  = ( log(n+1) - 1 ) * (n + 1) + 1
  = n * log(n+1) - n + log(n+1) - 1 + 1
  = n * log(n+1) - n + log(n+1)
Compute the average by dividing by n:
average = log(n+1) - 1 + log(n+1)/n

Average comparisons lower bound

In general, the visitation tree is not perfect. To get a lower bound, simply omit all nodes on the last level. So we compute comparisons for a total of L = flr(log(n)) levels. Using the same formula, we get
total comparisons 
  ≥ Comparisons( flr(log(n)) ) 
  = ( flr(log(n)) - 1 ) * 2^flr(log(n)) + 1 
The flr(log(n)) expression truncates the fractional part of log(n), and so it subtracts away less than 1 from log(n), i.e.,
flr(log(n)) ≥ log(n) - 1
Replacing flr(log(n)) by the smaller value log(n) - 1 (the expression above increases with its argument) and doing the algebra, we get
total comparisons 
  ≥ ( log(n) - 2 ) * 2^(log(n) - 1) + 1
  = log(n) * 2^(log(n) - 1) - 2^log(n) + 1  
  = ½ * n * log(n) - n + 1
Dividing this total by n gives the average, i.e.,
average comparisons 
  ≥ ½ * log(n) - 1 + 1/n 
  > ½ * log(n) - 1 
From this we can say:
The average number of comparisons is Ω(log(n)).
We also know that the average number must be less than the worst case, which we know satisfies:
worst case comparisons ≤ log(n) + 1
Therefore, the average number of comparisons for any n is somewhere between:
½*log(n) - 1  and  1*log(n) + 1
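A quick experiment (a sketch of ours, not part of the Analysis sources) confirms that the empirical average falls within this window; it counts one three-way comparison per visited position, matching the convention used for T(n):

package search;

public class AverageBS {
  static long count;    // comparisons in the current search

  static int search(int[] A, int from, int to, int key) {
    if (from == to) {
      return -1;
    }
    int mid = (from + to) / 2;
    ++count;                            // one three-way key comparison
    if (key == A[mid]) {
      return mid;
    }
    if (key < A[mid]) {
      return search(A, from, mid, key);
    }
    return search(A, mid + 1, to, key);
  }

  public static void main(String[] args) {
    int n = 1 << 20;                    // about one million entries
    int[] A = new int[n];
    for (int i = 0; i < n; ++i) {
      A[i] = i;                         // sorted and distinct
    }
    long total = 0;
    for (int key = 0; key < n; ++key) { // successful search at every position
      count = 0;
      search(A, 0, n, key);
      total += count;
    }
    double avg = (double) total / n;
    double log2n = Math.log(n) / Math.log(2);   // 20 here
    System.out.printf("average = %.3f, window: %.3f .. %.3f\n",
        avg, 0.5 * log2n - 1, log2n + 1);
  }
}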

Binary Search Implementations

First of all, create a simple test program to see what Java does. Create the main class JavaBinarySearch with this content:

JavaBinarySearch
package search;
 
import java.util.Arrays;
import java.util.Random;
 
public class JavaBinarySearch {
 
  public static void main(String[] args) {
 
    Random rand = new Random();
 
    int A[] = new int[20], key;
 
    for (int i = 0; i < A.length; ++i) {
      A[i] = rand.nextInt(A.length * 2); 
    }   
    key = rand.nextInt(A.length * 2); 
 
    System.out.println("A (initial) = " + Arrays.toString(A));
    Arrays.sort(A);
    System.out.println("A (sorted)  = " + Arrays.toString(A));
    System.out.println("key = " + key);
 
    int pos = Arrays.binarySearch(A, key);
    System.out.println("pos = " + pos);
  }
}
When you run this, observe the return value for failed searches as well as successful searches. Our new version should be this:
public static int binarySearch(int[] A, int from, int to, int key) {
  if (from == to) {
    return -from - 1;
  }
 
  int mid = (from + to) / 2;
  if (key == A[mid]) {
    return mid;
  }
  if (key < A[mid]) {
    return binarySearch(A, from, mid, key);
  }
  // else key > A[mid]
  return binarySearch(A, mid+1, to, key);
}
 
public static int binarySearch(int[] A, int key) {
  return binarySearch(A, 0, A.length, key);
}

Failed search value

Of what use is the negative return value? We have modified the return value in case of failure. Before, we wrote simply:
return -1;
However, we can provide better information which indicates the position at which the key should be added. This is done by:
return -from - 1;
We are assured that this value is negative under all circumstances. In this case, when the key is not found, the negative return value ret can be used to compute the correct insert position as:
int insert_pos = -ret - 1;
We can then correctly add key to the array A (maintaining sortedness) if we shift the elements from the insert position one slot to the right to make room. It's good to verify this with an example. Suppose the array is
value:  0  10  20  30  40
index:  0   1   2   3   4
Consider the failed search for key=25. The successive search ranges would be:
(0,5), (3,5), (3,4), (3,3)
The return value would be -4, which is correctly interpreted as an insert position of 3. It's also important to ensure that the "extreme" cases work out correctly: a key smaller than all n elements ends with the range (0,0) and return value -1 (insert position 0), while a key larger than all elements ends with the range (n,n) and return value -n - 1 (insert position n).
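A sketch of ours demonstrating the insert-position arithmetic with Java's own Arrays.binarySearch (the shifting is done with System.arraycopy):

package search;

import java.util.Arrays;

public class InsertDemo {
  public static void main(String[] args) {
    int[] A = {0, 10, 20, 30, 40};
    int key = 25;
    int ret = Arrays.binarySearch(A, key);        // -4: not found
    if (ret < 0) {
      int insert_pos = -ret - 1;                  // 3
      int[] B = new int[A.length + 1];
      System.arraycopy(A, 0, B, 0, insert_pos);   // elements before the slot
      B[insert_pos] = key;
      System.arraycopy(A, insert_pos, B, insert_pos + 1, A.length - insert_pos);
      A = B;
    }
    System.out.println(Arrays.toString(A));       // [0, 10, 20, 25, 30, 40]
  }
}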

Error Checking

We've left out error checking code from binarySearch:
public static int binarySearch(int[] A, int from, int to, int key) {
  if (from < 0 || from > to) {
      throw new IllegalArgumentException();
  }
  //...
}
The problem with introducing the error checking code inside the recursive function is that it is inefficient; only the first invocation needs to check for an invalid range, not all the recursive calls. A better approach is to have the public binarySearch function call a private recursive function like this:
public static int binarySearch(int[] A, int from, int to, int key) {
  if (from < 0 || from > to) {
    throw new IllegalArgumentException();
  }
  return _binarySearch(A, from, to, key);
}
 
private static int _binarySearch(int[] A, int from, int to, int key) {
  if (from == to) { 
    return -from - 1;
  }
 
  int mid = (from + to) / 2;
  if (key == A[mid]) {
    return mid;
  }
  if (key < A[mid]) {
    return _binarySearch(A, from, mid, key);
  }
  // else key > A[mid]
  return _binarySearch(A, mid+1, to, key);
}

Comparable objects

The situation is more complicated than that of linear search for object types, because binary search is based on the ability to make comparisons between objects obj1 and obj2 in order to establish whether obj1 is less than, equal to, or greater than obj2. The ability to achieve this is called comparability, and a class which can compare itself implements the Comparable interface, expressed in generic terms like this:
class MyClass implements Comparable<MyClass>
This generic interface requires the definition of the one public member function:
int compareTo(MyClass other)
This is the case for String, Integer, Double, etc., allowing the member function call obj1.compareTo(obj2) between two MyClass objects. Requiring that only comparable classes be used can be done in two ways:
  1. Require the classes to be comparable at compile time as suggested by the Weiss textbook:
    public static <T extends Comparable<? super T>> int binarySearch(
      T[] A, int from, int to, T key) 
    {
      if (from == to) {
        return - from - 1;
      }
     
      int mid = (from + to) / 2;
     
      int comp = key.compareTo(A[mid]);
     
      if (comp == 0) {
        return mid;
      }
      if (comp < 0) {
        return binarySearch(A, from, mid, key);
      }
      // else comp > 0
      return binarySearch(A, mid+1, to, key);
    }
  2. Require the classes to be comparable at run time. This is what is done in the Java binarySearch:
    public static int binarySearch(Object[] A, int from, int to, Object key) {
     
      if (from == to) {
        return - from - 1;
      }
     
      int mid = (from + to) / 2;
     
      int comp = ((Comparable) key).compareTo(A[mid]);
     
      if (comp == 0) {
        return mid;
      }
      if (comp < 0) {
        return binarySearch(A, from, mid, key);
      }
      // else comp > 0
      return binarySearch(A, mid+1, to, key);
    }
The complex type expression which says "type T is comparable" is this:
<T extends Comparable<? super T>>
It means that "T is comparable to objects of its own type, possibly via a Comparable implementation declared on a superclass."

Using Comparators

In many cases we want to compare objects of a class which is not innately comparable, dictating the manner in which the objects are to be compared. This is achieved by passing an external comparator object, i.e., one which implements this generic interface:
interface Comparator<MyClass>
The requirement of this interface is to implement the function:
int compare(MyClass first, MyClass second)
The Java API for binarySearch supports the ability to specify user-defined comparison by passing a comparator argument:
public static <T> int binarySearch(
        T[] A, int from, int to, T key, Comparator<? super T> c) {
 
  if (from == to) {
    return -from - 1;
  }
 
  int mid = (from + to) / 2;
 
  int comp = c.compare(key, A[mid]);
 
  if (comp == 0) {
    return mid;
  }
  if (comp < 0) {
    return binarySearch(A, from, mid, key, c);
  }
  // else comp > 0
  return binarySearch(A, mid + 1, to, key, c);
}
The parameter type
Comparator<? super T> c
indicates that the comparator may technically be defined on a superclass.

Reviewing the above 3 versions of binary search code, the only differences are in how the key comparison is made:
int comp = key.compareTo(A[mid]);

int comp = ((Comparable) key).compareTo(A[mid]);

int comp = c.compare(key, A[mid]);
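For example, Java's String.CASE_INSENSITIVE_ORDER is a ready-made Comparator<String>; searching with it (on an array sorted by the same comparator) looks like this sketch of ours:

package search;

import java.util.Arrays;

public class ComparatorDemo {
  public static void main(String[] args) {
    String[] A = {"apple", "Banana", "cherry", "Date"};
    // the array must be sorted by the same comparator used to search
    Arrays.sort(A, String.CASE_INSENSITIVE_ORDER);
    int pos = Arrays.binarySearch(A, "CHERRY", String.CASE_INSENSITIVE_ORDER);
    System.out.println("pos = " + pos);           // 2: matches "cherry"
  }
}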

Binary vs. Linear search

The problem with binary search is that, although the search time is much faster, the array must be sorted for it to work. The best algorithms for sorting a random array have a run time of O(n * log(n)). So there is no advantage of binary search over linear search if every search is on a fresh array. Here is how we compare the two algorithms:
                Linear Search    Binary Search
create array:   O(n)             O(n)
prepare array:  -                O(n*log(n))
search array:   O(n)             O(log(n))
If, using a single array, we do log(n) searches, then the Linear Search total time is O(n * log(n)), which breaks even with Binary Search. To put this in perspective, if we have an array with 1 million, or 2^20, entries, then, after sorting, we would need to do roughly 20 searches for Binary Search to break even with Linear Search.


© Robert M. Kline