
Growth of Functions

The growth of a function is determined by its highest-order term: if you add a bunch of terms, the function grows about as fast as the largest term (for large enough input values).

For example, $f(x) = x^2 + 1$ grows as fast as $g(x) = x^2 + 2$ and $h(x) = x^2 + x + 1$, because for large $x$, $x^2$ is much bigger than $1$, $2$, or $x + 1$.

Similarly, constant multiples don't matter that much: $f(x) = x^2$ grows as fast as $g(x) = 2x^2$ and $h(x) = 100x^2$, because for large $x$, multiplying $x^2$ by a constant does not change it "too much" (at least not as much as increasing $x$).

Essentially, we are concerned with the shape of the curve:

[Figure: examples of three lines]

All three of these functions are lines; their exact slopes and $y$-intercepts do not matter.

Caring only about the highest-order term (ignoring constant multiples) corresponds to ignoring differences in hardware, operating system, and so on. If the CPU is twice as fast, for example, the algorithm still behaves the same way, even though it executes faster.

Big-Oh Notation

Let $f$ and $g$ be functions from $\mathbb{Z} \to \mathbb{R}$ or $\mathbb{R} \to \mathbb{R}$. We say $f(x)$ is $O(g(x))$ if there are constants $c > 0$ and $k > 0$ such that $0 \le f(x) \le c \cdot g(x)$ for all $x \ge k$. The constants $c$ and $k$ are called witnesses. We read "$f(x)$ is $O(g(x))$" as "$f(x)$ is big-Oh of $g(x)$". We write $f(x) \in O(g(x))$ or $f(x) = O(g(x))$ (though the former is more technically correct).

Basically, "$f(x)$ is $O(g(x))$" means that, after a certain value of $x$, $f$ is never larger than some constant multiple of $g$:

[Figure: illustration of the definition of big-Oh]

Here are some examples that use big-Oh notation:
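For instance, a typical one (a sketch of the standard argument): to show that $x^2 + 2x + 1$ is $O(x^2)$, note that for all $x \ge 1$,

$$0 \le x^2 + 2x + 1 \le x^2 + 2x^2 + x^2 = 4x^2,$$

so $c = 4$ and $k = 1$ are witnesses.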

Typically, we want the function inside the Oh to be as small and simple as possible. Even though it is true, for example, that $5x^2 + 10x + 3$ is $O(x^3)$, this is not terribly informative. Similarly, $5x^2 + 10x + 3$ is $O(2x^2 + 11x + 3)$, but this is not particularly useful.
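Candidate witnesses can also be sanity-checked numerically. Here is a minimal Python sketch (the helper name is mine, not from any library) checking that $c = 6$ and $k = 11$ witness $5x^2 + 10x + 3$ being $O(x^2)$:

```python
def is_witness_pair(f, g, c, k, x_max=10_000):
    """Check 0 <= f(x) <= c * g(x) for integer x in [k, x_max].

    A finite check like this can refute a candidate witness pair
    (c, k), but it cannot prove it; the proof is the algebra.
    """
    return all(0 <= f(x) <= c * g(x) for x in range(k, x_max + 1))

f = lambda x: 5 * x**2 + 10 * x + 3
g = lambda x: x**2

print(is_witness_pair(f, g, c=6, k=11))  # True on the tested range
print(is_witness_pair(f, g, c=6, k=10))  # False: at x = 10, f(10) = 603 > 6 * 100
```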

Here are some important big-Oh results:
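For instance (a representative sample of the standard results):

- Any polynomial $a_d x^d + \dots + a_1 x + a_0$ is $O(x^d)$.
- $1 + 2 + \dots + n \le n \cdot n$, so $\sum_{i=1}^{n} i$ is $O(n^2)$.
- $n! = 1 \cdot 2 \cdots n \le n \cdot n \cdots n$, so $n!$ is $O(n^n)$.
- Taking logarithms of the previous result, $\log(n!)$ is $O(n \log n)$.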

When using logarithms inside big-Oh notation, the base does not matter. Recall the change-of-base formula: $\log_b n = \frac{\log n}{\log b}$. Therefore, as long as the base $b$ is a constant, $\log_b n$ differs from $\log n$ by only a constant factor.
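A quick numeric illustration (a minimal sketch using Python's standard math module): the ratio of logs in two different bases is the same constant for every $n$:

```python
import math

# log2(n) / log10(n) equals log(10)/log(2) ~ 3.3219 for every n,
# so the two bases differ only by a constant factor.
for n in (10, 1_000, 1_000_000):
    print(n, math.log2(n) / math.log10(n))
# Each line prints ~3.321928..., independent of n.
```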

Here are some common functions, listed from slowest to fastest growth: $$O(1), \; O(\log n), \; O(n), \; O(n \log n), \; O(n^2), \; O(2^n), \; O(n!)$$ Caution: there are infinitely many functions between consecutive entries of this list!
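To see how dramatically these growth rates differ, here is a small sketch that tabulates each function at a few input sizes (plain Python, nothing beyond the standard math module):

```python
import math

funcs = [
    ("1",       lambda n: 1),
    ("log n",   lambda n: round(math.log2(n), 1)),
    ("n",       lambda n: n),
    ("n log n", lambda n: round(n * math.log2(n), 1)),
    ("n^2",     lambda n: n**2),
    ("2^n",     lambda n: 2**n),
    ("n!",      lambda n: math.factorial(n)),
]

for name, f in funcs:
    # Evaluate each growth rate at n = 10, 20, 30.
    print(f"{name:8}", [f(n) for n in (10, 20, 30)])
```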

Big-Omega Notation

As we saw above, big-Oh provides an upper bound for a function. To specify a lower bound, we use big-Omega notation.

Let $f$ and $g$ be functions from $\mathbb{Z} \to \mathbb{R}$ or $\mathbb{R} \to \mathbb{R}$. We say $f(x)$ is $\Omega(g(x))$ if there are constants $c > 0$ and $k > 0$ such that $0 \le c \cdot g(x) \le f(x)$ for all $x \ge k$. The constants $c$ and $k$ are called witnesses. We read "$f(x)$ is $\Omega(g(x))$" as "$f(x)$ is big-Omega of $g(x)$". We write $f(x) \in \Omega(g(x))$ or $f(x) = \Omega(g(x))$ (though the former is more technically correct).

Basically, "$f(x)$ is $\Omega(g(x))$" means that, after a certain value of $x$, $f$ is never smaller than some constant multiple of $g$:

[Figure: illustration of the definition of big-Omega]

Here are some examples that use big-Omega notation:
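For instance, a standard one (a sketch of the usual argument): to show that $\sum_{i=1}^{n} i$ is $\Omega(n^2)$, use the closed form:

$$\sum_{i=1}^{n} i = \frac{n(n+1)}{2} \ge \frac{n^2}{2},$$

so $c = \frac{1}{2}$ and $k = 1$ are witnesses.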

Big-Theta Notation

In the previous example, we showed that $\sum_{i=1}^{n} i = \Omega(n^2)$. Earlier, we also showed that this sum is $O(n^2)$. We have special notation for such situations:

Let $f$ and $g$ be functions from $\mathbb{Z} \to \mathbb{R}$ or $\mathbb{R} \to \mathbb{R}$. We say $f(x)$ is $\Theta(g(x))$ if $f(x)$ is $O(g(x))$ and $f(x)$ is $\Omega(g(x))$. We read "$f(x)$ is $\Theta(g(x))$" as "$f(x)$ is big-Theta of $g(x)$". We write $f(x) \in \Theta(g(x))$ or $f(x) = \Theta(g(x))$ (though the former is more technically correct).
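As a numeric sanity check (a sketch; as before, it can only test finitely many points, not prove the bound), both inequalities for $\sum_{i=1}^{n} i = \Theta(n^2)$ can be checked at once:

```python
def sum_to(n):
    return n * (n + 1) // 2  # sum of 1..n, closed form

# Candidate Theta witnesses: (1/2) * n^2 <= sum_to(n) <= 1 * n^2 for n >= 1.
ok = all(0.5 * n**2 <= sum_to(n) <= n**2 for n in range(1, 10_001))
print(ok)  # True on the tested range
```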

It might be helpful to think of big-Oh/Omega/Theta as follows:
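One informal analogy (a mnemonic, not a definition):

- $f(x)$ is $O(g(x))$: $f$ grows no faster than $g$ (roughly like $f \le g$).
- $f(x)$ is $\Omega(g(x))$: $f$ grows at least as fast as $g$ (roughly like $f \ge g$).
- $f(x)$ is $\Theta(g(x))$: $f$ grows at the same rate as $g$ (roughly like $f = g$).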