Growth of Functions

The growth of a function is determined by the highest order term: if you add a bunch of terms, the function grows about as fast as the largest term (for large enough input values).

For example, $f(x) = x^2+1$ grows as fast as $g(x) = x^2 + 2$ and $h(x) = x^2+x+1$, because for large $x$, $x^2$ is much bigger than $1$, $2$, or $x+1$.

Similarly, constant multiples don't matter much: $f(x)=x^2$ grows as fast as $g(x)=2x^2$ and $h(x)=100x^2$, because multiplying $x^2$ by a constant only scales the curve; it does not change how rapidly the function grows as $x$ increases.
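
As a quick check of both ideas, note that for $x \ge 1$ every lower-order term is at most $x^2$, so $$ x^2 + x + 1 \le x^2 + x^2 + x^2 = 3x^2, $$ meaning $h(x) = x^2 + x + 1$ never exceeds a fixed constant multiple of $x^2$ once $x \ge 1$.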

Essentially, we are concerned with the shape of the curve:

[Figure: examples of three lines]

All three of these functions are linear: their graphs are straight lines, and the exact slope and y-intercept do not matter.

Only caring about the highest order term (without constant multiples) corresponds to ignoring differences in hardware/operating system/etc. If the CPU is twice as fast, for example, the algorithm still behaves the same way, even if it executes faster.

Big-Oh Notation

Let $f$ and $g$ be functions from $\mathbb{Z} \rightarrow \mathbb{R}$ or $\mathbb{R} \rightarrow \mathbb{R}$. We say $f(x)$ is $O(g(x))$ if there are constants $c \gt 0$ and $k \gt 0$ such that $0 \le f(x) \le c \times g(x)$ for all $x \ge k$. The constants $c$ and $k$ are called witnesses. We read "$f(x)$ is $O(g(x))$" as "$f(x)$ is big-Oh of $g(x)$". We write $f(x) \in O(g(x))$ or $f(x) = O(g(x))$ (though the former is more technically correct).

Basically, $f(x)$ is $O(g(x))$ means that, beyond a certain value of $x$, $f$ is never larger than some fixed constant multiple of $g$:

[Figure: explanation of the definition of big-Oh]

Here are some examples that use big-Oh notation:
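
For instance, consider $f(x) = 5x^2 + 10x + 3$. For $x \ge 1$ we have $10x \le 10x^2$ and $3 \le 3x^2$, so $$ 0 \le 5x^2 + 10x + 3 \le 5x^2 + 10x^2 + 3x^2 = 18x^2, $$ and therefore $5x^2 + 10x + 3$ is $O(x^2)$ with witnesses $c = 18$ and $k = 1$.

As a second example, $\sum_{i=1}^n i = 1 + 2 + \cdots + n$ is $O(n^2)$: each of the $n$ terms is at most $n$, so $$ \sum_{i=1}^n i \le \underbrace{n + n + \cdots + n}_{n \text{ terms}} = n^2, $$ giving witnesses $c = 1$ and $k = 1$.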

Typically, we want the function inside the Oh to be as small and simple as possible. Even though it is true, for example, that $5x^2+10x+3$ is $O(x^3)$, this is not terribly informative. Similarly, $5x^2+10x+3$ is $O(2x^2 + 11x + 3)$, but this is not particularly useful.

Here are some important big-Oh results:
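
Some of the most frequently used results: if $f(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0$ is a polynomial of degree $n$, then $f(x)$ is $O(x^n)$. If $f_1(x)$ is $O(g_1(x))$ and $f_2(x)$ is $O(g_2(x))$, then $f_1(x) + f_2(x)$ is $O(\max(|g_1(x)|, |g_2(x)|))$ and $f_1(x) f_2(x)$ is $O(g_1(x) \, g_2(x))$. Also, $n!$ is $O(n^n)$ and $\log(n!)$ is $O(n \log n)$.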

When using logarithms inside big-Oh notation, the base does not matter. Recall the change-of-base formula: $\log_b n = \frac{\log n}{\log b}$. Therefore, as long as the base $b$ is a constant, $\log_b n$ differs from $\log n$ by only the constant factor $\frac{1}{\log b}$.
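
For example, with base $2$: $$ \log_2 n = \frac{\log n}{\log 2} = \frac{1}{\log 2} \cdot \log n, $$ so $\log_2 n$ is $O(\log n)$ (and vice versa), with the witness $c = \frac{1}{\log 2}$ absorbing the change of base.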

Here are some common functions, listed from slowest to fastest growth: $$ O(1), \quad O(\log n), \quad O(n), \quad O(n \log n), \quad O(n^2), \quad O(2^n), \quad O(n!) $$ Caution: there are infinitely many growth rates between consecutive entries of this list!
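
To get a feel for how far apart these classes are, evaluate each at $n = 1024$ (using base-2 logarithms): $\log n = 10$, $n = 1024$, $n \log n = 10{,}240$, $n^2 \approx 10^6$, $2^n \approx 1.8 \times 10^{308}$, and $n!$ is roughly $10^{2640}$.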

Big-Omega Notation

As we saw above, big-Oh provides an upper bound for a function. To specify a lower bound, we use big-Omega notation.

Let $f$ and $g$ be functions from $\mathbb{Z} \rightarrow \mathbb{R}$ or $\mathbb{R} \rightarrow \mathbb{R}$. We say $f(x)$ is $\Omega(g(x))$ if there are constants $c \gt 0$ and $k \gt 0$ such that $0 \le c \times g(x) \le f(x)$ for all $x \ge k$. The constants $c$ and $k$ are called witnesses. We read "$f(x)$ is $\Omega(g(x))$" as "$f(x)$ is big-Omega of $g(x)$". We write $f(x) \in \Omega(g(x))$ or $f(x) = \Omega(g(x))$ (though the former is more technically correct).

Basically, $f(x)$ is $\Omega(g(x))$ means that, beyond a certain value of $x$, $f$ is never smaller than some fixed constant multiple of $g$:

[Figure: explanation of the definition of big-Omega]

Here are some examples that use big-Omega notation:
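
For example, $\sum_{i=1}^n i = 1 + 2 + \cdots + n$ is $\Omega(n^2)$: using the closed form, $$ \sum_{i=1}^n i = \frac{n(n+1)}{2} \ge \frac{n^2}{2} \quad \text{for all } n \ge 1, $$ so $c = \frac{1}{2}$ and $k = 1$ are witnesses.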

Big-Theta Notation

In the previous example, we showed that $\sum_{i=1}^n i = \Omega(n^2)$. Earlier, we also showed that this sum is $O(n^2)$. We have special notation for such situations:

Let $f$ and $g$ be functions from $\mathbb{Z} \rightarrow \mathbb{R}$ or $\mathbb{R} \rightarrow \mathbb{R}$. We say $f(x)$ is $\Theta(g(x))$ if $f(x)$ is $O(g(x))$ and $f(x)$ is $\Omega(g(x))$. We read $f(x)$ is $\Theta(g(x))$ as "$f(x)$ is big-Theta of $g(x)$". We write $f(x) \in \Theta(g(x))$ or $f(x) = \Theta(g(x))$ (though the former is more technically correct).
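
For example, combining the two results above, $\sum_{i=1}^n i$ is $\Theta(n^2)$: it is $O(n^2)$ with witness $c = 1$ and $\Omega(n^2)$ with witness $c = \frac{1}{2}$ (both with $k = 1$).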

It might be helpful to think of big-Oh/Omega/Theta as follows:
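
Roughly speaking, $f(x)$ is $O(g(x))$ says "$f$ grows no faster than $g$" (like $\le$), $f(x)$ is $\Omega(g(x))$ says "$f$ grows at least as fast as $g$" (like $\ge$), and $f(x)$ is $\Theta(g(x))$ says "$f$ grows at the same rate as $g$" (like $=$), where each comparison ignores constant factors and only applies for sufficiently large $x$.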