Matt McCarley

March 19, 2019

What is Big O Notation, and Why is it Useful?

Big O notation can appear more confusing than it actually is. At its simplest, it is a way to represent the relative complexity of algorithms.

Similar Purposes

The notation is useful when comparing algorithms used for similar purposes. Seeing the Big O of multiplication algorithms would allow you to compare their relative complexity. It wouldn’t be useful to see the Big O of a multiplication algorithm and an addition algorithm side-by-side.

Most Complex Portion

Big O reduces an algorithm to the most significant portion of complexity. If an algorithm could be represented as n^2 + n, as the inputs became large, the + n would become meaningless in the overall complexity and could be dropped. The same holds true for n^3 + n^2 + n, as this would be simplified to n^3.
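To make this concrete, here is a hypothetical sketch: a function that does a nested pass over its input (roughly n^2 operations) followed by a single pass (n more operations). In Big O terms the n is dropped and the whole thing is simply O(n^2).

const countOperations = items => {
  let operations = 0;
  const length = items.length;

  // Nested loops: n * n = n^2 operations.
  for (let i = 0; i < length; i++) {
    for (let j = 0; j < length; j++) {
      operations++;
    }
  }

  // One more pass: n additional operations, insignificant for large n.
  for (let i = 0; i < length; i++) {
    operations++;
  }

  return operations; // n^2 + n, which Big O reports as O(n^2)
};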

Worst-Case Scenario

Big O also assumes the worst-case scenario for an algorithm. Suppose an algorithm had to check each item in an input array and would return when it found what it was looking for. This could be represented as O(n), meaning that the number of operations would be equal to the number of inputs (items in the array). It is possible that the algorithm would return early, but in Big O, we assume that it would have to check each item in the array (the worst-case scenario).
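As a small sketch of that idea, the hypothetical search below can return after a single comparison or only after checking every item; Big O describes the latter.

// Linear search: it may return early, but in the worst case (the item is
// missing or in the last position) it checks all n items, so it is O(n).
const contains = (items, target) => {
  for (const item of items) {
    if (item === target) {
      return true; // best case: found immediately
    }
  }
  return false; // worst case: every item was checked
};

contains([1, 2, 3], 1); // returns after one comparison
contains([1, 2, 3], 9); // checks all three items -- the case Big O describes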

Machine and Instance Independent

These models are useful for comparison when we are talking about algorithms in a general way; we don't have to actually run the algorithms with test inputs on a real machine. That matters because we often don't know the specifics of where the code will run, yet we still need to be able to talk about efficiency and complexity.

Not a Speed Comparison

Big O does not allow you to compare the speed of two algorithms. Rather, it describes how the number of operations grows as the number of inputs increases.
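For example (a deliberately contrived comparison), both functions below are O(n), even though the second does more work per item and would be slower in practice.

// Both functions are O(n): the number of operations grows linearly with
// items.length, even though doubleAndSum does extra work for every item.
const sum = items => {
  let total = 0;
  for (const item of items) {
    total += item;
  }
  return total;
};

const doubleAndSum = items => {
  let total = 0;
  for (const item of items) {
    // Extra constant work per item changes the speed, not the Big O.
    const doubled = item * 2;
    total += doubled;
  }
  return total;
};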

Examples

O(1): Constant

When the number of operations is the same regardless of the size of the input.

A good example of this would be accessing the first item in an array.

const getFirst = items => items[0]; 

O(N): Linear

When the number of operations is proportional to the size of the input.

A good example would be having to check each item in an array.

const findItem = (items, itemToFind) => {
  // A return inside a forEach callback would not exit the outer function,
  // so use a plain loop to stop as soon as the item is found.
  for (const item of items) {
    if (item === itemToFind) {
      return item;
    }
  }
  return 'did not find item';
};

O(N^2): Quadratic

When the number of operations is the square of the number of inputs.

A good example is any time where, for every element, you are doing something with every other element. Any time you have nested loops that each run over the whole input, you are dealing with quadratic complexity. One example would be building a square matrix.

const createSquareMatrix = items => {
  const length = items.length;
  const matrix = [];

  for (let i = 0; i < length; i++) {
    matrix[i] = [];

    for (let j = 0; j < length; j++) {
      matrix[i].push(items[j]);
    }
  }
  return matrix;
};

O(2^N): Exponential

When the number of operations doubles with each increment of the input.

Exponential complexity often shows up in naive recursive solutions where, for each input, the algorithm calls itself more than once to solve smaller subproblems. Some recursive Fibonacci implementations work this way.

function fibonacci(num) {
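  // Each call makes two more recursive calls, so the number of calls
  // roughly doubles every time num grows by one -- hence O(2^n).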
  if (num <= 1) {
    return num;
  }
  return fibonacci(num - 2) + fibonacci(num - 1);
}

O(log N): Logarithmic

This is characteristic of the most efficient searching algorithms. The number of operations grows quickly for small inputs and then flattens out as the input gets larger, because each step cuts the remaining work in half.

The most common example is binary search, often implemented with a binary search tree. For a good explanation of a JavaScript binary search tree implementation, see this article.
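As a minimal sketch of logarithmic behavior (using a sorted array rather than a tree), binary search halves the remaining range with each comparison, so it needs roughly log2(n) steps in the worst case.

// Binary search on a sorted array: each comparison cuts the remaining
// range in half, so the worst case is about log2(n) comparisons -- O(log n).
const binarySearch = (sortedItems, target) => {
  let low = 0;
  let high = sortedItems.length - 1;

  while (low <= high) {
    const mid = Math.floor((low + high) / 2);

    if (sortedItems[mid] === target) {
      return mid;
    } else if (sortedItems[mid] < target) {
      low = mid + 1;
    } else {
      high = mid - 1;
    }
  }
  return -1; // not found
};

binarySearch([1, 3, 5, 7, 9, 11], 7); // 3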

O(N!): Factorial

The number of operations grows factorially with the size of the input. This happens when you need to do something with all the possible permutations of a set.

The most common example is the Traveling Salesman Problem, which asks the following question: “Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the origin city?”
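As a rough sketch, the brute-force approach to problems like this starts by generating every permutation of the input. An array of n items has n! permutations, so the work grows factorially.

// Generates every permutation of an array. There are n! of them, so both
// the output size and the number of operations grow factorially -- O(n!).
const permutations = items => {
  if (items.length <= 1) {
    return [items];
  }

  const result = [];
  for (let i = 0; i < items.length; i++) {
    const rest = [...items.slice(0, i), ...items.slice(i + 1)];
    for (const perm of permutations(rest)) {
      result.push([items[i], ...perm]);
    }
  }
  return result;
};

permutations([1, 2, 3]).length; // 6, which is 3!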

Within Reach

As you can see, at least a basic, functional understanding of Big O notation is within reach. In the simplest terms, Big O notation is a way to represent the relative complexity of algorithms.