The idea behind big O notation

Big O notation is the language we use for talking about how long an algorithm takes to run. It's how we compare the efficiency of different approaches to a problem.

With big O notation we express the runtime in terms of—brace yourself—how quickly it grows relative to the input, as the input gets arbitrarily large.

Let's break that down:

Rule Book:

  1. Worst Case — when we calculate big O, we assume the worst case.
  2. Remove Constants
  3. Different Terms for Inputs
  4. Drop Non-Dominant Terms
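Rule 1 can be illustrated with a simple linear search (a hypothetical `contains` helper, not from the original text). In the best case the target is the first item and we do one step; in the worst case the target is missing and we check all n items. Big O describes that worst case, so this is O(n).

```javascript
// Hypothetical helper illustrating rule 1 (worst case).
// Best case: target is items[0], 1 step.
// Worst case: target is absent, n steps => O(n).
function contains(items, target) {
    for (var i = 0; i < items.length; i++) {
        if (items[i] === target) {
            return true; // best case: found early
        }
    }
    return false; // worst case: scanned every item
}
```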

Remove Constants

Ignore variable assignments and small fixed-size calculations: constant amounts of work don't change how the runtime grows with the input.

function printFirstItemThenFirstHalfThenSayHi100Times(items) {
    console.log(items[0]);

    var middleIndex = Math.floor(items.length / 2);
    var index = 0;

    while (index < middleIndex) {
        console.log(items[index]);
        index++;
    }

    for (var i = 0; i < 100; i++) {
        console.log('hi');
    }
}

// O(1 + n/2 + 100) => O(n)

<aside> ⚠️ Big O ignores constants, but sometimes the constants matter. If we have a script that takes 5 hours to run, an optimization that divides the runtime by 5 might not affect big O, but it still saves you 4 hours of waiting.

</aside>
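Rule 4 (drop non-dominant terms) is the same kind of simplification. In this hypothetical sketch (not from the original text), a single loop is followed by a nested loop: the loops cost n + n² steps, and as n grows the n² term dominates, so the n term is dropped.

```javascript
// Hypothetical function illustrating rule 4.
// First loop: n steps. Nested loops: n * n steps.
// O(n + n^2) => O(n^2) after dropping the non-dominant n term.
function printThenPrintPairs(items) {
    items.forEach(function(item) {          // n steps
        console.log(item);
    });

    items.forEach(function(first) {         // n * n steps
        items.forEach(function(second) {
            console.log(first, second);
        });
    });
}
```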

Different Terms for Inputs

Different inputs should get different variables. Two separate loops over arrays of length n and m run in sequence, so their steps add: O(n + m). If the loops are nested, the steps multiply: O(n * m). In short: add for steps that run in order, multiply for steps that are nested.

function compressBoxesTwice(boxes) {
	boxes.forEach(function(box) {
		console.log(box);
	});

	boxes.forEach(function(box) {
		console.log(box);
	});
}

// O(2n) => O(n)

function compressBoxes(boxes, boxes2) {
	boxes.forEach(function(box) {
		console.log(box);
	});

	boxes2.forEach(function(box) {
		console.log(box);
	});
}

// O(n + m)
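When the loops over the two inputs are nested rather than sequential, the steps multiply instead of add. A hypothetical sketch (the `pairBoxes` name is not from the original text):

```javascript
// Hypothetical example: nested loops over two different inputs.
// The inner loop runs m times for each of the n outer iterations,
// so the total work is n * m steps => O(n * m).
function pairBoxes(boxes, boxes2) {
	boxes.forEach(function(box) {
		boxes2.forEach(function(box2) {
			console.log(box, box2);
		});
	});
}

// O(n * m)
```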