Big-O
In computer science, Big-O notation is a way to describe the performance or complexity of an algorithm. It provides an upper bound on the growth rate of an algorithm's running time (or space usage) as a function of the input size.
The 'O' in Big-O stands for 'order of', and it is most often used to describe the worst-case scenario. When we say an algorithm is O(f(n)), where f(n) is a function of the input size n, we are expressing an upper limit on the algorithm's growth rate.
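For example, an algorithm that performs 3n^2 + 5n + 7 basic operations on an input of size n is O(n^2): for large n the n^2 term dominates, so constant factors and lower-order terms are dropped.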
Here are some common Big-O notations and their meanings:
- O(1): Constant time complexity. The algorithm's runtime or space usage is constant, regardless of the input size. Accessing an element in an array by index, for example, is typically O(1).
- O(log n): Logarithmic time complexity. Common in algorithms that divide the problem in half at each step, like binary search (see the sketch after this list). Efficient operations on balanced search trees also typically have O(log n) complexity.
- O(n): Linear time complexity. The algorithm's performance grows linearly with the size of the input. Examples include iterating through an array or list.
- O(n log n): Linearithmic time complexity. Common in efficient sorting algorithms like merge sort and heap sort.
- O(n^2): Quadratic time complexity. Performance is proportional to the square of the size of the input. Often seen in nested loops.
- O(n!): Factorial time complexity. Arises in algorithms that solve a problem of size n by solving n subproblems of size n−1, and so on, such as brute-force generation of all permutations. Extremely inefficient and impractical for all but very small inputs.
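As a concrete illustration of the O(log n) case above, here is a minimal binary search sketch in Python (the function and variable names are illustrative, not taken from any particular library):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.

    Each iteration halves the remaining search range, so the loop runs
    at most about log2(n) times for an input of length n, i.e. O(log n).
    """
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```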
The image below shows a graphical comparison of running time for the different Big-O complexity classes.
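To get a feel for the difference: for n = 1,000, an O(log n) algorithm performs on the order of 10 steps, O(n) about a thousand, O(n log n) about ten thousand, O(n^2) a million, and O(n!) an astronomically large number.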

The image below shows code examples illustrating the different Big-O complexity classes.
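For comparison in code form, here is a minimal Python sketch (the function names and list-based inputs are illustrative assumptions) showing typical shapes of O(1), O(n), O(n^2), and O(n log n) code:

```python
def get_first(items):
    # O(1): a single indexed access, independent of len(items).
    return items[0]

def total(items):
    # O(n): one pass over the input.
    result = 0
    for value in items:
        result += value
    return result

def has_duplicate(items):
    # O(n^2): a nested loop compares every pair of elements.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def sort_items(items):
    # O(n log n): Python's built-in sort (Timsort) runs in
    # O(n log n) time in the worst case.
    return sorted(items)
```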
