Time complexity describes how the running time of an algorithm grows with the size of its input. In simple terms, it is the total time required by the algorithm to process the given input. The problem constraints give you a basic idea about the input size.
Time Complexity hierarchy:
O(1) takes the least time.
O(n!) takes the maximum time.
O(1) < O(log n) < O(sqrt(n)) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(2^n) < O(n!)
O(1) – Constant Time Complexity.
It does not matter how many input elements a problem has; the algorithm takes a constant number of steps to solve it.
Examples of constant time complexity:
If a[1000] contains 1000 elements, it takes the same time to access the 10th element and the 999th element. So array indexing is constant time complexity.
a++; // Constant complexity to calculate this statement.
C=a+b; // Constant complexity to calculate this statement.
Push and pop operations on a stack
Insert and remove operations on a queue
Example C++ code for O(1) complexity:
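A minimal C++ sketch (the variable names and values are illustrative) of statements that take a constant number of steps regardless of input size:

```cpp
#include <iostream>
#include <stack>
using namespace std;

int main() {
    int a[1000];                // array of 1000 elements
    a[10] = 5;                  // accessing any index costs the same: O(1)
    a[999] = 7;                 // O(1)

    int b = 3, c;
    b++;                        // constant complexity for this statement
    c = a[10] + b;              // constant complexity for this statement

    stack<int> s;
    s.push(42);                 // push on a stack: O(1)
    s.pop();                    // pop from a stack: O(1)

    cout << c << endl;
    return 0;
}
```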
O(log n) – Logarithmic Time complexity
A logarithmic algorithm halves the input size at every step; log2 n equals the number of times n must be divided by 2 to get 1.
Let us take an array with an input size of 16 elements, that is, log2 16:
Step 1: 16/2 = 8 becomes the input size.
Step 2: 8/2 = 4 becomes the input size.
Step 3: 4/2 = 2 becomes the input size.
Step 4: 2/2 = 1, and the calculation is complete.
The halving is repeated until we reach the value 1.
Still not clear? See the example below.
Example algorithms for Logarithmic Complexity:
Binary search on a sorted array of 16 elements, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16], where we need to find the key value 16.
Step 1: 16/2 = 8; we need 16, which lies in the right half, so discard the left half: [9, 10, 11, 12, 13, 14, 15, 16]
Step 2: 8/2 = 4; we need 16, which lies in the right half, so discard the left half again: [13, 14, 15, 16]
Step 3: 4/2 = 2; we need 16, which lies in the right half, so discard the left half again: [15, 16]
Step 4: 2/2 = 1; we found the value 16, so return and exit.
So, instead of searching element by element, we found the required value in 4 steps. A sequential search would take 16 steps to reach the last element; binary search on 16 elements finishes in 4 steps because 2^4 = 16, that is, log2 16 = 4.
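A hedged C++ sketch of binary search over the 16 sorted elements above (the function and variable names are chosen for this illustration); each iteration halves the search range, which gives O(log n):

```cpp
#include <iostream>
#include <vector>
using namespace std;

// Returns the index of key in the sorted vector, or -1 if not found.
int binarySearch(const vector<int>& a, int key) {
    int low = 0, high = (int)a.size() - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   // middle of the current range
        if (a[mid] == key) return mid;
        if (a[mid] < key) low = mid + 1;    // key is in the right half
        else high = mid - 1;                // key is in the left half
    }
    return -1;
}

int main() {
    vector<int> a = {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16};
    cout << binarySearch(a, 16) << endl;    // prints 15 (0-based index)
    return 0;
}
```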
O(n) – Linear Complexity
The number of steps, and therefore the time required, grows in direct proportion to the input size. If you want to print a statement n times, that is linear complexity. Linear search is an example of linear time complexity.
Example C++ code for linear complexity:
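A minimal C++ sketch (the values and the linearSearch helper are illustrative assumptions) of a loop that runs n times and of linear search:

```cpp
#include <iostream>
#include <vector>
using namespace std;

// Linear search: scans elements one by one, O(n) in the worst case.
int linearSearch(const vector<int>& a, int key) {
    for (int i = 0; i < (int)a.size(); i++) {
        if (a[i] == key) return i;
    }
    return -1;
}

int main() {
    int n = 10;
    for (int i = 0; i < n; i++) {          // prints a statement n times: O(n)
        cout << "Hello" << endl;
    }

    vector<int> a = {4, 8, 15, 16, 23, 42};
    cout << linearSearch(a, 23) << endl;   // prints 4
    return 0;
}
```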
If n is 10, the loop repeats 10 times.
If n is 1000, the loop repeats 1000 times.
There are still other cases:
A loop that executes 2n times is still O(n) complexity.
A loop that executes n + 5 times is still O(n) complexity.
A loop that executes n/2 times is still O(n) complexity.
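A hedged C++ sketch of these three loop shapes; constant factors and additive constants are dropped in Big-O, so each one is O(n):

```cpp
#include <iostream>
using namespace std;

int main() {
    int n = 100;

    for (int i = 0; i < 2 * n; i++) { }    // runs 2n times   -> O(n)

    for (int i = 0; i < n + 5; i++) { }    // runs n+5 times  -> O(n)

    for (int i = 0; i < n; i += 2) { }     // runs n/2 times  -> O(n)

    cout << "done" << endl;
    return 0;
}
```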
O(n log n) – (n * log n) complexity
Divide and conquer algorithms generally fall under O(n log n) time complexity (a merge sort sketch follows the examples below).
Ex:
Merge Sort Algorithm
Heap Sort Algorithm
Quick Sort Algorithm (Best and Average Case)
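As one illustration, here is a hedged sketch of a simple top-down merge sort: the array is split in half recursively (about log n levels) and each level does O(n) work merging, giving O(n log n):

```cpp
#include <iostream>
#include <vector>
using namespace std;

// Merge the two sorted halves a[l..m] and a[m+1..r].
void merge(vector<int>& a, int l, int m, int r) {
    vector<int> tmp;
    int i = l, j = m + 1;
    while (i <= m && j <= r) tmp.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i <= m) tmp.push_back(a[i++]);
    while (j <= r) tmp.push_back(a[j++]);
    for (int k = 0; k < (int)tmp.size(); k++) a[l + k] = tmp[k];
}

// Recursively sort a[l..r]: about log n levels, O(n) merging per level.
void mergeSort(vector<int>& a, int l, int r) {
    if (l >= r) return;
    int m = l + (r - l) / 2;
    mergeSort(a, l, m);
    mergeSort(a, m + 1, r);
    merge(a, l, m, r);
}

int main() {
    vector<int> a = {5, 2, 9, 1, 7};
    mergeSort(a, 0, (int)a.size() - 1);
    for (int x : a) cout << x << ' ';      // prints 1 2 5 7 9
    cout << endl;
    return 0;
}
```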
O(n^2) – Quadratic time complexity
If we use a nested loop, that is, a loop within another loop, we get quadratic complexity. The outer loop runs n times, and for each outer iteration the inner loop runs n times, so the inner statement executes n * n times, that is O(n^2).
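A minimal C++ sketch of such a nested loop (n and the counter are illustrative):

```cpp
#include <iostream>
using namespace std;

int main() {
    int n = 5, count = 0;
    for (int i = 0; i < n; i++) {          // outer loop: n iterations
        for (int j = 0; j < n; j++) {      // inner loop: n iterations each time
            count++;                        // executes n * n times in total
        }
    }
    cout << count << endl;                  // prints 25 -> O(n^2)
    return 0;
}
```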
See other cases: the code below contains parts with n, n^2 and n running times, but it is still O(n^2) complexity, because the largest term dominates.
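A hedged sketch (the exact statements are assumed for illustration) of code with an O(n) loop, an O(n^2) nested loop, and another O(n) loop; the total is n^2 + 2n steps, which is O(n^2):

```cpp
#include <iostream>
using namespace std;

int main() {
    int n = 100;
    long long steps = 0;

    for (int i = 0; i < n; i++) steps++;            // n steps

    for (int i = 0; i < n; i++)                     // n * n steps
        for (int j = 0; j < n; j++) steps++;

    for (int i = 0; i < n; i++) steps++;            // n steps

    // Total: n^2 + 2n steps, which is O(n^2) because n^2 dominates.
    cout << steps << endl;
    return 0;
}
```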
Ex:
Bubble Sort Algorithm
Selection Sort Algorithm
Insertion Sort Algorithm
Quick Sort Worst Case complexity is O(n^2)
In some cases, the time complexity depends on multiple variables. The following code contains two different variables, n and m, so the complexity is O(n * m).
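A minimal sketch (the values of n and m are illustrative) of a loop nest over two independent sizes; the body runs n * m times:

```cpp
#include <iostream>
using namespace std;

int main() {
    int n = 4, m = 6, count = 0;
    for (int i = 0; i < n; i++) {          // n iterations
        for (int j = 0; j < m; j++) {      // m iterations per outer step
            count++;                        // executes n * m times
        }
    }
    cout << count << endl;                  // prints 24 -> O(n * m)
    return 0;
}
```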
O(n^3) – Cubic complexity
In a quadratic algorithm we used two nested loops; in a cubic algorithm we use three nested loops.
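A hedged sketch of three nested loops; the innermost statement executes n * n * n times, giving O(n^3):

```cpp
#include <iostream>
using namespace std;

int main() {
    int n = 5;
    long long count = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++)
                count++;                    // executes n * n * n times
    cout << count << endl;                  // prints 125 -> O(n^3)
    return 0;
}
```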
O(1), O(log n), O(n), O(n log n), O(n^2) and O(n^3) are all polynomial (or better) time complexities.
O(2^n) – Exponential complexity
The algorithm must run through all possible subsets of the given input elements.
Ex:
{A, B, C} – the number of possible subsets is 2^3 = 8:
{ }, {A}, {B}, {C}, {A,B}, {A, C}, {B,C}, {A,B,C}
{1, 2, 3, 4} – the number of possible subsets is 2^4 = 16:
{ }, {1}, {2},{3},{4},{1,2},{1,3},{1,4},{2,3},{2,4},{3,4},{1,2,3},{1,2,4},{1,3,4},{2,3,4},{1,2,3,4}
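A hedged C++ sketch that enumerates all 2^n subsets of a small input set using bitmasks (the set {A, B, C} is taken from the example above):

```cpp
#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<char> items = {'A', 'B', 'C'};
    int n = (int)items.size();

    // Each bitmask from 0 to 2^n - 1 selects one subset.
    for (int mask = 0; mask < (1 << n); mask++) {
        cout << "{ ";
        for (int i = 0; i < n; i++)
            if (mask & (1 << i)) cout << items[i] << ' ';
        cout << "}" << endl;
    }
    // 2^3 = 8 subsets are printed, so the running time is O(2^n).
    return 0;
}
```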
O(n!) – Factorial complexity
All permutations of input elements.
If the given input is {1, 2, 3}, the permutations are: (1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,1,2), (3,2,1).
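A hedged C++ sketch using std::next_permutation to list all n! orderings of the example input {1, 2, 3}:

```cpp
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

int main() {
    vector<int> a = {1, 2, 3};              // must start sorted to cycle through all orderings
    do {
        for (int x : a) cout << x << ' ';
        cout << endl;
    } while (next_permutation(a.begin(), a.end()));
    // 3! = 6 permutations are printed, so the running time is O(n!).
    return 0;
}
```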
On competitive programming sites such as CodeChef, HackerRank and Codeforces, the maximum execution time is typically around 1 second.
Within that short duration, if the input has fewer than about 10 elements, we can choose non-polynomial algorithms.
If the input size is much bigger, we have no choice but to solve the problem in O(1), O(n) or O(n log n).
If you use an algorithm with O(n^2) or worse, you will face a Time Limit Exceeded error.
The following table gives a basic idea of the relation between input size and an acceptable time complexity. If you have a big input of 5000 elements and you write your code with O(n^3) complexity, it is of no use: you will get a Time Limit Exceeded error even though your logic is correct.
Input Size (n) | Time Complexity |
--- | --- |
If less than 11 input elements | O(n!) |
If less than 24 input elements | O(2^n) |
If less than 300 input elements | O(n^3) |
If less than 5000 input elements | O(n^2) |
If less than 10^6 input elements | O(n log n) |
If less than 10^7 input elements | O(n) |
If less than 2 * 10^7 input elements | O(log n) |
It is better to look for optimized algorithms; there are usually several different solutions to the same problem.