Theoretical runtime analysis of Big O notation
Rajiv Ranjan Singh / February 27, 2022
2 min read
Big O notation denotes the order of growth, i.e. how fast the running time grows with respect to the size of the input.
The code below is written in Go.
func example(n int) int {
    // res is incremented once per iteration, so the loop does a
    // constant amount of work n times.
    res := 0
    for i := 0; i < n; i++ {
        res++
    }
    return res
}
The time complexity of the above code is O(n) because the number of iterations grows with the value of n. For a given n, the theoretical runtime is the same no matter what the input values are; in fact, apart from n there is no other input at all.
Time complexity: $n\times O(1)=O(n)\times O(1)=O(n)$
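To make the linear growth visible, here is a minimal driver (my own illustrative sketch, not part of the original post) that calls example for a few values of n; the returned count equals the number of iterations performed.

package main

import "fmt"

// example is the O(n) function from above: it does one unit of work
// per iteration, so its return value equals the iteration count.
func example(n int) int {
    res := 0
    for i := 0; i < n; i++ {
        res++
    }
    return res
}

func main() {
    // Doubling n doubles the work, which is exactly linear growth.
    for _, n := range []int{1000, 2000, 4000, 8000} {
        fmt.Printf("n = %d -> iterations = %d\n", n, example(n))
    }
}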
Also, a couple of changes will make this run in O(1). Assume that the problem constraints tell you that n <= 10^9.
func example(c int) int {
    res := 0
    // The loop bound is the constant 10^9 given by the problem
    // constraints. The break is checked before counting, so the
    // function still returns c, just as the O(n) version returns n.
    for i := 0; i <= 1000000000; i++ {
        if i >= c {
            break
        }
        res++
    }
    return res
}
For the above code, the loop runs c times on every call, and c is bounded by the constant 10^9, so the number of iterations can never exceed a fixed constant. In other words, the above for loop takes constant time, and the time complexity is O(1). The loop is bounded by constants from the problem statement, so it can be treated as a constant-time operation. We can apply this anywhere, even to quadratic, cubic, and exponential solutions.
Time complexity: $c\times O(1)\le 10^9\times O(1)=O(1)\times O(1)=O(1)$
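As a rough sanity check, here is an illustrative driver (my own sketch; exampleBounded is simply the bounded function above, renamed to keep the snippet self-contained) that times the call for small inputs and for the worst case allowed by the constraint. The worst case is fixed by the constant bound, not by the caller.

package main

import (
    "fmt"
    "time"
)

// exampleBounded mirrors the bounded loop above: whatever c is,
// the iteration count can never exceed the constant bound of 10^9.
func exampleBounded(c int) int {
    res := 0
    for i := 0; i <= 1000000000; i++ {
        if i >= c {
            break
        }
        res++
    }
    return res
}

func main() {
    for _, c := range []int{1000, 1000000, 1000000000} {
        start := time.Now()
        res := exampleBounded(c)
        fmt.Printf("c = %d -> res = %d in %v\n", c, res, time.Since(start))
    }
}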
Basically, we replace every instance of n, or any other given variable, with the maximum possible value stated in the problem constraints. For example, for a graph with N nodes, always assume the graph has the largest possible number of nodes (e.g. 2*10^5) and simply ignore the excess nodes. This simplifies our program from polynomial time to constant time.
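The same reasoning covers a quadratic solution. The sketch below is my own hypothetical example (the n <= 1000 constraint is made up for illustration, not taken from the post): once n is replaced by its maximum, the nested loops are bounded by a fixed constant.

package main

import "fmt"

// countPairs is a hypothetical quadratic routine: it counts ordered
// pairs (i, j) with 0 <= i, j < n. Under an assumed constraint of
// n <= 1000, the nested loops never exceed 1000 * 1000 = 10^6
// iterations, so the same argument bounds the worst case by a constant.
func countPairs(n int) int {
    pairs := 0
    for i := 0; i < n; i++ {
        for j := 0; j < n; j++ {
            pairs++
        }
    }
    return pairs
}

func main() {
    fmt.Println(countPairs(10))   // 100
    fmt.Println(countPairs(1000)) // 1000000, the fixed worst case
}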