According to Wikipedia, the computational complexity, or simply the complexity, of an algorithm is the amount of resources required to run it. The amount of required resources varies based on the input size, so complexity is generally expressed as a function of (n), where (n) is the size of the input.
When analyzing an algorithm, the resource in question can be one of two distinct types: time or space. If we are talking about the amount of memory needed to solve a problem, that is space complexity. If the resource is the time needed to run the algorithm, that is time complexity, which is the main focus of this blog post.
Before we start
We explore time complexity, what Big-O notation is, and why we need to understand how it applies to our day-to-day software development lives.
Due to its popularity, we use JavaScript to make this concept easier to understand. With that said, knowledge of JavaScript is not a prerequisite.
Time Complexity
There are three cases to consider when analyzing the time complexity of an algorithm: best-case, average-case, and worst-case. Let's say we have an unsorted list `[4, 9, 2, 6, 3, 5, 8, 1, 7]` and, using linear search, we need to find the index of a value (see the sketch after this list).

- **best-case**: this is the complexity of solving the problem for the best input. In our example, the best case is to search for the value `4`, since this is the first value of the list and it is found in the first iteration.
- **average-case**: this complexity is defined with respect to the distribution of the values in the input data. This example may not be the best, but we could say that the average-case would be when we're searching for some value in the "middle" of the list, like the value `3`.
- **worst-case**: this is the complexity of solving the problem for the worst input of size (n). In our example, the worst case is to search for the value `7`, which is the last element of the list.
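Here is a minimal linear search sketch to make the three cases concrete (the `linearSearch` function is our own illustration, not a built-in):

```javascript
// Returns the index of x in list, or -1 if x is not found.
const linearSearch = (list, x) => {
  for (let i = 0; i < list.length; i++) {
    if (list[i] === x) {
      return i
    }
  }
  return -1
}

let sample = [4, 9, 2, 6, 3, 5, 8, 1, 7]
console.log(linearSearch(sample, 4)) // best case: found on the first iteration -> 0
console.log(linearSearch(sample, 3)) // average case: found near the middle -> 4
console.log(linearSearch(sample, 7)) // worst case: found on the last iteration -> 8
```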
Usually, when describing the time complexity of an algorithm, we are talking about the worst-case. So how do we describe the time complexity of an algorithm? We use a mathematical notation called Big-O.
The Big-O Notation
Big-O is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation.
In computer science, Big-O notation is used to classify algorithms according to how their running time or space requirements grow as the input size grows. This notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.
There’s a lot of math involved in the formal definition of the notation, but informally we can assume that the Big-O notation gives us the algorithm’s approximate run time in the worst case. When using the Big-O notation, we describe the algorithm’s efficiency based on the increasing size of the input data (n). For example, if the input is a string, (n) will be the length of the string. If it is a list, (n) will be the length of the list, and so on.
Time Complexities
We will only be focusing on 4 of the most common time complexities expressed using the Big-O notation:
| Name | Time Complexity |
| --- | --- |
| Constant Time | O(1) |
| Logarithmic Time | O(log n) |
| Exponential Time | O(2^n) |
| Factorial Time | O(n!) |
There are some other time complexities out there which you can check out later.
Constant Time: O(1)
An algorithm is said to be constant time if it is bounded by a value that does not depend on the size of the input. No matter the size of the input data, the running time will always be the same.
```javascript
// Returns the first value of the list, regardless of the list's size.
const getFirstValue = list => {
  return list[0]
}

let sample = ['big', 'fat', 'steak']
console.log(getFirstValue(sample))
// -> 'big'
```
As you can see, the function `getFirstValue` always returns the first value of the list. An algorithm with constant time complexity is excellent since we don't need to worry about the input size.
Logarithmic Time: O(log n)
An algorithm has logarithmic time complexity when it reduces the size of the input data in each step. Algorithms with logarithmic time complexity are commonly found in operations on binary trees or when using binary search. For example, given a sorted list, we can check whether an element exists using binary search:
```javascript
// Returns true if x exists in the sorted list, false otherwise.
const binarySearch = (list, x) => {
  let start = 0,
    end = list.length - 1
  while (start <= end) {
    // Check the middle element and discard the half that cannot contain x.
    let mid = Math.floor((start + end) / 2)
    if (list[mid] === x) {
      return true
    } else if (list[mid] < x) {
      start = mid + 1
    } else {
      end = mid - 1
    }
  }
  return false
}

let sample = [1, 2, 3, 4, 5, 6, 7, 8, 9]
console.log(binarySearch(sample, 3))
// -> true
```
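To see the logarithmic growth concretely, we can count how many times the loop runs. `binarySearchSteps` below is a hypothetical instrumented variant of the same algorithm, written for this post:

```javascript
// Same algorithm as binarySearch, but returns the number of loop iterations.
const binarySearchSteps = (list, x) => {
  let start = 0,
    end = list.length - 1,
    steps = 0
  while (start <= end) {
    steps++
    let mid = Math.floor((start + end) / 2)
    if (list[mid] === x) return steps
    if (list[mid] < x) start = mid + 1
    else end = mid - 1
  }
  return steps
}

// A helper that builds the sorted list [1, 2, ..., n].
const range = n => Array.from({ length: n }, (_, i) => i + 1)

console.log(binarySearchSteps(range(1000), 1000)) // -> about 10 steps
console.log(binarySearchSteps(range(1000000), 1000000)) // -> about 20 steps
```

Making the list a thousand times larger only adds about ten extra steps, which is exactly the behavior O(log n) describes.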
It is important to understand that an algorithm that must access all elements of its input data cannot take logarithmic time, since just reading an input of size (n) already takes on the order of (n) time.
Exponential Time: O(2^n)
When the running time doubles with each addition to the input data set, the algorithm has exponential time complexity. This kind of time complexity is usually seen in brute-force algorithms. A good example of an exponential time algorithm is the recursive calculation of Fibonacci numbers:
```javascript
// Naive recursive Fibonacci: every call with n > 1 spawns two further calls.
const fibonacci = n => {
  if (n <= 1) {
    return n
  }
  return fibonacci(n - 1) + fibonacci(n - 2)
}

console.log(fibonacci(6))
// -> 8
```
As you may have noticed, the time complexity of recursive functions is a little harder to define since it depends on how many times the function is called and the time complexity of a single function call.
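One way to see this is to count the calls directly. `countFibonacciCalls` below is our own illustrative helper that returns how many times `fibonacci` would be invoked for a given input:

```javascript
// Counts the total number of calls the naive fibonacci above would make.
const countFibonacciCalls = n => {
  if (n <= 1) {
    return 1
  }
  return 1 + countFibonacciCalls(n - 1) + countFibonacciCalls(n - 2)
}

console.log(countFibonacciCalls(10)) // -> 177
console.log(countFibonacciCalls(20)) // -> 21891
```

Adding 10 to the input multiplies the number of calls by more than a hundred, which is the exponential growth that O(2^n) captures.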
Factorial Time: O(n!)
An algorithm is said to have factorial time complexity when its running time grows factorially with the size of the input data, for example:
2! = 2 x 1 = 2
3! = 3 x 2 x 1 = 6
4! = 4 x 3 x 2 x 1 = 24
5! = 5 x 4 x 3 x 2 x 1 = 120
6! = 6 x 5 x 4 x 3 x 2 x 1 = 720
7! = 7 x 6 x 5 x 4 x 3 x 2 x 1 = 5040
8! = 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1 = 40320
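As a quick sketch, (n!) itself can be computed with a one-line recursive helper (`factorial` is our own function, not a built-in):

```javascript
// Computes n! recursively: n! = n * (n - 1)!
const factorial = n => (n <= 1 ? 1 : n * factorial(n - 1))

console.log(factorial(8))
// -> 40320
```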
It grows very fast, even for a small input size. Another example of an algorithm with factorial time complexity is Heap's algorithm, which is used for generating all possible permutations of (n) objects:
```javascript
const numPermutation = num => {
  let arr = (num + '').split(''),
    permutations = []

  // Swaps the elements at indices a and b of arr in place.
  const swap = (a, b) => {
    let tmp = arr[a]
    arr[a] = arr[b]
    arr[b] = tmp
  }

  // Heap's algorithm: generates all permutations of the first n elements.
  const generate = n => {
    if (n === 1) {
      permutations.push(arr.join())
    } else {
      for (let i = 0; i !== n; ++i) {
        generate(n - 1)
        swap(n % 2 ? 0 : i, n - 1)
      }
    }
  }

  generate(arr.length)
  return permutations
}

console.log(numPermutation(123))
// -> ['1,2,3', '2,1,3', '3,1,2', '1,3,2', '2,3,1', '3,2,1']
```
Note that the number of permutations grows in a factorial way based on the size of the input data, so we can say the algorithm has factorial time complexity, O(n!).
It is important to note that when analyzing the time complexity of an algorithm with several operations, we need to describe the algorithm based on the largest complexity among all of its operations.
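As an illustration, here is a contrived function of our own that reuses the functions defined earlier in this post; its overall complexity is dominated by its most expensive operation:

```javascript
// O(1) + O(log n) + O(2^n): the exponential term dominates,
// so the function as a whole is O(2^n).
const combined = list => {
  console.log(getFirstValue(list)) // constant time: O(1)
  console.log(binarySearch(list, getFirstValue(list))) // logarithmic time: O(log n)
  console.log(fibonacci(list.length)) // exponential time: O(2^n)
}

combined([1, 2, 3, 4, 5, 6])
// -> 1
// -> true
// -> 8
```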
General Thoughts
Modern languages like JavaScript already provide built-in functions for various algorithms, such as sorting. Still, the day will probably come when you need to implement an algorithm yourself to perform some kind of operation on a certain amount of data. By understanding and learning time complexities, you will come to know the importance of efficiency and be able to find the bottlenecks in your code that should be improved, mainly when working with huge data sets.