Time and Space Complexity in Algorithms: A Beginner's Guide

Akash Jain
Jan 2, 2021 · 3 min read

In programming, a solution to a problem is also called an algorithm. There is a textbook definition of an algorithm that you can read here, but informally an algorithm is just a way to solve a problem. There can be multiple algorithms that solve the same problem, and we have to check whether an algorithm is efficient enough before using it in a program. So how do we measure which algorithm is the best out of the many available?

Algorithm efficiency is generally calculated using two measures: Time Complexity and Space Complexity. We will look at both of these in their dedicated sections below, but first we should talk about the notation that makes it easier for us to express these complexities.

Multiple algorithms with different implementations can solve a single problem.

Asymptotic Notations

Asymptotic notation is a way to describe an algorithm's time complexity for its three input cases: best, worst, and average. What do I mean by this? Well, the same algorithm behaves differently depending on how the given input is arranged. For example, in bubble sort, data that is already sorted gives the best running time, while data sorted in reverse order gives the worst.
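To make the bubble sort example concrete, here is a minimal sketch of a bubble sort with an early-exit flag (the method name and details are my own illustration, not from the original article). On already sorted input the inner loop never swaps anything, so the method stops after a single pass (the best case); reverse-sorted input forces every pass (the worst case).

// A minimal bubble sort sketch (illustrative, not from the article).
// On already sorted input the "swapped" flag stays false and we exit after
// one pass (best case). Reverse-sorted input forces all n-1 passes (worst case).
public static void bubbleSort(int[] arr) {
    for (int i = 0; i < arr.length - 1; i++) {
        boolean swapped = false;
        for (int j = 0; j < arr.length - 1 - i; j++) {
            if (arr[j] > arr[j + 1]) {
                int temp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
                swapped = true;
            }
        }
        if (!swapped) {
            break; // no swaps in this pass: the array is already sorted
        }
    }
}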

Big-O curves for different time complexities.

Algorithms behave differently according to the input they are given, so we generally analyze an algorithm for its worst case; we are not that interested in how it performs in its ideal state. Each case has a standard symbolic representation: Omega (Ω) for the best case, Theta (Θ) for the average case, and Big-O (O) for the worst case. Big-O is the one we talk about most, because it represents the worst case and therefore tells us how an algorithm will behave in the worst real-world scenarios (which happen regularly with a software product).

Space and Time Complexity

Time complexity is a measure of how much time an algorithm takes to solve a problem given an input of size n. Here, time is not the normal clock time that we are used to; the unit is the execution of one instruction, so we are really counting how many instructions the CPU must execute to solve the problem. The less time an algorithm takes, the better, because it is more efficient and cheaper in terms of CPU cycles. If an algorithm takes around n units of time to process n inputs in the worst case, then its time complexity is O(n). The diagram in the section above shows the different time complexities an algorithm can have.
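As a rough sketch of the "counting instructions" idea (the method names and counter variable are my own illustration, not from the article), the two methods below count how many times their innermost statement runs for an input of size n: the first grows linearly with n, the second quadratically.

// Illustrative sketch: count how many "units of work" each method performs
// for an input of size n. linearWork runs its loop body n times (O(n)),
// while quadraticWork runs its inner body n * n times (O(n^2)).
public static long linearWork(int n) {
    long operations = 0;
    for (int i = 0; i < n; i++) {
        operations++; // one unit of work per element
    }
    return operations;
}

public static long quadraticWork(int n) {
    long operations = 0;
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            operations++; // n units of work for each of the n outer iterations
        }
    }
    return operations;
}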

Space complexity, on the other hand, is a measure of how much space our algorithm will take in computer memory. Memory is required for an algorithm to store the current state of the program and to keep track of the changes it makes.
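For contrast with the O(1) example in the next section, here is a small sketch (my own illustration, not from the article) of a method whose space complexity is O(n): reversing an array into a new array allocates extra memory proportional to the input size.

// Illustrative sketch of O(n) space: the reversed copy grows with the input,
// so the extra memory used is proportional to arr.length.
public static int[] reversedCopy(int[] arr) {
    int[] result = new int[arr.length]; // extra array of size n
    for (int i = 0; i < arr.length; i++) {
        result[i] = arr[arr.length - 1 - i];
    }
    return result;
}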

Example of Time and Space Complexity

public int linearSearch(String[] arr, String element) {
    // Scan the array from start to end, one element at a time.
    for (int i = 0; i < arr.length; i++) {
        if (arr[i].equals(element)) {
            return i; // found: return the index of the match
        }
    }
    return -1; // not found after checking all n elements
}

In the case above, the linearSearch method has a time complexity of O(n) and a space complexity of O(1). In the worst case, the for loop runs to the end of the array, which has n elements (the input size), so the time is O(n). The space is O(1) because we only store the single variable ‘i’; O(1) means the number of variables is fixed and does not depend on the size of the input.
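As a quick usage sketch (the sample data is hypothetical and assumes we call linearSearch from inside the same class), a match at index 0 returns immediately, which is the best case, while searching for a missing element forces the loop to check all n entries, which is the worst case.

// Hypothetical usage of linearSearch (sample data is illustrative).
String[] names = {"ant", "bee", "cat", "dog"};
int best = linearSearch(names, "ant");  // match at index 0: the loop runs only once
int worst = linearSearch(names, "owl"); // no match: the loop checks all 4 elements and returns -1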

Conclusion

In the end, space and time complexity is a research topic in itself and could be studied for years; the aim of this blog is only to give its readers a basic understanding of the topic.
