How do you calculate time efficiency of an algorithm?

Computational complexity

Computational complexity, or simply the complexity of an algorithm, is a measure of the amount of time and/or space required by an algorithm for an input of a given size.

The analysis of algorithms is the process of finding the computational complexity of algorithms. 

Usually, this involves determining a function that relates the length of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity). An algorithm is said to be efficient when this function's values are small, or grow slowly compared to the growth in the size of the input. Below, we explain how to calculate the complexity of an algorithm and how to find its time complexity, using examples.


Asymptotic Notations

Asymptotic notations are mathematical notations used to describe the running time of an algorithm as the input tends towards a particular or limiting value. There are mainly three asymptotic notations: Theta notation, Omega notation and Big-O notation. They are used to determine the time complexity of an algorithm.

Theta Notation (Θ-notation) - average case

Theta notation bounds a function from above and below. Since it represents both the upper and the lower bound of the running time of an algorithm, it is commonly used to describe the average case complexity of an algorithm.

Omega Notation (Ω-notation) - best case

Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides the best case complexity of an algorithm. For any value of n, the minimum time required by the algorithm is given by Ω(f(n)).

Big-O Notation (O-notation) - worst case

Big-O notation represents the upper bound of the running time of an algorithm. Thus, it gives the worst case complexity of an algorithm. It is the most widely used notation, because in practice we are usually most interested in the worst case scenario. For any value of n, the running time of the algorithm does not exceed the bound given by O(f(n)).

Developers typically focus on the worst case scenario (Big-O) because, when calculating the time complexity of an algorithm, you cannot count on it hitting the best or even the average case. Big-O lets you make analytical statements such as, "well, in the worst case scenario, my algorithm will scale this quickly".
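To make the three notations concrete, here is a minimal Python sketch of a plain linear search (the function name is illustrative, not from the article): the best case finds the target at the first position, the worst case scans the whole list, and the average case sits in between.

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if it is not present."""
    for i, value in enumerate(items):
        if value == target:
            return i   # best case: target at index 0 -> Omega(1)
    return -1          # worst case: all n items inspected -> O(n)

# Average case: about n/2 comparisons for a random position, i.e. Theta(n).
print(linear_search([4, 8, 15, 16, 23, 42], 16))  # prints 3
```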

Common running times


The following chart shows how the execution time depends on the amount of input data for different types of running times:


[Chart: execution time versus amount of input data for the common running times described below]

Below are short descriptions of the most common running times, with examples of algorithms; they are marked on the right side of the chart and are sorted from most efficient to least efficient.

Constant O(1)

A constant-time algorithm is one that takes the same amount of time, regardless of its input. Examples:

  • Given two numbers, report the sum.
  • Given a key-value hash table, return the value for a key.
  • Given a list of numbers, report the result of adding the first element to itself 1,000,000 times.
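For instance, the hash-table lookup and the fixed-length loop from the examples can be sketched like this (a minimal Python illustration; the helper names are mine, not from the article):

```python
def get_value(table: dict, key):
    # Average-case O(1): a dict lookup hashes the key instead of scanning the table.
    return table[key]

def add_first_element_repeatedly(numbers: list) -> int:
    # Still O(1): the loop always runs 1,000,000 times, regardless of len(numbers).
    total = 0
    for _ in range(1_000_000):
        total += numbers[0]
    return total
```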

Logarithmic O(log n)

A logarithmic-time algorithm is one that requires a number of steps proportional to log(n). In most cases we use 2 as the base of the log, but the base doesn't actually matter, because changing it only changes the result by a constant factor, which we ignore. Since we use base 2, we can rephrase this as follows: every time the size of the input doubles, the algorithm performs only one more step.

Examples:

  • Binary search.
  • Searching a balanced tree data structure.
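A minimal binary search sketch (assuming the input list is already sorted): each comparison halves the remaining range, so at most about log2(n) + 1 comparisons are needed.

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent. Runs in O(log n)."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1    # discard the left half
        else:
            high = mid - 1   # discard the right half
    return -1

print(binary_search([1, 3, 5, 8, 13, 21], 8))  # prints 3
```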

Linear O(n)

An algorithm is said to take linear time, or O(n) time, if its time complexity is O(n). Informally, this means that the running time increases at most linearly with the size of the input. More precisely, this means that there is a constant c such that the running time is at most cn for every input of size n. Example:

  • A procedure that adds up all elements of a list requires time proportional to the length of the list, provided that each addition takes constant time, or at least time bounded by a constant.
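A short sketch of that procedure: the loop body executes once per element, so the running time grows in direct proportion to the length of the list.

```python
def sum_list(numbers):
    """Add up all elements of a list: one pass over n items -> O(n)."""
    total = 0
    for value in numbers:   # executes exactly len(numbers) times
        total += value
    return total
```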

Linearithmic O(n log n)

Linearithmic algorithms are capable of good performance with very large data sets. Some examples of linearithmic algorithms are:

  • Heap sort
  • Merge sort
  • Quick sort (linearithmic on average; its worst case is quadratic)
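A minimal merge sort sketch: the list is halved about log2(n) times, and each level of merging does linear work, which gives O(n log n) overall.

```python
def merge_sort(items):
    """Sort a list in O(n log n) time by splitting it in half and merging the results."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])     # about log n levels of splitting
    right = merge_sort(items[mid:])
    # Merge the two sorted halves: linear work at each level.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))  # prints [1, 2, 5, 7, 9]
```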

Quadratic O(n^2)

Quadratic time complexity describes an algorithm whose performance is directly proportional to the square of the size of the input data set (think of linear, but squared). In our programs, this time complexity occurs whenever we nest one iteration over the data set inside another.
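A typical quadratic shape, sketched with a hypothetical duplicate-finding helper: the inner loop runs inside the outer loop, so n items lead to roughly n^2/2 comparisons in the worst case.

```python
def contains_duplicate(items):
    """Return True if any value appears twice. Nested loops over the data -> O(n^2)."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):   # the inner loop runs up to n - 1 times
            if items[i] == items[j]:
                return True
    return False
```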

Polynomial O(n^C)

An algorithm is said to be of polynomial time if its running time is upper bounded by a polynomial expression in the size of the input for the algorithm, i.e., f(n) = O(n^C) for some positive constant C.

Problems for which a deterministic polynomial time algorithm exists belong to the complexity class P, which is central in the field of computational complexity theory. Cobham's thesis states that polynomial time is a synonym for "tractable", "feasible", "efficient", or "fast". Examples:

  • The selection sort algorithm on n integers performs An^2 operations for some constant A (sketched below). Thus it runs in time O(n^2) and is a polynomial time algorithm.
  • All the basic arithmetic operations (addition, subtraction, multiplication, division, and comparison) can be done in polynomial time.
  • Maximum matchings in graphs can be found in polynomial time.
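To make the selection sort example concrete, here is a minimal sketch: for each of the n positions it scans the unsorted remainder for the minimum, performing about n^2/2 comparisons in total, i.e. An^2 operations for a constant A.

```python
def selection_sort(items):
    """Sort a list in place with roughly n^2/2 comparisons -> O(n^2), polynomial time."""
    n = len(items)
    for i in range(n):
        smallest = i
        for j in range(i + 1, n):        # scan the unsorted tail for the minimum
            if items[j] < items[smallest]:
                smallest = j
        items[i], items[smallest] = items[smallest], items[i]
    return items
```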

Exponential O(2^n)

Exponential (base 2) running time means that the amount of work performed by an algorithm doubles each time the input grows by one unit. Examples:

  • Power set: finding all the subsets of a set.
  • Computing Fibonacci numbers with naive recursion (sketched below).
  • Travelling salesman problem using dynamic programming.
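The naive recursive Fibonacci mentioned above can be sketched as follows: each call spawns two further calls, so the number of calls roughly doubles with every increase in n.

```python
def fib(n):
    """Naive recursive Fibonacci: the call tree grows roughly like 2^n."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)   # two recursive calls -> exponential blow-up

# fib(30) already triggers over a million calls; fib(50) is impractical this way.
```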

Factorial O(n!)

An example of an algorithm that runs in factorial time is bogosort, a notoriously inefficient sorting algorithm based on trial and error.

Bogosort sorts a list of n items by repeatedly shuffling the list until it is found to be sorted. In the average case, each pass through the bogosort algorithm will examine one of the n! orderings of the n items. If the items are distinct, only one such ordering is sorted. Bogosort shares patrimony with the infinite monkey theorem.
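A minimal bogosort sketch, for illustration only: since just one of the n! orderings of distinct items is sorted, the expected number of shuffles grows factorially with n.

```python
import random

def is_sorted(items):
    """Check whether the list is in non-decreasing order."""
    return all(items[i] <= items[i + 1] for i in range(len(items) - 1))

def bogosort(items):
    """Shuffle until sorted: roughly n! shuffles expected for distinct items -> O(n!)."""
    while not is_sorted(items):
        random.shuffle(items)
    return items

print(bogosort([3, 1, 2]))  # safe only for very small inputs
```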

Data structures operations complexity


Operations on data structures (for example search, insert, delete, and sort) have well-known complexity estimates, both in terms of execution time and memory.

It is very important to consider this when choosing a data structure for solving a particular problem. It is also important to keep in mind that the implementation of particular data types and collections may differ between programming languages.

It can also change with new releases of a programming language or its frameworks, and the actual execution time depends on the characteristics of the computer the code runs on (processor specifications, etc.). Below is a table of the execution time of basic operations on data structures.

[Table: execution time of basic operations on common data structures]
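As a practical illustration of why these estimates matter (the exact timings below depend on the machine, so treat this as a sketch): membership testing with Python's in operator scans a list element by element, which is O(n), while a set uses hashing and is O(1) on average.

```python
import timeit

data_list = list(range(1_000_000))
data_set = set(data_list)

# Looking for a value near the end of the list scans ~n elements; the set hashes directly.
print(timeit.timeit(lambda: 999_999 in data_list, number=100))  # linear search, slow
print(timeit.timeit(lambda: 999_999 in data_set, number=100))   # hash lookup, fast
```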

Conclusion

Estimating algorithm complexity is an important part of algorithm design, as it provides useful information about expected performance.

It is a common misconception that estimating the complexity of algorithms will become less important as a result of Moore's Law, which predicts an exponential increase in the power of modern computers. This is wrong, because this increase in power makes it possible to work with ever larger data sets, which in turn increases the need for complexity analysis of algorithms.

To give you an idea of how much this matters in practice: if someone wants to alphabetically sort a list of several hundred entries, such as a book bibliography, any reasonable algorithm will finish in less than a second.

On one hand, for a list of a million entries (like phone numbers in a big city), elementary algorithms requiring O(n^2) comparisons would have to do a trillion comparisons, which would take more than a day (roughly 28 hours) at a rate of 10 million comparisons per second.

On the other hand, quicksort and merge sort require only O(n log n) comparisons (as average complexity for the former, as worst case for the latter). For n = 1,000,000, this gives approximately 30,000,000 comparisons, which takes only about 3 seconds at 10 million comparisons per second.
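A quick back-of-the-envelope check of those figures, using the article's assumed rate of 10 million comparisons per second:

```python
import math

n = 1_000_000
rate = 10_000_000                  # comparisons per second (the assumed rate)

quadratic = n ** 2                 # ~1e12 comparisons for O(n^2) sorting
linearithmic = n * math.log2(n)    # ~2e7 comparisons; constant factors push this
                                   # toward the ~30 million cited above

print(quadratic / rate / 3600)     # ~27.8 hours for the quadratic algorithm
print(linearithmic / rate)         # ~2 seconds for the linearithmic algorithm
```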

Thus, the complexity estimate can allow many inefficient algorithms to be eliminated before any implementation. It can also be used to tune complex algorithms without testing all variations. By identifying the most costly stages of a complex algorithm, the study of complexity also lets you focus your implementation-efficiency efforts on those stages. I hope this tutorial on the time complexity of algorithms clears some things up and gives you an idea of how to calculate the time complexity of a given algorithm.

What is algorithm time efficiency?

Time efficiency - a measure of the amount of time an algorithm takes to execute. Space efficiency - a measure of the amount of memory an algorithm needs to execute.

How do you calculate algorithm performance?

Time and space complexity are the two main measures for calculating algorithm efficiency, determining how many resources are needed on a machine to process it. Time complexity measures how long it takes to run the algorithm, while space complexity measures how much memory is used.