MuTasim

Time Complexity

Time complexity is a fundamental concept in computer science that helps us understand how an algorithm's performance scales with input size. It's expressed using Big O notation, which describes the upper bound of an algorithm's growth rate.

Understanding these complexity classes allows developers to make informed decisions about which algorithms to use for different scenarios, optimizing for speed or memory usage depending on requirements.

Time Complexity

Measures how the running time of an algorithm increases with the size of the input. This helps us predict how scalable our solution is for large datasets.

Space Complexity

Measures how much memory an algorithm needs relative to the input size. Critical for applications with memory constraints or dealing with very large datasets.
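
To make the distinction concrete, here is a small Python sketch (function names are my own, for illustration): the first function uses O(1) extra space no matter how long the input is, while the second allocates a new list as large as the input, so its extra space is O(n).

```python
def total_in_place(items):
    """O(1) extra space: a single accumulator, regardless of input size."""
    total = 0
    for value in items:
        total += value
    return total

def running_totals(items):
    """O(n) extra space: builds an output list as large as the input."""
    totals, total = [], 0
    for value in items:
        total += value
        totals.append(total)
    return totals
```

Both run in O(n) time; only their memory footprints differ, which is exactly the trade-off space complexity captures.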

Time Complexity Growth Comparison

Constant - O(1)
Logarithmic - O(log n)
Linear - O(n)
Linearithmic - O(n log n)
Quadratic - O(n²)

Understanding Complexity Classes

Understanding time complexity helps you choose the right algorithm for your specific needs. Here's a breakdown of common complexity classes and what they mean for your code's performance:

O(1) - Constant Time

Operations that take the same amount of time regardless of input size.

COMMON EXAMPLES:

  • Array access with index
  • Push/pop operations on a stack
  • Inserting at the beginning of a linked list

Real-world impact: Hash table lookups are typically O(1), making them ideal for frequent access operations.
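
The examples above can be sketched in a few lines of Python (variable names are illustrative): each operation touches a fixed number of elements, so it costs the same whether the collection holds ten items or ten million.

```python
def peek_first(items):
    """O(1): indexing an array is one operation, independent of length."""
    return items[0]

# Stack push/pop on a Python list: amortized O(1)
stack = []
stack.append(42)   # push
top = stack.pop()  # pop

# Hash table lookup: average O(1)
prices = {"apple": 1.5, "bread": 2.0}
cost = prices["apple"]
```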

O(log n) - Logarithmic Time

Runtime grows logarithmically with input size. Common in algorithms that divide the problem in half each time.

COMMON EXAMPLES:

  • Binary search
  • Finding an element in a balanced search tree
  • Certain divide and conquer algorithms

Real-world impact: Database indexes often use B-trees with O(log n) lookup time, enabling fast searches in large datasets.
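
Binary search is the canonical O(log n) algorithm; a minimal Python sketch follows. Each comparison discards half of the remaining range, so a million sorted elements need at most about 20 comparisons.

```python
def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1 if absent. O(log n)."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # midpoint of the remaining range
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1              # discard the lower half
        else:
            hi = mid - 1              # discard the upper half
    return -1
```

Note the precondition: the input must already be sorted, otherwise halving the range is not valid.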

O(n) - Linear Time

Runtime grows linearly with input size. Operations performed once for each input element.

COMMON EXAMPLES:

  • Linear search
  • Traversing an array or linked list
  • Finding the maximum/minimum value in an unsorted array

Real-world impact: Processing a stream of incoming data often requires at least O(n) time, as each element must be examined.
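
Finding the maximum of an unsorted array, from the list above, is a simple O(n) sketch in Python: there is no way to avoid looking at every element once, and one pass is enough.

```python
def find_max(items):
    """Return the largest value in a non-empty list. O(n): one pass."""
    largest = items[0]
    for value in items[1:]:   # examine each remaining element exactly once
        if value > largest:
            largest = value
    return largest
```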

O(n log n) - Linearithmic Time

Common in efficient sorting algorithms and operations that combine linear and logarithmic complexities.

COMMON EXAMPLES:

  • Merge sort
  • Heap sort
  • Quick sort (average case)

Real-world impact: Most database and spreadsheet 'sort' operations use algorithms with this complexity.
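
Merge sort, the first example above, shows where the n log n comes from: the input is halved log n times, and each level of halving does O(n) work to merge. A compact Python sketch:

```python
def merge_sort(items):
    """Sort a list: O(n log n) time, O(n) extra space."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # log n levels of halving...
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # ...with O(n) merging per level
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```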

O(n²) - Quadratic Time

Runtime grows with the square of input size. Often involves nested iterations over the data.

COMMON EXAMPLES:

  • Bubble sort
  • Selection sort
  • Simple matrix multiplication
  • Checking all possible pairs in an array

Real-world impact: n² algorithms become very slow with large inputs. A million-item input would require a trillion operations.
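
The last example above, checking all pairs, makes the nested iteration visible; a short Python sketch (the function name is my own):

```python
def has_duplicate_pair(items):
    """Return True if any two positions hold equal values.
    O(n²): the nested loops compare every pair of elements."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):   # roughly n²/2 comparisons
            if items[i] == items[j]:
                return True
    return False
```

For comparison, the same question can be answered in O(n) with a hash set, which is why recognizing an accidental O(n²) loop often points directly at the fix.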

O(n!) - Factorial Time

Runtime grows with the factorial of input size. These algorithms become impractical very quickly.

COMMON EXAMPLES:

  • Brute force traveling salesman problem
  • Generating all permutations
  • Solving the n-queens problem with brute force

Real-world impact: With just 20 items there are 20! ≈ 2.4 × 10^18 orderings — roughly 77 years of work at a billion operations per second — and each additional item multiplies that time again.
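
A brute-force traveling salesman solver, the first example above, can be sketched in Python with the standard library's itertools.permutations (the function name and matrix format are my own). It tries every ordering of cities, which is what makes it O(n!).

```python
from itertools import permutations

def shortest_tour(dist):
    """Brute-force TSP: dist is an n x n symmetric distance matrix.
    Tries all (n-1)! orderings of cities — O(n!) time."""
    n = len(dist)
    best = float("inf")
    for order in permutations(range(1, n)):   # fix city 0 as the start
        tour = (0,) + order + (0,)            # return to the start city
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, length)
    return best
```

This is fine for half a dozen cities and hopeless beyond roughly a dozen, which is exactly the point of the complexity class.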

Algorithm Complexities

Algorithm                  | Category       | Best Case      | Average Case   | Worst Case     | Space
Bubble Sort                | Sorting        | O(n)           | O(n²)          | O(n²)          | O(1)
Selection Sort             | Sorting        | O(n²)          | O(n²)          | O(n²)          | O(1)
Insertion Sort             | Sorting        | O(n)           | O(n²)          | O(n²)          | O(1)
Merge Sort                 | Sorting        | O(n log n)     | O(n log n)     | O(n log n)     | O(n)
Quick Sort                 | Sorting        | O(n log n)     | O(n log n)     | O(n²)          | O(log n)
Heap Sort                  | Sorting        | O(n log n)     | O(n log n)     | O(n log n)     | O(1)
Counting Sort              | Sorting        | O(n+k)         | O(n+k)         | O(n+k)         | O(n+k)
Radix Sort                 | Sorting        | O(nk)          | O(nk)          | O(nk)          | O(n+k)
Linear Search              | Searching      | O(1)           | O(n)           | O(n)           | O(1)
Binary Search              | Searching      | O(1)           | O(log n)       | O(log n)       | O(1)
Jump Search                | Searching      | O(1)           | O(√n)          | O(√n)          | O(1)
Interpolation Search       | Searching      | O(1)           | O(log log n)   | O(n)           | O(1)
Breadth-First Search       | Graph          | O(V+E)         | O(V+E)         | O(V+E)         | O(V)
Depth-First Search         | Graph          | O(V+E)         | O(V+E)         | O(V+E)         | O(V)
Dijkstra's Algorithm       | Graph          | O((V+E) log V) | O((V+E) log V) | O((V+E) log V) | O(V)
Bellman-Ford               | Graph          | O(VE)          | O(VE)          | O(VE)          | O(V)
Floyd-Warshall             | Graph          | O(V³)          | O(V³)          | O(V³)          | O(V²)
Prim's Algorithm           | Graph          | O(E log V)     | O(E log V)     | O(E log V)     | O(V+E)
Kruskal's Algorithm        | Graph          | O(E log E)     | O(E log E)     | O(E log E)     | O(V+E)
Array Access               | Data Structure | O(1)           | O(1)           | O(1)           | O(1)
Array Search               | Data Structure | O(1)           | O(n)           | O(n)           | O(1)
Array Insertion            | Data Structure | O(1)           | O(n)           | O(n)           | O(1)
Hash Table Search          | Data Structure | O(1)           | O(1)           | O(n)           | O(n)
Binary Search Tree Search  | Data Structure | O(1)           | O(log n)       | O(n)           | O(n)
AVL Tree Operations        | Data Structure | O(log n)       | O(log n)       | O(log n)       | O(n)

V = vertices, E = edges, k = range of values