Big-O Calculator (Time Complexity)

Instantly calculate Big-O notation – or time complexity – to estimate your algorithm's efficiency


What Does This Big-O Time Complexity Calculator Do?

Big-O notation is a standardized way to describe how an algorithm’s runtime scales as the input size grows.

Rather than measuring exact execution time — which varies by hardware — it captures the growth rate of an algorithm in the worst case. When developers talk about time complexity, they’re almost always speaking in Big-O terms.

This Big-O calculator covers the eight complexity classes you’ll encounter in practice, from the ideal O(1) constant time all the way to the computationally catastrophic O(n!) factorial complexity.

Use it to quickly evaluate any algorithm or sanity-check your thinking before a technical interview.

How to Use It:

Select a complexity class from the dropdown, enter your input size n, and the calculator will show you the estimated operation count alongside a relative comparison of all classes. This makes it easy to see not just where your algorithm sits, but how far it is from a more efficient alternative.

As a general rule, anything at O(n log n) or better is considered efficient for most real-world applications. O(n²) becomes problematic around n=10,000, and anything exponential or factorial is only practical for very small inputs — typically under 20–25 elements. Knowing these thresholds is exactly what interviewers test when they ask you to analyze an algorithm’s time complexity.
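
To make those thresholds concrete, here is a minimal Python sketch (illustrative only; the growth functions simply mirror the complexity classes this calculator plots) that prints estimated operation counts for a few classes at several input sizes:

    import math

    def estimated_ops(n):
        """Rough operation counts for common complexity classes at input size n."""
        return {
            "O(log n)":   math.log2(n),
            "O(n)":       n,
            "O(n log n)": n * math.log2(n),
            "O(n^2)":     n ** 2,
            "O(2^n)":     2 ** n if n <= 30 else float("inf"),  # beyond ~30 the count is effectively unbounded
        }

    for n in (100, 10_000, 1_000_000):
        print(n, {k: f"{v:,.0f}" for k, v in estimated_ops(n).items()})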

Big-O Notation vs. Big-Theta and Big-Omega

Big-O describes the upper bound of an algorithm’s growth — the worst it can ever perform for a given input size. Two related notations round out the picture:

  • Big-Omega (Ω) is the lower bound, describing the best-case floor
  • Big-Theta (Θ) is the tight bound, meaning the algorithm grows at exactly that rate in both best and worst cases

In everyday engineering and interview contexts, Big-O is the default. But the distinction matters — quicksort is Θ(n log n) on average yet O(n²) in the worst case, which is precisely why randomized pivot selection exists.
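
As an illustration of why randomized pivots help, here is a minimal quicksort sketch in Python with a random pivot (illustrative only, not how any particular library implements sorting):

    import random

    def quicksort(items):
        """Quicksort with a random pivot: Θ(n log n) on average, O(n²) worst case."""
        if len(items) <= 1:
            return items
        pivot = random.choice(items)  # random pivot makes the O(n²) worst case very unlikely
        less    = [x for x in items if x < pivot]
        equal   = [x for x in items if x == pivot]
        greater = [x for x in items if x > pivot]
        return quicksort(less) + equal + quicksort(greater)

    print(quicksort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]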

Worst Case, Average Case, and Best Case in Big-O

The same algorithm can produce different Big-O values depending on the input it receives.

  • Worst case is what Big-O captures by default: the maximum number of operations for any input of size n.
  • Best case is the most favorable scenario — insertion sort on an already-sorted array runs in O(n) rather than its worst-case O(n²); see the sketch after this list.
  • Average case reflects expected performance across all typical inputs; for quicksort this is O(n log n), and for a linear search it’s O(n/2), which simplifies to O(n) under Big-O rules.
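
To see where that O(n) best case comes from, here is a minimal insertion sort sketch in Python: on an already-sorted array the inner while loop never executes, so each element costs a single comparison.

    def insertion_sort(a):
        """In-place insertion sort: O(n²) worst case, O(n) on already-sorted input."""
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0 and a[j] > key:  # on sorted input this test fails immediately
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a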

Unless an interviewer specifies otherwise, assume they’re asking about worst case.

Space Complexity in Big-O Analysis

Big-O notation applies to memory consumption just as it does to operation counts. An algorithm that stores n elements in a hash map is O(n) space; one that operates on the input in-place without extra allocations is O(1) auxiliary space.

Merge sort runs in O(n log n) but requires O(n) auxiliary space for its temporary arrays.

Heapsort matches that Big-O class while achieving O(1) auxiliary space.

Memoization is the classic trade-off — you pay O(n) space to collapse an exponential-time recursion down to O(n) time. When using this calculator, consider whether the Big-O class you’re evaluating describes runtime operations, memory, or both.
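
A common illustration of that trade-off is memoized Fibonacci, sketched here with the standard library's functools.lru_cache: the cache costs O(n) space but collapses the naive O(2ⁿ) recursion down to O(n) time.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n):
        """Memoized Fibonacci: O(n) time and O(n) space instead of O(2ⁿ) time."""
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    print(fib(90))  # returns instantly; the unmemoized recursion would take ages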

Amortized Big-O

Some operations appear expensive in isolation but are cheap when averaged across a long sequence of calls — this is amortized Big-O.

The textbook example is a dynamic array’s push operation: most appends cost O(1), but occasionally the underlying array must double in capacity, triggering an O(n) copy. Amortized across all operations, the per-append cost is still O(1). Hash table insertions, resizable stack operations, and bottom-up heap construction all depend on amortized (aggregate) reasoning to justify their listed Big-O class.
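
Here is a toy Python sketch of that doubling behavior (illustrative only; real dynamic arrays such as Python's list handle this internally):

    class DynamicArray:
        """Toy resizable array: push is amortized O(1) despite occasional O(n) copies."""
        def __init__(self):
            self._capacity = 1
            self._size = 0
            self._data = [None] * self._capacity

        def push(self, value):
            if self._size == self._capacity:      # full: double capacity with an O(n) copy
                self._capacity *= 2
                new_data = [None] * self._capacity
                new_data[:self._size] = self._data
                self._data = new_data
            self._data[self._size] = value        # usual case: O(1)
            self._size += 1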

When you see O(1) cited for these operations, it reflects amortized cost — not a guarantee that every individual call is constant.

How to Simplify a Big-O Expression

Raw operation counts need to be simplified before arriving at a final Big-O class. There are two core rules.

  1. First, drop constants: O(3n) and O(500) simplify to O(n) and O(1) because constants don’t affect the growth rate as n approaches infinity.
  2. Second, drop lower-order terms: in O(n² + n), the n term becomes negligible compared to n² at large input sizes, so it simplifies to O(n²).

Applying both rules is how you convert a raw expression into the clean notation this calculator uses — and why O(n² + n log n + 5) and O(n²) are treated as identical.
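
A quick numeric check makes the second rule tangible: the ratio of n² + n to n² approaches 1 as n grows, so the lower-order term contributes nothing to the growth rate.

    # The ratio (n² + n) / n² approaches 1 as n grows, so the n term is dropped.
    for n in (10, 1_000, 1_000_000):
        print(n, (n**2 + n) / n**2)  # 1.1, then 1.001, then 1.000001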


How to Determine the Big-O of Your Code

The fastest way to read Big-O from code is to count loop nesting and recognize recursive patterns.

A single loop over n elements is O(n). Two nested loops over n are O(n²). Three nested loops are O(n³). A loop that halves its working range each iteration — like binary search — is O(log n).

A loop that runs n times and calls a helper that also runs n times is O(n²) regardless of whether those loops are written together or in separate functions.
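
Here are minimal Python sketches of those three patterns (illustrative only):

    def single_loop(items):       # O(n): one pass over the input
        total = 0
        for x in items:
            total += x
        return total

    def nested_loops(items):      # O(n²): n iterations, each doing n more iterations
        count = 0
        for x in items:
            for y in items:
                count += 1
        return count

    def halving_loop(n):          # O(log n): the working range halves each iteration
        steps = 0
        while n > 1:
            n //= 2
            steps += 1
        return steps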

For recursive algorithms, the Master Theorem formalizes this: a recurrence of the form T(n) = aT(n/b) + f(n) resolves based on how f(n) compares to n^(log_b a). Merge sort’s recurrence T(n) = 2T(n/2) + O(n) resolves to Θ(n log n) — a result you can verify directly in this Big-O calculator by comparing O(n) and O(n log n) at the same input size.
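
Here is a minimal merge sort sketch in Python with the two halves of the recurrence marked; it is illustrative only, but its structure maps directly onto T(n) = 2T(n/2) + O(n):

    def merge_sort(a):
        """Merge sort: T(n) = 2T(n/2) + O(n), which resolves to Θ(n log n)."""
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left = merge_sort(a[:mid])     # T(n/2)
        right = merge_sort(a[mid:])    # T(n/2)
        merged, i, j = [], 0, 0        # the merge step is the O(n) term
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]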

Big-O of Common Algorithms and Data Structures

Knowing the Big-O classes in the abstract is only half the picture.

You also need to know where real algorithms land.

  • Sorting: bubble sort and insertion sort sit at O(n²) worst case; merge sort and heapsort are O(n log n) in all cases; quicksort is O(n log n) on average but O(n²) in the worst case.
  • Searching: linear search is O(n); binary search on a sorted array is O(log n).
  • Data structure operations: hash table lookup, insert, and delete average O(1) but degrade to O(n) under heavy collisions; balanced BST operations (AVL, Red-Black trees) sit at O(log n); an unbalanced BST degrades to O(n) worst case.
  • Graph algorithms: BFS and DFS run in O(V + E), where V is vertices and E is edges; Dijkstra’s algorithm runs in O((V + E) log V) with a min-heap.

Use this time complexity calculator to compare any two of these classes at your expected input size to see how much the algorithm choice matters at scale.
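
As one concrete example from the graph bullet above, here is a minimal adjacency-list BFS sketch in Python (the graph variable is a made-up example): it touches each vertex once and each edge a constant number of times, which is where O(V + E) comes from.

    from collections import deque

    def bfs(graph, start):
        """Breadth-first search over an adjacency list: O(V + E) time, O(V) space."""
        visited = {start}
        order = []
        queue = deque([start])
        while queue:
            node = queue.popleft()
            order.append(node)
            for neighbor in graph[node]:   # every edge is examined at most twice
                if neighbor not in visited:
                    visited.add(neighbor)
                    queue.append(neighbor)
        return order

    graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']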

Using This Big-O Calculator for Interview Prep

Big-O analysis is one of the most consistently tested skills in software engineering interviews at every level.

Interviewers expect you to state the Big-O of your solution — both for operations and memory — without being prompted, and to identify whether a better Big-O class is achievable for the problem.

Common patterns to recognize on sight: any divide-and-conquer approach that halves the input is O(log n) or O(n log n); any solution that considers all pairs is O(n²); any solution that explores all subsets is O(2ⁿ). Using this Big-O calculator to build intuition for how these classes diverge at realistic input sizes — n = 100, n = 10,000, n = 1,000,000 — directly sharpens the pattern recognition interviewers are testing for.
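
A quick way to internalize the last two patterns is to count them directly; this Python sketch (illustrative only) enumerates all pairs and all subsets of a small input:

    from itertools import chain, combinations

    def all_pairs(items):      # O(n²): every unordered pair
        return list(combinations(items, 2))

    def all_subsets(items):    # O(2ⁿ): every possible subset
        return list(chain.from_iterable(
            combinations(items, r) for r in range(len(items) + 1)))

    print(len(all_pairs(range(10))))    # 45 pairs
    print(len(all_subsets(range(10))))  # 1024 subsets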

Frequently Asked Questions About Big-O and Time Complexity

What is O(1) in Big-O notation?

O(1) means the algorithm takes a constant number of operations regardless of input size.

Accessing an array by index, inserting into a hash table (amortized), and checking a pre-stored length value are all O(1). The actual operation count might be 1 or it might be 500 — what makes it O(1) is that it doesn’t grow as n grows.

Is O(log n) faster than O(n)?

Yes, O(log n) grows far slower than O(n). At n = 1,000,000, O(log n) requires roughly 20 operations while O(n) requires 1,000,000. This is why binary search on a sorted array is dramatically faster than a linear scan, and why balanced tree and heap operations are preferred over their linear alternatives at scale.
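
A minimal binary search sketch in Python shows why: the search range halves on every iteration, so a million sorted elements need only about 20 comparisons.

    def binary_search(sorted_items, target):
        """Binary search: ~log₂(n) comparisons vs up to n for a linear scan."""
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid
            elif sorted_items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    data = list(range(1_000_000))
    print(binary_search(data, 765_432))  # index found after roughly 20 comparisons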

What does O(n log n) mean?

O(n log n) — sometimes called linearithmic — means the algorithm does O(log n) work for each of n elements, or equivalently makes O(log n) passes over the full dataset. It’s the Big-O class of the most efficient general-purpose comparison-based sorting algorithms, including merge sort, heapsort, and average-case quicksort.

Why is O(n²) considered slow?

At small input sizes O(n²) is perfectly acceptable — sorting 100 elements with insertion sort is imperceptible. But quadratic growth means doubling n quadruples the work. At n = 10,000 you’re doing 100 million operations; at n = 100,000 you’re at 10 billion. Most applications hit practical limits in the n = 1,000–10,000 range, which is why O(n²) algorithms are regularly replaced with O(n log n) alternatives in production.

What is the difference between Big-O notation and actual runtime?

Big-O time complexity describes how an algorithm’s operation count scales with input size — it’s a mathematical abstraction that ignores hardware, language, and constants.

Actual runtime is the wall-clock time a specific implementation takes on specific hardware.

Two O(n log n) algorithms can differ in real-world runtime by 10x due to cache behavior, memory allocation, or language overhead. Big-O tells you about scalability; profiling tells you about real-world performance.

This calculator focuses on Big-O — use it to compare scaling behavior, not to predict exact execution times.
