Complexity

Posted on July 29th, 2022

Updated on July 31st, 2022

Part of (Re)Learning CS

Talks about Computer Science

Looking at the theoretical complexity of computational problems and solutions.

Time Complexity

The most obvious way to determine the performance of a program is with a timer, which returns human-understandable units like nanoseconds or minutes or (hopefully not) days.

However, this doesn't represent the true efficiency of an algorithm; it shows the performance of one implementation. Because the result is affected by the test machine, programming language, code quality, etc., we use something better.
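To see why timing is implementation- and machine-dependent, here is a minimal sketch using Python's `time.perf_counter`. The function names are illustrative: both compute the same sum, one with a loop and one with a closed form, and the measured seconds will differ from machine to machine even though the algorithms never change.

```python
import time

def sum_naive(n):
    # O(n): add the integers 1..n one at a time
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    # O(1): Gauss's closed form, n(n+1)/2
    return n * (n + 1) // 2

# Wall-clock timing measures the implementation, not the algorithm:
# these numbers vary with hardware, interpreter, and system load.
start = time.perf_counter()
sum_naive(1_000_000)
naive_seconds = time.perf_counter() - start

start = time.perf_counter()
sum_formula(1_000_000)
formula_seconds = time.perf_counter() - start
```

Both calls return 500000500000, yet report very different times; the timer tells us about this run, not about the algorithms in general.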

Time complexity expresses the execution time of an algorithm as a function of its input size by measuring its number of operations. This provides a purely conceptual understanding of an algorithm's speed at any scale and a gauge for the quality/accuracy of an implementation.
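Counting operations instead of seconds can be sketched like this (the helper name and the choice of linear search are mine, for illustration): the comparison count depends only on the input size, so it is the same on any machine.

```python
def linear_search_ops(items, target):
    """Return (index, comparisons): count operations instead of timing."""
    ops = 0
    for i, item in enumerate(items):
        ops += 1
        if item == target:
            return i, ops
    return -1, ops

# Worst case: the target is absent, so all n items are compared.
# For n = 1000 that is exactly 1000 comparisons -- linear growth, O(n).
index, ops = linear_search_ops(list(range(1000)), -1)
```

Doubling the input size doubles the worst-case count, which is precisely what "execution time as a function of input size" captures.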

Notation

The three main notations to describe time complexity are:

  1. Big-O (O): an upper bound — the worst case.

  2. Big-Omega (Ω): a lower bound — the best case.

  3. Big-Theta (Θ): a tight bound, where the upper and lower bounds match.

Generally, we only care about Big-O because the best case of an algorithm can be taken for granted. When measuring Big-O, some rules apply:

  1. Constant factors of n are removed, so 3n + 2 becomes n.

    Mathematical reasons apply which I can't quite follow, but practically, constant factors don't matter as n → ∞.

  2. Only the largest term of n is considered, so n² + 3n + 1 becomes n².

    Again, as n → ∞, all terms but the one with the greatest exponent become insignificant.
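The two rules above can be checked numerically — a small sketch of my own, not from the post. The ratio (3n + 2)/n settles at the constant factor 3, and (n² + 3n + 1)/n² settles at 1, so for large n only the dominant term matters.

```python
# As n grows, (3n + 2) / n approaches 3 (just a constant factor)
# and (n^2 + 3n + 1) / n^2 approaches 1: the lower-order terms
# contribute a vanishing share of the total work.
for n in [10, 1_000, 1_000_000]:
    linear_ratio = (3 * n + 2) / n           # → 3 as n → ∞
    quadratic_ratio = (n**2 + 3 * n + 1) / n**2  # → 1 as n → ∞
    print(n, linear_ratio, quadratic_ratio)
```

At n = 1,000,000 the ratios are already within a few millionths of their limits, which is why Big-O keeps only n and n².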

Graph of time complexities against input size.

(Classes of) Computational Complexity

Time complexity looks at the complexity of a solution, but computational complexity assesses the practical difficulty of solving a computational problem.

There are four common classes of computational complexity:

  1. P: problems solvable in polynomial time.

  2. NP: problems whose solutions can be verified in polynomial time.

  3. NP-complete: the hardest problems in NP; every problem in NP reduces to them in polynomial time.

  4. NP-hard: problems at least as hard as NP-complete ones; they need not be in NP themselves.

When tackling a computational problem in the real world, it's ideal for the problem to exist within P, or to be simplifiable into one in P. If not, two options are available: