Worst-case complexity

Short description: Upper bound on resources required by an algorithm

In computer science (specifically computational complexity theory), the worst-case complexity measures the maximum amount of resources (e.g. running time, memory) that an algorithm requires over all inputs of a given size (commonly denoted as n in asymptotic notation). It gives an upper bound on the resources required by the algorithm.

In the case of running time, the worst-case time complexity indicates the longest running time of an algorithm over all inputs of size n, and thus guarantees that the algorithm will finish within the indicated period of time on every such input. The order of growth (e.g. linear, logarithmic) of the worst-case complexity is commonly used to compare the efficiency of two algorithms.

The worst-case complexity of an algorithm should be contrasted with its average-case complexity, which is an average measure of the amount of resources the algorithm uses on a random input.

Definition

Given a model of computation and an algorithm [math]\displaystyle{ \mathsf{A} }[/math] that halts on each input [math]\displaystyle{ s }[/math], the mapping [math]\displaystyle{ t_{\mathsf{A}} \colon \{0, 1\}^\star \to \N }[/math] is called the time complexity of [math]\displaystyle{ \mathsf{A} }[/math] if, for every input string [math]\displaystyle{ s }[/math], [math]\displaystyle{ \mathsf{A} }[/math] halts after exactly [math]\displaystyle{ t_{\mathsf{A}}(s) }[/math] steps.

Since we are usually interested in how the time complexity depends on the input length, the term time complexity is, by a slight abuse of terminology, also applied to the mapping [math]\displaystyle{ t_{\mathsf{A}} \colon \N \to \N }[/math], defined by the maximal complexity

[math]\displaystyle{ t_{\mathsf{A}}(n) := \max_{s\in \{0, 1\}^n} t_{\mathsf{A}}(s) }[/math]

of inputs [math]\displaystyle{ s }[/math] of length [math]\displaystyle{ n }[/math].
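
For small [math]\displaystyle{ n }[/math], this maximum can be computed directly by enumerating every input of length [math]\displaystyle{ n }[/math]. The following Python sketch illustrates the definition; the function count_steps is a hypothetical stand-in for the exact step count [math]\displaystyle{ t_{\mathsf{A}}(s) }[/math] of some toy algorithm.

from itertools import product

def count_steps(s: str) -> int:
    # Hypothetical step count t_A(s) of a toy algorithm:
    # one step per '1' bit, plus one halting step.
    return s.count("1") + 1

def worst_case(n: int) -> int:
    # t_A(n) = max over all s in {0,1}^n of t_A(s).
    # Exhaustive enumeration, so only feasible for small n.
    return max(count_steps("".join(bits)) for bits in product("01", repeat=n))

print(worst_case(4))  # 5: the all-ones string maximizes the step count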

Similar definitions can be given for space complexity, randomness complexity, etc.

Ways of speaking

Very frequently, the complexity [math]\displaystyle{ t_{\mathsf{A}} }[/math] of an algorithm [math]\displaystyle{ \mathsf{A} }[/math] is given in asymptotic big O notation, which expresses its growth rate in the form [math]\displaystyle{ t_{\mathsf{A}} = O(g(n)) }[/math] for some real-valued comparison function [math]\displaystyle{ g(n) }[/math], with the following meaning:

  • There exists a positive real number [math]\displaystyle{ M }[/math] and a natural number [math]\displaystyle{ n_0 }[/math] such that
[math]\displaystyle{ |t_{\mathsf{A}}(n)| \le M g(n) \quad \text{ for all } n\ge n_0. }[/math]
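
For example, a hypothetical algorithm with [math]\displaystyle{ t_{\mathsf{A}}(n) = 3n^2 + 5n + 2 }[/math] satisfies [math]\displaystyle{ t_{\mathsf{A}} = O(n^2) }[/math]: choosing [math]\displaystyle{ M = 10 }[/math] and [math]\displaystyle{ n_0 = 1 }[/math] gives [math]\displaystyle{ 3n^2 + 5n + 2 \le 3n^2 + 5n^2 + 2n^2 = 10n^2 }[/math] for all [math]\displaystyle{ n \ge 1 }[/math].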

Quite frequently, the wording is:

  • "Algorithm [math]\displaystyle{ \mathsf{A} }[/math] has the worst-case complexity [math]\displaystyle{ O(g(n)) }[/math]."

or even just:

  • "Algorithm [math]\displaystyle{ \mathsf{A} }[/math] has complexity [math]\displaystyle{ O(g(n)) }[/math]."

Examples

Consider performing insertion sort on [math]\displaystyle{ n }[/math] numbers on a random-access machine. The best case for the algorithm occurs when the numbers are already sorted, in which case the algorithm takes [math]\displaystyle{ O(n) }[/math] steps. The worst-case input is when the numbers are sorted in reverse order, in which case the algorithm takes [math]\displaystyle{ O(n^2) }[/math] steps; the worst-case time complexity of insertion sort is therefore [math]\displaystyle{ O(n^2) }[/math].
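
This contrast can be made concrete with a minimal Python sketch of insertion sort; the comparison counter is an illustrative addition for measurement, not part of the algorithm itself.

def insertion_sort(a: list) -> int:
    # Sorts a in place and returns the number of comparisons performed.
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements one slot right until key's position is found.
        while j >= 0:
            comparisons += 1
            if a[j] <= key:
                break
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return comparisons

n = 100
print(insertion_sort(list(range(n))))         # sorted input: 99 comparisons
print(insertion_sort(list(range(n, 0, -1))))  # reverse-sorted input: 4950 comparisons

On the sorted input each element is compared once with its predecessor, giving [math]\displaystyle{ n - 1 }[/math] comparisons; on the reverse-sorted input the [math]\displaystyle{ i }[/math]-th element is compared with all [math]\displaystyle{ i }[/math] preceding elements, giving [math]\displaystyle{ n(n-1)/2 }[/math] comparisons in total.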
