Incompressibility method
In mathematics, the incompressibility method is a proof method like the probabilistic method, the counting method or the pigeonhole principle. To prove that an object in a certain class satisfies a certain property (on average), select an object of that class that is incompressible. If it does not satisfy the property, it can be compressed by computable coding. Since it can generally be proven that almost all objects in a given class are incompressible, the argument demonstrates that almost all objects in the class have the property involved (not just the average). Selecting an incompressible object is ineffective and cannot be done by a computer program. However, a simple counting argument usually shows that almost all objects of a given class cannot be compressed by more than a few bits (that is, they are incompressible).
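The counting argument behind the last sentence can be made concrete in a few lines. The sketch below (plain Python; the function name is ours) bounds the fraction of [math]\displaystyle{ n }[/math]-bit strings that can be compressed by at least [math]\displaystyle{ c }[/math] bits: there are only [math]\displaystyle{ 2^{n-c+1}-1 }[/math] descriptions shorter than [math]\displaystyle{ n-c+1 }[/math] bits.

```python
# Counting sketch: at most 2**0 + 2**1 + ... + 2**(n-c) = 2**(n-c+1) - 1
# descriptions (programs) are shorter than n - c + 1 bits, so at most
# a 2**(1-c) fraction of the 2**n strings of length n can be compressed
# by c or more bits.

def fraction_compressible(n: int, c: int) -> float:
    """Upper bound on the fraction of n-bit strings compressible by >= c bits."""
    descriptions = 2 ** (n - c + 1) - 1   # programs of length 0 .. n-c
    return descriptions / 2 ** n

# At most about 1 in 500 strings can be compressed by 10 or more bits,
# regardless of the string length.
assert fraction_compressible(1000, 10) <= 2 ** (-9)
```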
History
The incompressibility method depends on an objective, fixed notion of incompressibility. Such a notion was provided by the Kolmogorov complexity theory, named for Andrey Kolmogorov.[1]
One of the first uses of the incompressibility method with Kolmogorov complexity in the theory of computation was to prove that the running time of a one-tape Turing machine is quadratic for accepting a palindromic language, and that sorting algorithms require at least [math]\displaystyle{ n \log n }[/math] time to sort [math]\displaystyle{ n }[/math] items.[2] One of the early influential papers using the incompressibility method was published in 1980.[3] The method was applied to a number of fields, and its name was coined in a textbook.[4]
Applications
Number theory
According to Euclid's elegant proof, there are infinitely many prime numbers. Bernhard Riemann demonstrated that the number of primes less than a given number is connected with the zeros of the Riemann zeta function. Jacques Hadamard and Charles Jean de la Vallée-Poussin proved in 1896 that this number of primes is asymptotic to [math]\displaystyle{ n/\ln n }[/math]; see the prime number theorem (here [math]\displaystyle{ \ln }[/math] denotes the natural logarithm and [math]\displaystyle{ \log }[/math] the binary logarithm). Using the incompressibility method, G. J. Chaitin argued as follows: each [math]\displaystyle{ n }[/math] can be described by its (unique) prime factorization [math]\displaystyle{ n= p_1^{n_1} \cdots p_k^{n_k} }[/math], where [math]\displaystyle{ p_1, \ldots , p_k }[/math] are the first [math]\displaystyle{ k }[/math] primes, which are at most [math]\displaystyle{ n }[/math], and the exponents are possibly 0. Each exponent is at most [math]\displaystyle{ \log n }[/math] and can be described by [math]\displaystyle{ \log \log n }[/math] bits, so the description of [math]\displaystyle{ n }[/math] can be given in [math]\displaystyle{ k \log \log n }[/math] bits, provided we know the value of [math]\displaystyle{ \log \log n }[/math] (enabling one to parse the consecutive blocks of exponents); describing [math]\displaystyle{ \log \log n }[/math] itself requires only [math]\displaystyle{ \log \log \log n }[/math] bits. By the incompressibility of most positive integers, for each [math]\displaystyle{ k\gt 0 }[/math] there is a positive integer [math]\displaystyle{ n }[/math] of binary length [math]\displaystyle{ l\approx \log n }[/math] which cannot be described in fewer than [math]\displaystyle{ l }[/math] bits. This shows that the number of primes [math]\displaystyle{ \pi(n) }[/math] less than [math]\displaystyle{ n }[/math] satisfies
- [math]\displaystyle{ \pi(n) \geq \frac{\log n}{\log \log n} -o(1). }[/math]
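The bound is far from tight, but it is easy to check numerically. The sketch below (the sieve implementation and function names are ours) compares [math]\displaystyle{ \pi(n) }[/math] with [math]\displaystyle{ \log n / \log \log n }[/math], with binary logarithms:

```python
import math

def primepi(n: int) -> int:
    """Count the primes <= n with a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, math.isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return sum(sieve)

def chaitin_bound(n: int) -> float:
    """log n / log log n, with binary logarithms."""
    return math.log2(n) / math.log2(math.log2(n))

# The incompressibility bound holds (very loosely) at every scale tried.
for n in (10 ** 3, 10 ** 4, 10 ** 5):
    assert primepi(n) >= chaitin_bound(n) - 1
```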
A more sophisticated approach, attributed to Piotr Berman (the present proof partially by John Tromp), describes every incompressible [math]\displaystyle{ n }[/math] by [math]\displaystyle{ k }[/math] and [math]\displaystyle{ n/p_k }[/math], where [math]\displaystyle{ p_k }[/math] is the largest prime number dividing [math]\displaystyle{ n }[/math]. Since [math]\displaystyle{ n }[/math] is incompressible, the length of this description must be at least [math]\displaystyle{ \log n }[/math]. To parse the first block of the description, [math]\displaystyle{ k }[/math] must be given in prefix form, of length [math]\displaystyle{ P(k)=\log k +\log \log k +\log \varepsilon(k) }[/math], where [math]\displaystyle{ \varepsilon(k) }[/math] is an arbitrary, small, positive function. Therefore, [math]\displaystyle{ \log p_k \leq P(k) }[/math], and hence [math]\displaystyle{ p_k \leq n_k }[/math] with [math]\displaystyle{ n_k=\varepsilon(k)k\log k }[/math] for a special sequence of values [math]\displaystyle{ n_1, n_2, \ldots }[/math]. This shows that the expression below holds for this special sequence, and a simple extension shows that it holds for every [math]\displaystyle{ n \gt 0 }[/math]:
- [math]\displaystyle{ \pi(n) \geq \frac n {\varepsilon(n)\log n}. }[/math]
Both proofs are presented in more detail by Li and Vitányi.[4]
Graph theory
A labeled graph [math]\displaystyle{ G=(V,E) }[/math] with [math]\displaystyle{ n }[/math] nodes can be represented by a string [math]\displaystyle{ E(G) }[/math] of [math]\displaystyle{ {n \choose 2} }[/math] bits, where each bit indicates the presence (or absence) of an edge between the pair of nodes in that position. If [math]\displaystyle{ G }[/math] is incompressible, that is, [math]\displaystyle{ K(G) \geq {n \choose 2} }[/math], then the degree [math]\displaystyle{ d }[/math] of each vertex satisfies
- [math]\displaystyle{ |d-n/2| = O\left(\sqrt{n \log n}\right). }[/math]
To prove this by the incompressibility method: if the deviation were larger, the description of [math]\displaystyle{ G }[/math] could be compressed below [math]\displaystyle{ K(G) }[/math], providing the required contradiction. This theorem is required in a more complicated proof, where the incompressibility argument is used a number of times to show that the number of unlabeled graphs is
- [math]\displaystyle{ \sim \frac{2^{n \choose 2}}{n!}. }[/math][5]
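The [math]\displaystyle{ {n \choose 2} }[/math]-bit encoding [math]\displaystyle{ E(G) }[/math] used throughout these arguments is simple to implement. A minimal sketch (function names are ours):

```python
from itertools import combinations

def encode(n, edges):
    """E(G): one bit per node pair (i, j) with i < j, in lexicographic order."""
    es = {tuple(sorted(e)) for e in edges}
    return "".join("1" if (i, j) in es else "0"
                   for i, j in combinations(range(n), 2))

def decode(n, bits):
    """Recover the edge list from the bit string."""
    return [(i, j) for (i, j), b in zip(combinations(range(n), 2), bits)
            if b == "1"]

g = [(0, 1), (1, 2), (2, 3), (0, 3)]   # a labeled 4-cycle
s = encode(4, g)
assert len(s) == 6                     # C(4, 2) bits
assert sorted(decode(4, s)) == sorted(tuple(sorted(e)) for e in g)
```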
Combinatorics
A transitive tournament is a complete directed graph [math]\displaystyle{ G=(V,E) }[/math] in which [math]\displaystyle{ (i,j),(j,k) \in E }[/math] implies [math]\displaystyle{ (i,k) \in E }[/math]. Consider the set of all tournaments on [math]\displaystyle{ n }[/math] nodes. Since a tournament is a labeled, directed complete graph, it can be encoded by a string [math]\displaystyle{ E(G) }[/math] of [math]\displaystyle{ {n \choose 2} }[/math] bits, where each bit indicates the direction of the edge between the pair of nodes in that position. Using this encoding, one can show that the largest [math]\displaystyle{ v(n) }[/math] such that every tournament on [math]\displaystyle{ n }[/math] nodes contains a transitive subtournament on [math]\displaystyle{ v(n) }[/math] vertices satisfies
- [math]\displaystyle{ v(n) \leq 1+ \lfloor 2 \log n \rfloor. }[/math]
This was the first problem in the book by Erdős and Spencer.[6] It is easily solved by the incompressibility method,[7] as are the coin-weighing problem, the number of covering families and expected properties; for example, for [math]\displaystyle{ n }[/math] large enough, at least a fraction [math]\displaystyle{ 1-1/n }[/math] of all tournaments on [math]\displaystyle{ n }[/math] vertices have largest transitive subtournaments on not more than [math]\displaystyle{ 1+2\lceil2 \log n \rceil }[/math] vertices.
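For small [math]\displaystyle{ n }[/math] these bounds can be explored by brute force. The sketch below (our code) uses the standard fact that a tournament is transitive if and only if its score (out-degree) sequence is [math]\displaystyle{ 0, 1, \ldots, k-1 }[/math]:

```python
from itertools import combinations
import random

def max_transitive(n, beats):
    """Size of the largest transitive subtournament, by brute force.
    beats[i][j] is True iff the edge between i and j is directed i -> j.
    A subtournament is transitive iff its out-degree (score) sequence,
    restricted to the subset, is a permutation of 0, 1, ..., k-1."""
    best = 1
    for k in range(2, n + 1):
        for sub in combinations(range(n), k):
            scores = sorted(sum(beats[i][j] for j in sub if j != i)
                            for i in sub)
            if scores == list(range(k)):
                best = k
                break
    return best

# A random tournament on 7 nodes (seed fixed for reproducibility).
rng = random.Random(1)
n = 7
beats = [[False] * n for _ in range(n)]
for i, j in combinations(range(n), 2):
    b = rng.random() < 0.5
    beats[i][j], beats[j][i] = b, not b

# Every tournament on n nodes contains a transitive subtournament
# on at least 1 + floor(log n) vertices.
assert max_transitive(n, beats) >= 1 + (n.bit_length() - 1)
```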
If a number of events are independent (in the sense of probability theory) of one another, the probability that none of the events occur can be calculated easily. If the events are dependent, the problem becomes difficult. The Lovász local lemma[8] states that if events are mostly independent of one another and have individually small probability, there is a positive probability that none of them will occur.[9] It has been proven by the incompressibility method.[10] Using the incompressibility method, several versions of expanders and superconcentrator graphs were also shown to exist.[11]
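The constructive proof of the local lemma by Moser and Tardos is algorithmic: while some bad event (here, a violated clause of a CNF formula) occurs, resample the variables it depends on. A heavily simplified sketch (the formula, variable names, and iteration cap are ours, for illustration only):

```python
import random

def moser_tardos(clauses, n_vars, rng, max_rounds=100_000):
    """Resampling sketch for CNF-SAT. Clauses are lists of nonzero ints,
    sign = polarity (DIMACS style). Returns a satisfying assignment."""
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # index 0 unused

    def satisfied(cl):
        return any(assign[abs(lit)] == (lit > 0) for lit in cl)

    for _ in range(max_rounds):
        bad = [cl for cl in clauses if not satisfied(cl)]
        if not bad:
            return assign
        for lit in bad[0]:              # resample one violated clause
            assign[abs(lit)] = rng.random() < 0.5
    raise RuntimeError("did not converge")

# A small satisfiable formula; the loop exits only on a satisfying assignment.
f = [[1, 2, 3], [-1, 2], [1, -2, -3], [-2, 3]]
a = moser_tardos(f, 3, random.Random(42))
assert all(any(a[abs(l)] == (l > 0) for l in cl) for cl in f)
```

The local lemma guarantees that, under its independence and probability conditions, this resampling loop terminates quickly in expectation.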
Topological combinatorics
In the Heilbronn triangle problem, one throws [math]\displaystyle{ n }[/math] points in the unit square and asks for the maximum, over all arrangements, of the minimal area of a triangle formed by three of the points. This problem was solved for small arrangements, and much work was done on the asymptotic expression as a function of [math]\displaystyle{ n }[/math]. Heilbronn's original conjecture, from the early 1950s, was that this maximum is [math]\displaystyle{ O(1/n^2) }[/math]. Paul Erdős proved that this bound is correct for [math]\displaystyle{ n }[/math] a prime number. The general problem remains unsolved, apart from the best-known lower bound [math]\displaystyle{ \Omega((\log n)/n^2) }[/math] (achievable; hence, Heilbronn's conjecture is not correct for general [math]\displaystyle{ n }[/math]) and upper bound [math]\displaystyle{ \exp(c \sqrt{\log n})/n^{8/7} }[/math], proven by Komlós, Pintz and Szemerédi in 1982 and 1981, respectively. Using the incompressibility method, the average case was studied: it was proven that if the area is too small (or too large), the arrangement can be compressed below the Kolmogorov complexity of a uniformly random arrangement (which has high Kolmogorov complexity). This proves that for the overwhelming majority of arrangements (and in expectation), the area of the smallest triangle formed by three of [math]\displaystyle{ n }[/math] points thrown uniformly at random in the unit square is [math]\displaystyle{ \Theta(1/n^3) }[/math]. In this case, the incompressibility method proves both the lower and the upper bound of the property involved.[12]
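The average-case statement is easy to probe by simulation. A sketch (our code; the constants in the sanity window are arbitrary and deliberately loose):

```python
import random
from itertools import combinations

def min_triangle_area(pts):
    """Smallest area over all triangles with vertices among pts."""
    def area(a, b, c):
        # Half the absolute cross product of the edge vectors.
        return abs((b[0] - a[0]) * (c[1] - a[1])
                   - (c[0] - a[0]) * (b[1] - a[1])) / 2
    return min(area(a, b, c) for a, b, c in combinations(pts, 3))

rng = random.Random(0)
n, trials = 32, 20
areas = [min_triangle_area([(rng.random(), rng.random()) for _ in range(n)])
         for _ in range(trials)]
avg = sum(areas) / trials
# The theory predicts order 1/n^3; check a deliberately loose window.
assert 1 / (100 * n ** 3) < avg < 100 / n ** 3
```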
Probability
The law of the iterated logarithm, the law of large numbers and the recurrence property were shown to hold for binary strings of high Kolmogorov complexity using the incompressibility method,[13] as was Kolmogorov's zero–one law.[14] The method was also used to study normal numbers (in the sense of É. Borel) expressed as binary strings, and the distribution of 0s and 1s in binary strings of high Kolmogorov complexity.[15]
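The counting behind the last claim is elementary: only few strings are unbalanced, so strings of high Kolmogorov complexity (the incompressible majority) must have nearly equal numbers of 0s and 1s. A sketch (ours):

```python
from math import comb, sqrt

def fraction_unbalanced(n: int, c: float) -> float:
    """Fraction of n-bit strings whose number of 1s deviates from n/2
    by more than c * sqrt(n)."""
    dev = c * sqrt(n)
    bad = sum(comb(n, k) for k in range(n + 1) if abs(k - n / 2) > dev)
    return bad / 2 ** n

# Strings deviating by more than 2*sqrt(n) are a small minority, so they
# are all compressible; incompressible strings are nearly balanced.
assert fraction_unbalanced(100, 2) < 0.05
```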
Turing-machine time complexity
The basic Turing machine, as conceived by Alan Turing in 1936, consists of a memory (a tape of potentially infinitely many cells, on each of which a symbol can be written) and a finite control with an attached read-write head that scans one cell of the tape. At each step, the read-write head can change the symbol in the scanned cell and move one cell left, one cell right, or not at all, according to instructions from the finite control. For convenience, Turing machines with two tape symbols may be considered, but this is not essential.
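The model just described can be captured in a few lines of code. A minimal sketch (the sample machine, state names, and blank symbol are ours, for illustration only):

```python
# Minimal one-tape Turing machine: the finite control is a transition
# table, the tape a dict that is blank ("_") by default.

def run_tm(delta, tape_str, start, accept, max_steps=10_000):
    """delta: (state, symbol) -> (state, symbol, move), move in {-1, 0, +1}.
    Returns True iff the machine reaches the accept state."""
    tape = {i: s for i, s in enumerate(tape_str)}
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            return True
        key = (state, tape.get(head, "_"))
        if key not in delta:
            return False                 # no applicable instruction: reject
        state, sym, move = delta[key]
        tape[head] = sym
        head += move
    return False

# Toy machine: accept exactly the strings consisting only of 1s.
delta = {
    ("scan", "1"): ("scan", "1", +1),
    ("scan", "_"): ("yes", "_", 0),
}
assert run_tm(delta, "111", "scan", "yes")
assert not run_tm(delta, "101", "scan", "yes")
```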
In 1968, F. C. Hennie showed that such a Turing machine requires time of order [math]\displaystyle{ n^2 }[/math] to recognize the language of binary palindromes in the worst case. In 1977, W. J. Paul[2] presented an incompressibility proof showing that order [math]\displaystyle{ n^2 }[/math] time is required in the average case. For every integer [math]\displaystyle{ n }[/math], consider all words of that length; for convenience, consider words whose middle third consists of 0s. The accepting Turing machine ends in an accept state on the left (the beginning of the tape). A Turing-machine computation on a given word gives, for each location (the boundary between adjacent cells), a sequence of crossings from left to right and right to left, each crossing in a particular state of the finite control. Either every position in the middle third of a candidate word has a crossing sequence of length [math]\displaystyle{ \Omega(n) }[/math] (giving a total computation time of [math]\displaystyle{ \Omega(n^2) }[/math]), or some position has a crossing sequence of length [math]\displaystyle{ o(n) }[/math]. In the latter case, the word (if it is a palindrome) can be identified by that crossing sequence.
If another palindrome (ending in an accepting state on the left) had the same crossing sequence at that position, then the hybrid word consisting of the prefix (up to that position) of the original palindrome concatenated with the suffix (of the remaining length) of the other palindrome would be accepted as well. Taking a palindrome of Kolmogorov complexity [math]\displaystyle{ \Omega(n) }[/math] and describing it by the [math]\displaystyle{ o(n) }[/math]-bit crossing sequence yields a contradiction.
Since the overwhelming majority of binary palindromes have high Kolmogorov complexity, this gives a lower bound on the average-case running time. A much more difficult result, also proven with the incompressibility method, is that Turing machines with [math]\displaystyle{ k+1 }[/math] work tapes are more powerful than those with [math]\displaystyle{ k }[/math] work tapes in real time (here, one symbol per step).[3]
In 1984, W. Maass[16] and M. Li and P. M. B. Vitanyi[17] showed that simulating two work tapes by one work tape of a Turing machine takes [math]\displaystyle{ \Theta(n^2) }[/math] time deterministically (an optimal result, solving a 30-year open problem) and [math]\displaystyle{ \Omega (n^2/(\log n \log \log n)) }[/math] time nondeterministically[17] (in [16], this bound is [math]\displaystyle{ \Omega (n^2/(\log^2 n \log \log n)) }[/math]). More results concerning tapes, stacks and queues, deterministic and nondeterministic,[17] were proven with the incompressibility method.[4]
Theory of computation
Heapsort is a sorting method, invented by J. W. J. Williams and refined by R. W. Floyd, which always runs in [math]\displaystyle{ O(n \log n) }[/math] time. It was an open question whether Floyd's method is better than Williams' on average, although it is better in the worst case. Using the incompressibility method, it was shown[4] that Williams' method runs on average in [math]\displaystyle{ 2n \log n +O(n) }[/math] time and Floyd's method on average in [math]\displaystyle{ n\log n+O(n) }[/math] time. The proof was suggested by Ian Munro.
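For illustration, here is a standard array-based heapsort with a comparison counter (the counter and the loose bound check are ours; this is not the exact instrumentation of the cited analysis):

```python
import math
import random

def heapsort(a):
    """In-place heapsort with Floyd-style siftdown; returns comparison count."""
    comps = 0
    n = len(a)

    def siftdown(i, end):
        nonlocal comps
        while 2 * i + 1 < end:
            c = 2 * i + 1                # left child
            if c + 1 < end:
                comps += 1
                if a[c + 1] > a[c]:      # pick the larger child
                    c += 1
            comps += 1
            if a[i] >= a[c]:
                return
            a[i], a[c] = a[c], a[i]
            i = c

    for i in range(n // 2 - 1, -1, -1):  # build the heap bottom-up, O(n)
        siftdown(i, n)
    for end in range(n - 1, 0, -1):      # repeatedly extract the maximum
        a[0], a[end] = a[end], a[0]
        siftdown(0, end)
    return comps

rng = random.Random(7)
a = [rng.random() for _ in range(1024)]
c = heapsort(a)
assert a == sorted(a)
assert c <= 3 * 1024 * math.log2(1024)   # well within the ~2n log n regime
```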
Shellsort, discovered by Donald Shell in 1959, is a comparison sort which divides the list to be sorted into sublists and sorts them separately; the sorted sublists are then merged, reconstituting a partially sorted list. This process repeats a number of times (the number of passes). The difficulty in analyzing the complexity of the sorting process is that it depends on the number [math]\displaystyle{ n }[/math] of keys to be sorted, on the number [math]\displaystyle{ p }[/math] of passes, and on the increments governing the scattering in each pass (a sublist is the list of keys which are the increment parameter apart). Although this sorting method inspired a large number of papers, only worst-case bounds were established. For the average running time, only the best case for a two-pass Shellsort[18] and an upper bound of [math]\displaystyle{ O(n^{23/15}) }[/math][19] for a particular increment sequence for three-pass Shellsort were established. A general lower bound on the average running time of [math]\displaystyle{ p }[/math]-pass Shellsort was given,[20] the first advance in this problem in four decades. In every pass, the comparison sort moves a key a certain distance to another place (a path length). All these path lengths are logarithmically coded for length, in the correct order (of passes and keys); this allows the reconstruction of the unsorted list from the sorted list. If the unsorted list is incompressible (or nearly so), then since the sorted list has near-zero Kolmogorov complexity (and the path lengths together give a certain code length), the sum must be at least as large as the Kolmogorov complexity of the original list. The sum of the path lengths corresponds to the running time, which is lower-bounded in this argument by [math]\displaystyle{ \Omega (pn^{1+1/p}) }[/math]. This was improved to a lower bound of
- [math]\displaystyle{ \Omega \left( n\sum_{k=1}^p h_{k-1}/h_k \right) }[/math]
where [math]\displaystyle{ h_0=n }[/math].[21] This implies, for example, the Jiang–Li–Vitanyi lower bound for all [math]\displaystyle{ p }[/math]-pass increment sequences and improves that lower bound for particular increment sequences. The Janson–Knuth upper bound is matched by a lower bound for the increment sequence used, showing that three-pass Shellsort for this increment sequence uses [math]\displaystyle{ \Theta(n^{23/15}) }[/math] inversions.
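A [math]\displaystyle{ p }[/math]-pass Shellsort is short to write down. The sketch below (instrumentation ours) counts key moves, which is the total path length that the lower-bound argument encodes:

```python
import random

def shellsort(a, gaps):
    """Shellsort with the given increment sequence; returns the number of
    key moves (the total path length of the lower-bound argument)."""
    moves = 0
    for h in gaps:                        # one pass per increment
        for i in range(h, len(a)):
            key, j = a[i], i
            while j >= h and a[j - h] > key:
                a[j] = a[j - h]           # move a key one step of distance h
                j -= h
                moves += 1
            a[j] = key
    return moves

rng = random.Random(3)
a = [rng.randrange(10 ** 6) for _ in range(2000)]
m = shellsort(a, [701, 301, 132, 57, 23, 10, 4, 1])   # Ciura's gap sequence
assert a == sorted(a)
assert m > 0
```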
Another example is as follows: if [math]\displaystyle{ n,r,s }[/math] are natural numbers with [math]\displaystyle{ 2 \log n \leq r,s \leq n/4 }[/math], then for every [math]\displaystyle{ n }[/math] there is a Boolean [math]\displaystyle{ n \times n }[/math] matrix in which every [math]\displaystyle{ s \times (n-r) }[/math] submatrix has rank at least [math]\displaystyle{ n/2 }[/math]; this was shown by the incompressibility method.
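Incompressible Boolean matrices behave much like random ones. As one concrete instantiation of the rank claim, a random [math]\displaystyle{ n \times n }[/math] Boolean matrix has nearly full rank over GF(2) (a sketch; the choice of GF(2) and the test threshold are ours):

```python
import random

def rank_gf2(rows):
    """Rank over GF(2) of a Boolean matrix; each row is a bitmask integer."""
    pivots = {}                          # leading-bit position -> reduced row
    for row in rows:
        while row:
            p = row.bit_length() - 1
            if p not in pivots:
                pivots[p] = row          # new pivot found
                break
            row ^= pivots[p]             # eliminate the leading bit
    return len(pivots)

rng = random.Random(5)
n = 64
m = [rng.getrandbits(n) for _ in range(n)]
assert rank_gf2(m) >= n // 2             # random matrices have near-full rank
```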
Logic
According to Gödel's first incompleteness theorem, in every formal system with computably enumerable theorems (or proofs) which is strong enough to contain Peano arithmetic, there are true but unprovable statements. This can be proved by the incompressibility method: every formal system [math]\displaystyle{ F }[/math] can be described finitely (say, in [math]\displaystyle{ f }[/math] bits). Since it contains arithmetic, such a formal system can express the statement [math]\displaystyle{ K(x) \geq |x| }[/math]. Given [math]\displaystyle{ F }[/math] and a natural number [math]\displaystyle{ n \gg f }[/math], we can search exhaustively for a proof that some string [math]\displaystyle{ y }[/math] of length [math]\displaystyle{ n }[/math] satisfies [math]\displaystyle{ K(y) \geq n }[/math]. If such a proof were found, the first string [math]\displaystyle{ y }[/math] so obtained would be described by [math]\displaystyle{ F }[/math] and [math]\displaystyle{ n }[/math] alone, so that [math]\displaystyle{ K(y) \leq \log n+f }[/math]: a contradiction. Hence, although [math]\displaystyle{ K(y) \geq n }[/math] holds for most strings [math]\displaystyle{ y }[/math] of length [math]\displaystyle{ n }[/math], no such statement is provable in [math]\displaystyle{ F }[/math].[22]
Comparison with other methods
Although the probabilistic method generally shows that an object with a certain property exists in a class, the incompressibility method tends to show that the overwhelming majority of objects in the class (the average, or the expectation) have that property. It is sometimes easy to turn a probabilistic proof into an incompressibility proof or vice versa; in other cases, it is difficult or impossible to turn a proof by incompressibility into a probabilistic or counting proof. In virtually all the cases of Turing-machine time complexity cited above, the incompressibility method solved problems which had been open for decades, and no other proofs are known. Sometimes a proof by incompressibility can be turned into a proof by counting, as happened in the case of the general lower bound on the running time of Shellsort.[20]
References
- ↑ A. N. Kolmogorov, "Three approaches to the definition of the concept 'quantity of information'", Probl. Peredachi Inf., 1:1 (1965), 3–11.
- ↑ 2.0 2.1 W. J. Paul, "Kolmogorov's complexity and lower bounds", pp 325–333 in: L. Budach Ed., Proc. 2nd Int. Conf. Fund. Comput. Theory, 1979.
- ↑ 3.0 3.1 W. J. Paul, J. I. Seiferas, J. Simon, "An information-theoretic approach to time bounds for on-line computation" (preliminary version), Proc. 12th ACM Symp. Theory Comput (STOC), 357–367, 1980. doi:10.1016/0022-0000(81)90009-X
- ↑ 4.0 4.1 4.2 4.3 M. Li, P. M. B. Vitanyi, An Introduction to Kolmogorov Complexity and Its Applications, Springer, 1993, 1997, 2008, Chapter 6.
- ↑ H. M. Buhrman, M. Li, J. T. Tromp, P. M. B. Vitanyi, "Kolmogorov random graphs and the incompressibility method", SIAM J. Comput., 29:2(1999), 590–599. doi:10.1137/S0097539797327805
- ↑ P. Erdős, J. Spencer, Probabilistic Methods in Combinatorics, Academic Press, 1974.
- ↑ M. Li, P. M. B. Vitanyi, "Kolmogorov complexity arguments in combinatorics", J. Combinatorial Theory, Series A, 66:2(1994), 226–236. doi:10.1016/0097-3165(94)90064-7
- ↑ P. Erdős, L. Lovász, "Problems and results on 3-chromatic hypergraphs and some related questions", in A. Hajnal, R. Rado, and V. T. Sós, eds. Infinite and Finite Sets (to Paul Erdős on his 60th birthday). North-Holland. pp. 609–627.
- ↑ R. A. Moser, G. Tardos, "A constructive proof of the general Lovász local lemma", Journal of the ACM (JACM), 2:57(2010), 11. doi:10.1145/1667053.1667060
- ↑ L. Fortnow, "A Kolmogorov Complexity Proof of the Lovász Local Lemma", Computational Complexity Weblog, 2 June 2009.
- ↑ U. Schöning, "Construction of expanders and superconcentrators using Kolmogorov complexity", Random Structures & Algorithms, 17:1(2000), 64–77. doi:10.1002/1098-2418(200008)17:1<64::AID-RSA5>3.0.CO;2-3
- ↑ T. Jiang, M. Li, P. M. B. Vitanyi, "The average‐case area of Heilbronn‐type triangles", Random Structures & Algorithms, 20:2(2002), 206–219. doi:10.1002/rsa.10024
- ↑ V. G. Vovk, "The law of the iterated logarithm for random Kolmogorov, or chaotic, sequences", Theory Probab. Appl. 3:32(1988), 413–426. doi:10.1137/1132061
- ↑ M. Zimand, "A high-low Kolmogorov complexity law equivalent to the 0–1 law", Inform. Process. Letters, 57:2(1996), 59–84. doi:10.1016/0020-0190(95)00201-4
- ↑ M. Li, P. M. B. Vitanyi, "Statistical properties of finite sequences with high Kolmogorov complexity", Mathematical Systems Theory, 27(1994), 365–376. doi:10.1007/BF01192146
- ↑ 16.0 16.1 W. Maass, "Combinatorial lower bound arguments for deterministic and nondeterministic Turing machines", Trans. Amer. Math. Soc. 292 (1985), 675–693. doi:10.1090/S0002-9947-1985-0808746-4
- ↑ 17.0 17.1 17.2 M. Li, P. M. B. Vitanyi, "Tape versus queue and stacks: The lower bounds", Information and Computation, 78:1(1988), 56–85. doi:10.1016/0890-5401(88)90003-X
- ↑ D. E. Knuth, Sorting and Searching (Vol. 3 The Art of Computer Programming), 2nd Ed. Addison-Wesley, 1998, pp 83–95. ISBN:0201896850
- ↑ S. Janson, D. E. Knuth, "Shellsort with three increments", Random Structures Algorithms 10:1–2(1997), 125–142. arXiv:cs/9608105
- ↑ 20.0 20.1 T. Jiang, M. Li, P. M. B. Vitanyi, "A lower bound on the average-case complexity of Shellsort", Journal of the ACM (JACM), 47:5(2000) 905–911. doi:10.1145/355483.355488
- ↑ P.M.B. Vitanyi (2018), On the average‐case complexity of Shellsort, Random Structures and Algorithms, 52:2, 354–363 doi:10.1002/rsa.20737
- ↑ G. J. Chaitin, Algorithmic Information Theory, Cambridge University Press, 1987.
Original source: https://en.wikipedia.org/wiki/Incompressibility method.