Kolmogorov structure function
In 1973, Andrey Kolmogorov proposed a non-probabilistic approach to statistics and model selection. Let each datum be a finite binary string and a model be a finite set of binary strings. Consider model classes consisting of models of given maximal Kolmogorov complexity. The Kolmogorov structure function of an individual data string expresses the relation between the complexity level constraint on a model class and the least log-cardinality of a model in the class containing the data. The structure function determines all stochastic properties of the individual data string: for every constrained model class it determines the individual best-fitting model in the class irrespective of whether the true model is in the model class considered or not. In the classical case we talk about a set of data with a probability distribution, and the properties are those of the expectations. In contrast, here we deal with individual data strings and the properties of the individual string under consideration. In this setting, a property holds with certainty rather than with high probability as in the classical case. The Kolmogorov structure function precisely quantifies the goodness-of-fit of an individual model with respect to individual data.
The Kolmogorov structure function is used in algorithmic information theory, also known as the theory of Kolmogorov complexity, to describe the structure of a string by means of models of increasing complexity.
Kolmogorov's definition
The structure function was originally proposed by Kolmogorov in 1973 at a Soviet Information Theory symposium in Tallinn, but these results were not published[1] p. 182. The results were, however, announced in[2] in 1974, the only written record by Kolmogorov himself. One of his last scientific statements is (translated from the original Russian by L.A. Levin):
To each constructive object corresponds a function [math]\displaystyle{ \Phi_x(k) }[/math] of a natural number k—the log of minimal cardinality of x-containing sets that allow definitions of complexity at most k. If the element x itself allows a simple definition, then the function [math]\displaystyle{ \Phi }[/math] drops to 0 even for small k. Lacking such definition, the element is "random" in a negative sense. But it is positively "probabilistically random" only when function [math]\displaystyle{ \Phi }[/math] having taken the value [math]\displaystyle{ \Phi_0 }[/math] at a relatively small [math]\displaystyle{ k=k_0 }[/math], then changes approximately as [math]\displaystyle{ \Phi(k)=\Phi_0-(k-k_0) }[/math].—Kolmogorov, announcement cited above
Contemporary definition
The structure function is discussed in Cover and Thomas.[1] It is extensively studied in Vereshchagin and Vitányi,[3] where the main properties are also resolved. The Kolmogorov structure function can be written as
- [math]\displaystyle{ h_{x}(\alpha) = \min_S \{\log |S| : x \in S , K(S) \leq \alpha \} }[/math]
where [math]\displaystyle{ x }[/math] is a binary string of length [math]\displaystyle{ n }[/math], [math]\displaystyle{ S \ni x }[/math] is a contemplated model (a set of [math]\displaystyle{ n }[/math]-length strings) for [math]\displaystyle{ x }[/math], [math]\displaystyle{ K(S) }[/math] is the Kolmogorov complexity of [math]\displaystyle{ S }[/math], and [math]\displaystyle{ \alpha }[/math] is a nonnegative integer value bounding the complexity of the contemplated [math]\displaystyle{ S }[/math]'s. Clearly, this function is nonincreasing and reaches [math]\displaystyle{ \log |\{x\}| =0 }[/math] for [math]\displaystyle{ \alpha = K(x)+c }[/math], where [math]\displaystyle{ c }[/math] is the number of bits required to change [math]\displaystyle{ x }[/math] into [math]\displaystyle{ \{x\} }[/math] and [math]\displaystyle{ K(x) }[/math] is the Kolmogorov complexity of [math]\displaystyle{ x }[/math].
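Since Kolmogorov complexity is incomputable, the structure function cannot be computed exactly. Its definition can, however, be mimicked over a small, hand-chosen family of candidate models, with [math]\displaystyle{ K(S) }[/math] replaced by a computable upper bound such as the length of a compressed description of the model. The following Python sketch is purely illustrative: the helper approx_K, the toy model family, and the example string are arbitrary choices and not part of the theory.

```python
import zlib
from math import log2, inf

def approx_K(description: bytes) -> int:
    # Crude stand-in for Kolmogorov complexity: the bit length of a
    # zlib-compressed description.  K itself is incomputable, so this
    # only gives an upper bound and serves purely as an illustration.
    return 8 * len(zlib.compress(description, 9))

def structure_function(x: str, models, alpha: float) -> float:
    # Approximates h_x(alpha) = min{ log|S| : x in S, K(S) <= alpha }
    # over a finite, hand-chosen family `models` of (description, set) pairs.
    best = inf
    for description, S in models:
        if x in S and approx_K(description) <= alpha:
            best = min(best, log2(len(S)))
    return best

# Toy usage: 8-bit strings modelled by the number of ones they contain.
n = 8
x = "00010111"
models = [
    (f"all strings of length {n} with exactly {k} ones".encode(),
     {format(i, f"0{n}b") for i in range(2 ** n)
      if format(i, f"0{n}b").count("1") == k})
    for k in range(n + 1)
]
# The only admissible model containing x is the k=4 one, so this prints
# log2 of C(8,4) = 70, about 6.13.
print(structure_function(x, models, alpha=800.0))
```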
The algorithmic sufficient statistic
We define a set [math]\displaystyle{ S }[/math] containing [math]\displaystyle{ x }[/math] such that
- [math]\displaystyle{ K(S)+K(x|S)=K(x)+O(1) }[/math].
The function [math]\displaystyle{ h_x(\alpha) }[/math] never falls more than a fixed independent constant below the diagonal, called the sufficiency line L, which is defined by
- [math]\displaystyle{ L(\alpha)+\alpha = K(x) }[/math].
It is approached to within a constant distance by the graph of [math]\displaystyle{ h_x }[/math] for certain arguments (for instance, for [math]\displaystyle{ \alpha = K(x)+c }[/math]). For these [math]\displaystyle{ \alpha }[/math]'s we have [math]\displaystyle{ \alpha + h_x (\alpha) = K(x)+O(1) }[/math]; the associated model [math]\displaystyle{ S }[/math] (a witness for [math]\displaystyle{ h_x(\alpha) }[/math]) is called an optimal set for [math]\displaystyle{ x }[/math], and its description, of at most [math]\displaystyle{ \alpha }[/math] bits, is therefore an algorithmic sufficient statistic. We write `algorithmic' for `Kolmogorov complexity' by convention. The main properties of an algorithmic sufficient statistic are the following: If [math]\displaystyle{ S }[/math] is an algorithmic sufficient statistic for [math]\displaystyle{ x }[/math], then
- [math]\displaystyle{ K(S)+\log |S| = K(x)+O(1) }[/math].
That is, the two-part description of [math]\displaystyle{ x }[/math] using the model [math]\displaystyle{ S }[/math] and as data-to-model code the index of [math]\displaystyle{ x }[/math] in the enumeration of [math]\displaystyle{ S }[/math] in [math]\displaystyle{ \log |S| }[/math] bits, is as concise as the shortest one-part code of [math]\displaystyle{ x }[/math] in [math]\displaystyle{ K(x) }[/math] bits. This can be easily seen as follows:
- [math]\displaystyle{ K(x) \leq K(x,S) +O(1) \leq K(S)+K(x|S)+O(1) \leq K(S)+\log|S|+O(1) \leq K(x)+O(1) }[/math],
where the first three inequalities are straightforward and the last one follows from the sufficiency property. Since the two ends of the chain coincide up to [math]\displaystyle{ O(1) }[/math], every inequality holds with equality up to [math]\displaystyle{ O(1) }[/math], and in particular [math]\displaystyle{ K(x|S)=\log |S| +O(1) }[/math]. (For example, given [math]\displaystyle{ S \ni x }[/math], we can describe [math]\displaystyle{ x }[/math] self-delimitingly (so that its end can be determined) in [math]\displaystyle{ \log |S|+O(1) }[/math] bits.) Therefore, the randomness deficiency [math]\displaystyle{ \log |S|-K(x|S) }[/math] of [math]\displaystyle{ x }[/math] in [math]\displaystyle{ S }[/math] is a constant, which means that [math]\displaystyle{ x }[/math] is a typical (random) element of S. However, there can be models [math]\displaystyle{ S }[/math] containing [math]\displaystyle{ x }[/math] that are not sufficient statistics. An algorithmic sufficient statistic [math]\displaystyle{ S }[/math] for [math]\displaystyle{ x }[/math] has the additional property, apart from being a model of best fit, that [math]\displaystyle{ K(x,S)=K(x)+O(1) }[/math], and therefore, by the Kolmogorov complexity symmetry of information (the information about [math]\displaystyle{ x }[/math] in [math]\displaystyle{ S }[/math] is about the same as the information about [math]\displaystyle{ S }[/math] in x), we have [math]\displaystyle{ K(S|x^*)=O(1) }[/math]: the algorithmic sufficient statistic [math]\displaystyle{ S }[/math] is a model of best fit that is almost completely determined by [math]\displaystyle{ x }[/math]. ([math]\displaystyle{ x^* }[/math] is a shortest program for [math]\displaystyle{ x }[/math].) The algorithmic sufficient statistic associated with the least such [math]\displaystyle{ \alpha }[/math] is called the algorithmic minimal sufficient statistic.
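As a standard illustration (stated, as is customary for such examples, up to [math]\displaystyle{ O(\log n) }[/math] rather than [math]\displaystyle{ O(1) }[/math] precision): let [math]\displaystyle{ x }[/math] be a string of length [math]\displaystyle{ n }[/math] containing exactly [math]\displaystyle{ k }[/math] ones, and let [math]\displaystyle{ S }[/math] be the set of all strings of length [math]\displaystyle{ n }[/math] containing exactly [math]\displaystyle{ k }[/math] ones. Then [math]\displaystyle{ K(S)=O(\log n) }[/math], since describing [math]\displaystyle{ n }[/math] and [math]\displaystyle{ k }[/math] suffices, and [math]\displaystyle{ \log|S|=\log\binom{n}{k} }[/math]. If [math]\displaystyle{ x }[/math] is a typical (random) element of this [math]\displaystyle{ S }[/math], then [math]\displaystyle{ K(x)=\log\binom{n}{k}+O(\log n) }[/math], the two-part code length [math]\displaystyle{ K(S)+\log|S| }[/math] equals [math]\displaystyle{ K(x) }[/math] up to [math]\displaystyle{ O(\log n) }[/math], and [math]\displaystyle{ S }[/math] plays the role of a sufficient statistic for [math]\displaystyle{ x }[/math] at that precision.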
Two further structure functions are considered. The MDL structure function [math]\displaystyle{ \lambda_x(\alpha) }[/math] is explained below. The goodness-of-fit structure function [math]\displaystyle{ \beta_x(\alpha) }[/math] is the least randomness deficiency (see above) of any model [math]\displaystyle{ S \ni x }[/math] for [math]\displaystyle{ x }[/math] such that [math]\displaystyle{ K(S) \leq \alpha }[/math]. This structure function gives the goodness of fit of a model [math]\displaystyle{ S }[/math] (containing x) for the string x: when it is low the model fits well, and when it is high the model does not fit well. If [math]\displaystyle{ \beta_x (\alpha) =0 }[/math] for some [math]\displaystyle{ \alpha }[/math], then there is a model [math]\displaystyle{ S \ni x }[/math] with [math]\displaystyle{ K(S) \leq \alpha }[/math] for which [math]\displaystyle{ x }[/math] is typical (random); that is, [math]\displaystyle{ S }[/math] is a best-fitting model for x. For more details see[1] and especially[3][4].
Selection of properties
Within the constraints that the graph goes down at an angle of at least 45 degrees, that it starts at about n and ends approximately at [math]\displaystyle{ K(x) }[/math], every graph is realized (up to an [math]\displaystyle{ O(\log n) }[/math] additive term in argument and value) by the structure function of some data x, and vice versa. Where the graph first hits the diagonal, the argument (complexity) is that of the minimal sufficient statistic. Determining this place is incomputable.[3]
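To see how such graphs arise (again an illustration with [math]\displaystyle{ O(\log n) }[/math] precision): if [math]\displaystyle{ x }[/math] is random in [math]\displaystyle{ \{0,1\}^n }[/math], so that [math]\displaystyle{ K(x)=n+O(\log n) }[/math], then the models [math]\displaystyle{ S_\alpha }[/math] consisting of all strings of length [math]\displaystyle{ n }[/math] that share the first [math]\displaystyle{ \alpha }[/math] bits of [math]\displaystyle{ x }[/math] satisfy [math]\displaystyle{ K(S_\alpha)=\alpha+O(\log n) }[/math] and [math]\displaystyle{ \log|S_\alpha|=n-\alpha }[/math], so the structure function follows the sufficiency line from about [math]\displaystyle{ (O(\log n),n) }[/math] down to about [math]\displaystyle{ (K(x),0) }[/math], and the minimal sufficient statistic already occurs at complexity [math]\displaystyle{ O(\log n) }[/math] (with witness [math]\displaystyle{ \{0,1\}^n }[/math]). Non-stochastic strings behave in the opposite way: their structure function stays close to [math]\displaystyle{ n }[/math] until a high complexity level and only then drops steeply to the sufficiency line.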
Main property
It is proved that at each level [math]\displaystyle{ \alpha }[/math] of complexity the structure function allows us to select the best model [math]\displaystyle{ S }[/math] for the individual string x within a strip of [math]\displaystyle{ O(\log n) }[/math] with certainty, not with great probability.[3]
The MDL variant
The minimum description length (MDL) function: the length of the minimal two-part code for x, consisting of the model cost K(S) and the length of the index of x in S, in the model class of sets whose Kolmogorov complexity is upper bounded by [math]\displaystyle{ \alpha }[/math], is given by the MDL function or constrained MDL estimator:
- [math]\displaystyle{ \lambda_{x}(\alpha) = \min_{S} \{\Lambda(S): S \ni x,\; K(S) \leq \alpha\}, }[/math]
where [math]\displaystyle{ \Lambda(S)=\log|S|+K(S) \ge K(x)-O(1) }[/math] is the total length of the two-part code for x with the help of model S.
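In the same illustrative spirit as the earlier sketch, the constrained MDL estimator can be approximated over a hand-chosen model family, again with the incomputable [math]\displaystyle{ K(S) }[/math] replaced by a compressor-based upper bound; all names and choices below are hypothetical.

```python
import zlib
from math import log2, inf

def approx_K(description: bytes) -> int:
    # Compressor-based upper bound standing in for the incomputable K(.)
    return 8 * len(zlib.compress(description, 9))

def mdl_function(x: str, models, alpha: float) -> float:
    # Approximates lambda_x(alpha) = min{ Lambda(S) : x in S, K(S) <= alpha },
    # where Lambda(S) = K(S) + log|S| is the two-part code length, over a
    # finite family `models` of (description, set) pairs, as in the toy
    # setup of the earlier sketch.
    best = inf
    for description, S in models:
        cost = approx_K(description)
        if x in S and cost <= alpha:
            best = min(best, cost + log2(len(S)))
    return best
```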
Main property
It is proved that at each level [math]\displaystyle{ \alpha }[/math] of complexity the MDL function allows us to select the best model S for the individual string x within a strip of [math]\displaystyle{ O(\log n) }[/math] with certainty, not with great probability.[3]
Application in statistics
The mathematics developed above was taken as the foundation of MDL by its inventor, Jorma Rissanen.[5]
Probability models
For every computable probability distribution [math]\displaystyle{ P }[/math] it can be proved[6] that
- [math]\displaystyle{ -\log P(x) = \log |S|+O(\log n) }[/math].
For example, if [math]\displaystyle{ P }[/math] is some computable distribution on the set [math]\displaystyle{ S }[/math] of strings of length [math]\displaystyle{ n }[/math], then each [math]\displaystyle{ x \in S }[/math] has probability [math]\displaystyle{ P(x)=\exp(O(\log n))/|S|=n^{O(1)}/|S| }[/math]. Kolmogorov's structure function becomes
- [math]\displaystyle{ h'_{x}(\alpha) = \min_P \{-\log P(x) : P(x)\gt 0, K(P) \leq \alpha \} }[/math]
where x is a binary string of length n with [math]\displaystyle{ P(x)\gt 0 }[/math], [math]\displaystyle{ P }[/math] is a contemplated model (a computable probability distribution on [math]\displaystyle{ n }[/math]-length strings) for [math]\displaystyle{ x }[/math], [math]\displaystyle{ K(P) }[/math] is the Kolmogorov complexity of [math]\displaystyle{ P }[/math], and [math]\displaystyle{ \alpha }[/math] is an integer value bounding the complexity of the contemplated [math]\displaystyle{ P }[/math]'s. Clearly, this function is non-increasing and reaches [math]\displaystyle{ \log |\{x\}| =0 }[/math] for [math]\displaystyle{ \alpha = K(x)+c }[/math], where c is the number of bits required to change [math]\displaystyle{ x }[/math] into the distribution concentrated on [math]\displaystyle{ x }[/math], and [math]\displaystyle{ K(x) }[/math] is the Kolmogorov complexity of [math]\displaystyle{ x }[/math]. Then [math]\displaystyle{ h'_{x}(\alpha)=h_{x}(\alpha)+O(\log n) }[/math]. For every complexity level [math]\displaystyle{ \alpha }[/math], the function [math]\displaystyle{ h'_{x}(\alpha) }[/math] is the Kolmogorov complexity version of maximum likelihood (ML).
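As an illustration (a worked example, not taken from the cited sources): take the Bernoulli models [math]\displaystyle{ P_p }[/math] on strings of length [math]\displaystyle{ n }[/math], under which a string containing [math]\displaystyle{ k }[/math] ones has probability [math]\displaystyle{ p^k(1-p)^{n-k} }[/math], so that [math]\displaystyle{ -\log P_p(x)=-k\log p-(n-k)\log(1-p) }[/math]. Minimizing over [math]\displaystyle{ p }[/math] (ordinary maximum likelihood) gives [math]\displaystyle{ p=k/n }[/math] and [math]\displaystyle{ -\log P_{k/n}(x)=nH(k/n) }[/math], where [math]\displaystyle{ H }[/math] is the binary entropy function. The constraint [math]\displaystyle{ K(P)\leq\alpha }[/math] limits how complex a permitted model may be, for instance how finely the parameter [math]\displaystyle{ p }[/math] may be specified, and [math]\displaystyle{ h'_x(\alpha) }[/math] is the resulting complexity-constrained maximum-likelihood value.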
Main property
It is proved that at each level [math]\displaystyle{ \alpha }[/math] of complexity the structure function allows us to select the best model [math]\displaystyle{ P }[/math] for the individual string [math]\displaystyle{ x }[/math] within a strip of [math]\displaystyle{ O(\log n) }[/math] with certainty, not with great probability.[3]
The MDL variant and probability models
The MDL function: the length of the minimal two-part code for x, consisting of the model cost K(P) and the data-to-model code length [math]\displaystyle{ - \log P(x) }[/math], in the model class of computable probability mass functions whose Kolmogorov complexity is upper bounded by [math]\displaystyle{ \alpha }[/math], is given by the MDL function or constrained MDL estimator:
- [math]\displaystyle{ \lambda'_{x}(\alpha) = \min_{P} \{\Lambda(P): P(x)\gt 0,\; K(P) \leq \alpha\}, }[/math]
where [math]\displaystyle{ \Lambda(P)=-\log P(x)+K(P) \geq K(x)-O(1) }[/math] is the total length of the two-part code for x with the help of model P.
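Continuing the Bernoulli illustration above (again only as an example): for a string [math]\displaystyle{ x }[/math] of length [math]\displaystyle{ n }[/math] with [math]\displaystyle{ k }[/math] ones, the maximum-likelihood parameter [math]\displaystyle{ p=k/n }[/math] can be described in about [math]\displaystyle{ \log(n+1) }[/math] bits once [math]\displaystyle{ n }[/math] is known, so [math]\displaystyle{ K(P_{k/n})=O(\log n) }[/math] and [math]\displaystyle{ \Lambda(P_{k/n})=nH(k/n)+O(\log n) }[/math], the familiar two-part MDL code length for a Bernoulli model.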
Main property
It is proved that at each level [math]\displaystyle{ \alpha }[/math] of complexity the MDL function allows us to select the best model P for the individual string x within a strip of [math]\displaystyle{ O(\log n) }[/math] with certainty, not with great probability.[3]
Extension to rate distortion and denoising
It turns out that the approach can be extended to a theory of rate distortion and denoising of individual finite sequences[7] using Kolmogorov complexity. Experiments using real compressor programs have been carried out with success.[8] Here the assumption is that for natural data the Kolmogorov complexity is not far from the length of a compressed version produced by a good compressor.
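A minimal sketch of that working assumption (illustrative only; the choice of compressors and of the inputs is arbitrary): the compressed length of a string under a real compressor is a computable upper bound on its Kolmogorov complexity.

```python
import bz2
import os
import zlib

def compressed_bits(data: bytes) -> int:
    # Best of two off-the-shelf compressors, used as a computable upper
    # bound on K(data); the true Kolmogorov complexity is incomputable.
    return 8 * min(len(zlib.compress(data, 9)), len(bz2.compress(data, 9)))

print(compressed_bits(b"01" * 512))       # highly regular: compresses to very few bits
print(compressed_bits(os.urandom(1024)))  # random bytes: hardly compresses at all
```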
References
1. Cover, Thomas M.; Thomas, Joy A. (1991). Elements of information theory. New York: Wiley. pp. 175–178. ISBN 978-0471062592. https://archive.org/details/elementsofinform0000cove.
2. Abstract of a talk for the Moscow Mathematical Society, Uspekhi Mat. Nauk, Volume 29, Issue 4(178), Communications of the Moscow Mathematical Society, page 155 (in the Russian edition, not translated into English).
3. Vereshchagin, N.K.; Vitanyi, P.M.B. (1 December 2004). "Kolmogorov's Structure Functions and Model Selection". IEEE Transactions on Information Theory 50 (12): 3265–3290. doi:10.1109/TIT.2004.838346.
4. Gacs, P.; Tromp, J.T.; Vitanyi, P.M.B. (2001). "Algorithmic statistics". IEEE Transactions on Information Theory 47 (6): 2443–2463. doi:10.1109/18.945257.
5. Rissanen, Jorma (2007). Information and complexity in statistical modeling (online ed.). New York: Springer. ISBN 978-0-387-36610-4. https://www.springer.com/computer/theoretical+computer+science/book/978-0-387-36610-4.
6. Shen, A.Kh. (1983). "The concept of (α, β)-stochasticity in the Kolmogorov sense, and its properties". Soviet Math. Dokl. 28 (1): 295–299.
7. Vereshchagin, Nikolai K.; Vitanyi, Paul M.B. (1 July 2010). "Rate Distortion and Denoising of Individual Data Using Kolmogorov Complexity". IEEE Transactions on Information Theory 56 (7): 3438–3454. doi:10.1109/TIT.2010.2048491.
8. de Rooij, Steven; Vitanyi, Paul (1 March 2012). "Approximating Rate-Distortion Graphs of Individual Data: Experiments in Lossy Compression and Denoising". IEEE Transactions on Computers 61 (3): 395–407. doi:10.1109/TC.2011.25.
Literature
- Cover, T.M.; P. Gacs; R.M. Gray (1989). "Kolmogorov's contributions to Information Theory and Algorithmic Complexity". Annals of Probability 17 (3): 840–865. doi:10.1214/aop/1176991250.
- Kolmogorov, A. N.; Uspenskii, V. A. (1 January 1987). "Algorithms and Randomness". Theory of Probability and Its Applications 32 (3): 389–412. doi:10.1137/1132060. http://epubs.siam.org/tvp/resource/1/tprbau/v32/i3/p389_s1.
- Li, M.; Vitányi, P.M.B. (2008). An introduction to Kolmogorov complexity and its applications (3rd ed.). New York: Springer. ISBN 978-0387339986. Especially pp. 401–431 about the Kolmogorov structure function, and pp. 613–629 about rate distortion and denoising of individual sequences.
- Shen, A. (1 April 1999). "Discussion on Kolmogorov Complexity and Statistical Analysis". The Computer Journal 42 (4): 340–342. doi:10.1093/comjnl/42.4.340.
- V'yugin, V.V. (1987). "On Randomness Defect of a Finite Object Relative to Measures with Given Complexity Bounds". Theory of Probability and Its Applications 32 (3): 508–512. doi:10.1137/1132071. http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=tvp&paperid=1451&option_lang=eng.
- V'yugin, V. V. (1 April 1999). "Algorithmic Complexity and Stochastic Properties of Finite Binary Sequences". The Computer Journal 42 (4): 294–317. doi:10.1093/comjnl/42.4.294.