Information distance

Information distance is the distance between two finite objects (represented as computer files) expressed as the number of bits in the shortest program which transforms one object into the other one, or vice versa, on a universal computer. It is an extension of Kolmogorov complexity.[1] The Kolmogorov complexity of a single finite object is the information in that object; the information distance between a pair of finite objects is the minimum information required to go from one object to the other, or vice versa. Information distance was first defined and investigated, based on thermodynamic principles, in [2]; see also [3]. It subsequently achieved its final form in [4]. It is applied in the normalized compression distance and the normalized Google distance.

Properties

Formally the information distance [math]\displaystyle{ ID(x,y) }[/math] between [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math] is defined by

[math]\displaystyle{ ID(x,y) = \min \{|p|: p(x)=y \; \& \;p(y) =x \}, }[/math]

with [math]\displaystyle{ p }[/math] a finite binary program for the fixed universal computer with finite binary strings [math]\displaystyle{ x,y }[/math] as inputs. In [4] it is proven that [math]\displaystyle{ ID(x,y) = E(x,y)+O(\log \max \{K(x\mid y), K(y\mid x)\} ) }[/math] with

[math]\displaystyle{ E(x,y) = \max \{K(x\mid y), K(y\mid x)\}, }[/math]

where [math]\displaystyle{ K(\cdot \mid \cdot) }[/math] is the conditional Kolmogorov complexity[1] of the prefix type.[5] This [math]\displaystyle{ E(x,y) }[/math] is the important quantity.
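
For example, as an illustration in the spirit of the definition above (stated up to the additive terms inherent in Kolmogorov complexity): if [math]\displaystyle{ y=xx }[/math] is the string [math]\displaystyle{ x }[/math] repeated twice, then [math]\displaystyle{ K(x\mid y)=O(1) }[/math] and [math]\displaystyle{ K(y\mid x)=O(1) }[/math], so [math]\displaystyle{ E(x,y)=O(1) }[/math]; if [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math] are independently chosen random strings of length [math]\displaystyle{ n }[/math], then [math]\displaystyle{ K(x\mid y)\approx K(y\mid x)\approx n }[/math], so [math]\displaystyle{ E(x,y)\approx n }[/math].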

Universality

Let [math]\displaystyle{ \Delta }[/math] be the class of upper semicomputable distances [math]\displaystyle{ D(x,y) }[/math] that satisfy the density condition

[math]\displaystyle{ \sum_{x:x \neq y} 2^{-D(x,y)} \leq 1 , \; \sum_{y:y \neq x} 2^{-D(x,y)} \leq 1. }[/math]

This excludes irrelevant distances such as [math]\displaystyle{ D(x,y)= \frac{1}{2} }[/math] for [math]\displaystyle{ x\neq y }[/math]; it ensures that, as the distance grows, the number of objects within that distance of a given object grows at a controlled rate. If [math]\displaystyle{ D \in \Delta }[/math] then [math]\displaystyle{ E(x,y) \leq D(x,y) }[/math] up to a constant additive term.[4] The probabilistic expression of the distance is the first cohomological class in information symmetric cohomology,[6] which may be conceived as a universality property.
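
To see why the density condition controls this growth (a standard counting argument): every [math]\displaystyle{ x \neq y }[/math] with [math]\displaystyle{ D(x,y) \leq d }[/math] contributes at least [math]\displaystyle{ 2^{-d} }[/math] to the sum [math]\displaystyle{ \sum_{x:x \neq y} 2^{-D(x,y)} \leq 1 }[/math], so at most [math]\displaystyle{ 2^{d} }[/math] objects lie within distance [math]\displaystyle{ d }[/math] of any given object [math]\displaystyle{ y }[/math].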

Metricity

The distance [math]\displaystyle{ E(x,y) }[/math] is a metric up to an additive [math]\displaystyle{ O(\log \max \{K(x\mid y), K(y\mid x)\} ) }[/math] term in the metric (in)equalities.[4] The probabilistic version of the metric is indeed unique, as shown by Han in 1981.[7]
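
Written out, with the additive term above abbreviated as [math]\displaystyle{ O(\log E) }[/math], the metric (in)equalities read

[math]\displaystyle{ E(x,x) = O(1), \quad E(x,y) = E(y,x), \quad E(x,z) \leq E(x,y) + E(y,z) + O(\log E), }[/math]

where the symmetry holds exactly, since [math]\displaystyle{ E }[/math] is defined as a maximum over both conditional complexities.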

Maximum overlap

If [math]\displaystyle{ K(x\mid y) \leq K(y\mid x) }[/math], so that [math]\displaystyle{ E(x,y) = K(y\mid x) }[/math], then there is a program [math]\displaystyle{ p }[/math] of length [math]\displaystyle{ K(x\mid y) }[/math] that converts [math]\displaystyle{ y }[/math] to [math]\displaystyle{ x }[/math], and a program [math]\displaystyle{ q }[/math] of length [math]\displaystyle{ K(y\mid x)-K(x\mid y) }[/math] such that the program [math]\displaystyle{ qp }[/math] converts [math]\displaystyle{ x }[/math] to [math]\displaystyle{ y }[/math]. (The programs are of the self-delimiting format, which means that one can decide where one program ends and the other begins in a concatenation of the programs.) That is, the shortest programs to convert between two objects can be made maximally overlapping: a shortest program that converts [math]\displaystyle{ y }[/math] to [math]\displaystyle{ x }[/math] can be extended by a second program such that the concatenation of the two is a shortest program that converts [math]\displaystyle{ x }[/math] to [math]\displaystyle{ y }[/math].[4]
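
In formulas, with [math]\displaystyle{ p }[/math] and [math]\displaystyle{ q }[/math] as above and ignoring the logarithmic additive terms,

[math]\displaystyle{ |p| = K(x\mid y), \quad |q| = K(y\mid x) - K(x\mid y), \quad |qp| = K(y\mid x) = E(x,y). }[/math]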

Minimum overlap

The programs to convert between objects [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math] can also be made minimally overlapping. There exists a program [math]\displaystyle{ p }[/math], of length [math]\displaystyle{ K(x\mid y) }[/math] up to an additive term of [math]\displaystyle{ O(\log (\max \{K(x\mid y), K(y\mid x)\}) ) }[/math], that maps [math]\displaystyle{ y }[/math] to [math]\displaystyle{ x }[/math] and has small complexity when [math]\displaystyle{ x }[/math] is known ([math]\displaystyle{ K(p\mid x)\approx 0 }[/math]). Interchanging the two objects, we obtain the other program.[8] Having in mind the parallelism between Shannon information theory and Kolmogorov complexity theory, one can say that this result parallels the Slepian–Wolf and Körner–Csiszár–Marton theorems.
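
In formulas, abbreviating [math]\displaystyle{ \max \{K(x\mid y), K(y\mid x)\} }[/math] as [math]\displaystyle{ E }[/math], the statement is that for every [math]\displaystyle{ x,y }[/math] there is a program [math]\displaystyle{ p }[/math] with

[math]\displaystyle{ p(y) = x, \quad |p| = K(x\mid y) + O(\log E), \quad K(p\mid x) \approx 0. }[/math]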

Applications

Theoretical

The result of An. A. Muchnik on minimum overlap above is an important theoretical application, showing that certain codes exist: to go to a finite target object from any object, there is a program that depends almost only on the target object. This result is fairly precise, and the error term cannot be significantly improved.[9] Information distance is treated in the textbook,[10] and it occurs in the Encyclopedia of Distances.[11]

Practical

To determine the similarity of objects such as genomes, languages, music, internet attacks and worms, software programs, and so on, the information distance is normalized and the Kolmogorov complexity terms are approximated by real-world compressors (the Kolmogorov complexity is a lower bound on the length in bits of a compressed version of the object). The result is the normalized compression distance (NCD) between the objects. This pertains to objects given as computer files, like the genome of a mouse or the text of a book. If the objects are just given by name, such as "Einstein", "table", the name of a book, or the name "mouse", compression does not make sense; outside information about what the name means is needed. Using a database (such as the internet) and a means to search the database (such as a search engine like Google) provides this information. Every search engine on a database that provides aggregate page counts can be used in the normalized Google distance (NGD). A Python package for computing all information distances and volumes, multivariate mutual information, conditional mutual information, joint entropies, and total correlations in a dataset of n variables is available.[12]
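
The following is a minimal sketch in Python of how these two distances can be computed in practice, assuming zlib as the real-world compressor; the function names and the page counts in the example usage are illustrative only and are not taken from any particular package.

import math
import zlib

def compressed_size(data: bytes) -> int:
    # Length in bytes of the zlib-compressed data; a computable stand-in
    # for the (uncomputable) Kolmogorov complexity of the data.
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance:
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    # where the numerator approximates E(x, y) = max{K(x|y), K(y|x)}.
    cx, cy, cxy = compressed_size(x), compressed_size(y), compressed_size(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def ngd(fx: float, fy: float, fxy: float, n: float) -> float:
    # Normalized Google distance from aggregate page counts:
    #   fx, fy -- number of pages containing each search term,
    #   fxy    -- number of pages containing both terms,
    #   n      -- total number of pages indexed by the search engine.
    lx, ly, lxy, ln = math.log(fx), math.log(fy), math.log(fxy), math.log(n)
    return (max(lx, ly) - lxy) / (ln - min(lx, ly))

# Example usage (hypothetical files and made-up page counts):
# ncd(open("mouse_genome.txt", "rb").read(), open("rat_genome.txt", "rb").read())
# ngd(fx=1.2e7, fy=3.4e6, fxy=8.0e5, n=5.0e10)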

References

  1. 1.0 1.1 A.N. Kolmogorov, Three approaches to the quantitative definition of information, Problems Inform. Transmission, 1:1(1965), 1–7
  2. M. Li, P.M.B. Vitanyi, Theory of Thermodynamics of Computation, Proc. IEEE Physics of Computation Workshop, Dallas, Texas, USA, 1992, 42–46
  3. M. Li, P.M.B. Vitanyi, Reversibility and Adiabatic Computation: Trading Time and Space for Energy, Proc. R. Soc. Lond. A 9 April 1996 vol. 452 no. 1947 769–789
  4. 4.0 4.1 4.2 4.3 4.4 C.H. Bennett, P. Gacs, M. Li, P.M.B. Vitanyi, W. Zurek, Information distance, IEEE Transactions on Information Theory, 44:4(1998), 1407–1423
  5. L.A. Levin, Laws of Information Conservation (Nongrowth) and Aspects of the Foundation of Probability Theory, Problems Inform. Transmission, 10:3(1974), 30–35
  6. P. Baudot, The Poincaré-Shannon Machine: Statistical Physics and Machine Learning Aspects of Information Cohomology, Entropy, 21:9 (2019), 881
  7. Te Sun Han, A uniqueness of Shannon information distance and related nonnegativity problems, Journal of Combinatorics, 6:4 (1981), 320–331
  8. Muchnik, Andrej A. (2002). "Conditional complexity and codes". Theoretical Computer Science 271 (1–2): 97–109. doi:10.1016/S0304-3975(01)00033-0. 
  9. N.K Vereshchagin, M.V. Vyugin, Independent minimum length programs to translate between given strings, Proc. 15th Ann. Conf. Computational Complexity, 2000, 138–144
  10. M. Hutter, Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability, Springer, 2005
  11. M.M. Deza, E Deza, Encyclopedia of Distances, Springer, 2009, doi:10.1007/978-3-642-00234-2
  12. "InfoTopo: Topological Information Data Analysis. Deep statistical unsupervised and supervised learning - File Exchange - Github". https://infotopo.readthedocs.io/en/latest/index.html. 

Related literature

  • Arkhangel'skii, A. V.; Pontryagin, L. S. (1990), General Topology I: Basic Concepts and Constructions Dimension Theory, Encyclopaedia of Mathematical Sciences, Springer, ISBN 3-540-18178-4