Stochastic chains with memory of variable length


Stochastic chains with memory of variable length are a family of stochastic chains of finite order in a finite alphabet, such that, at each instant, only a finite suffix of the past, called the context, is needed to predict the next symbol. These models were introduced in the information theory literature by Jorma Rissanen in 1983,[1] as a universal tool for data compression, but have since been used to model data in different areas such as biology,[2] linguistics[3] and music.[4]

Definition

A stochastic chain with memory of variable length is a stochastic chain [math]\displaystyle{ (X_n)_{n\in \mathbb{Z}} }[/math], taking values in a finite alphabet [math]\displaystyle{ A }[/math], and characterized by a probabilistic context tree [math]\displaystyle{ (\tau,p) }[/math], such that

  • [math]\displaystyle{ \tau }[/math] is the set of all contexts. A context [math]\displaystyle{ X_{n-l},\ldots,X_{n-1} }[/math], where [math]\displaystyle{ l }[/math] is the length of the context, is a finite portion of the past [math]\displaystyle{ X_{-\infty},\ldots,X_{n-1} }[/math] that is relevant for predicting the next symbol [math]\displaystyle{ X_{n} }[/math];
  • [math]\displaystyle{ p }[/math] is a family of transition probabilities associated with each context.
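Concretely, the pair [math]\displaystyle{ (\tau,p) }[/math] can be stored as a map from contexts to next-symbol distributions. The following minimal Python sketch uses an illustrative two-symbol tree whose contexts and probabilities are placeholders, not values from the source; it predicts the next symbol by scanning backwards through the past until a context of [math]\displaystyle{ \tau }[/math] is matched.

```python
import random

# Contexts are tuples written in time order: (1, 0) means
# X_{n-2} = 1, X_{n-1} = 0. The probabilities are illustrative.
ALPHABET = (0, 1)
TREE = {
    (1,):      {0: 0.5, 1: 0.5},
    (1, 0):    {0: 0.6, 1: 0.4},
    (1, 0, 0): {0: 0.7, 1: 0.3},
}

def find_context(past, tree):
    """Return the shortest suffix of `past` that belongs to the tree."""
    for length in range(1, len(past) + 1):
        suffix = tuple(past[-length:])
        if suffix in tree:
            return suffix
    return None  # no context matches (e.g. a past with no 1 at all)

def next_symbol(past, tree):
    """Sample the next symbol from p(. | context), falling back to uniform."""
    ctx = find_context(past, tree)
    dist = tree[ctx] if ctx else {a: 1 / len(ALPHABET) for a in ALPHABET}
    return random.choices(list(dist), weights=list(dist.values()))[0]

past = [0, 1, 0, 0]
print(find_context(past, TREE))  # (1, 0, 0): the only relevant suffix
print(next_symbol(past, TREE))
```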

History

The class of stochastic chains with memory of variable length was introduced by Jorma Rissanen in the article A Universal Data Compression System.[1] This class of stochastic chains was popularized in the statistical and probabilistic community by P. Bühlmann and A. J. Wyner in 1999, in the article Variable Length Markov Chains. Named by Bühlmann and Wyner as “variable length Markov chains” (VLMC), these chains are also known as “variable-order Markov models” (VOM), “probabilistic suffix trees”[2] and “context tree models”.[5] The name “stochastic chains with memory of variable length” seems to have been introduced by Galves and Löcherbach, in 2008, in the article of the same name.[6]

Examples

Interrupted light source

Consider a system consisting of a lamp, an observer and a door between the two. The lamp has two possible states: on, represented by 1, or off, represented by 0. When the lamp is on, the observer may see the light through the door, depending on the state of the door at that time: open, 1, or closed, 0. The state of the door is independent of the state of the lamp.

Let [math]\displaystyle{ (X_n)_{n\geq 0} }[/math] be a Markov chain that represents the state of the lamp, with values in [math]\displaystyle{ A=\{0,1\} }[/math] and with transition probability matrix [math]\displaystyle{ p }[/math]. Also, let [math]\displaystyle{ (\xi _n)_{n\geq 0} }[/math] be a sequence of independent random variables that represents the door's states, also taking values in [math]\displaystyle{ A }[/math], independent of the chain [math]\displaystyle{ (X_n)_{n\geq 0} }[/math] and such that

[math]\displaystyle{ \mathbb{P}(\xi_n = 1) = 1 - \varepsilon }[/math]

where [math]\displaystyle{ 0 \lt \varepsilon \lt 1 }[/math]. Define a new sequence [math]\displaystyle{ (Z_n)_{n \ge 0} }[/math] such that

[math]\displaystyle{ Z_n = X_n \xi_n }[/math] for every [math]\displaystyle{ n \ge 0. }[/math]

In order to predict the next symbol, it suffices to determine the last instant at which the observer could see the lamp on, i.e. to identify the largest instant [math]\displaystyle{ k }[/math], with [math]\displaystyle{ k\lt n }[/math], at which [math]\displaystyle{ Z_k=1 }[/math].

Using a context tree, it is possible to represent the past states of the sequence, showing which of them are relevant for identifying the next state.

The stochastic chain [math]\displaystyle{ (Z_n)_{n\in\mathbb{Z}} }[/math] is, then, a chain with memory of variable length, taking values in [math]\displaystyle{ A }[/math] and compatible with the probabilistic context tree [math]\displaystyle{ (\tau,p) }[/math], where

[math]\displaystyle{ \tau = \{1,10,100,\cdots\} \cup \{0^\infty\}. }[/math]
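A short simulation of this construction is sketched below; the lamp transition probabilities and the value of [math]\displaystyle{ \varepsilon }[/math] are illustrative placeholders, not values from the source.

```python
import random

random.seed(0)
EPS = 0.1                 # P(door closed at any instant) = epsilon
P = {0: 0.6, 1: 0.8}      # P[x] = P(X_{n+1} = 1 | X_n = x), illustrative

n_steps, x = 20, 1        # horizon and initial lamp state
z = []                    # the observed chain (Z_n)
for _ in range(n_steps):
    xi = 1 if random.random() < 1 - EPS else 0  # door open with prob 1 - eps
    z.append(x * xi)                            # Z_n = X_n * xi_n
    x = 1 if random.random() < P[x] else 0      # lamp moves as a Markov chain

# The context of the next symbol is 1, 10, 100, ... back to the last
# observed 1 (or the whole past if the lamp was never seen on).
last_on = max((k for k, v in enumerate(z) if v == 1), default=None)
print("Z_0 ... Z_{n-1}:", z)
print("last instant the lamp was seen on:", last_on)
```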

Inference in chains with memory of variable length

Given a sample [math]\displaystyle{ X_{1},\ldots,X_{n} }[/math], one can find the appropriate context tree using the following algorithms.

The context algorithm

In the article A Universal Data Compression System,[1] Rissanen introduced a consistent algorithm to estimate the probabilistic context tree that generates the data. The algorithm can be summarized in two steps:

  1. Given the sample produced by a chain with memory of variable length, start with the maximal tree whose branches are all the candidate contexts for the sample;
  2. The branches of this tree are then pruned until the smallest tree that is well adapted to the data is obtained. Whether or not to shorten a context is decided through a given gain function, such as the log-likelihood ratio.

Let [math]\displaystyle{ X_{0},\ldots,X_{n-1} }[/math] be a sample from a finite probabilistic context tree [math]\displaystyle{ (\tau,p) }[/math]. For any sequence [math]\displaystyle{ x_{-j}^{-1} }[/math] with [math]\displaystyle{ j \leq n }[/math], denote by [math]\displaystyle{ N_n(x_{-j}^{-1}) }[/math] the number of occurrences of the sequence in the sample, i.e.,

[math]\displaystyle{ N_n(x_{-j}^{-1}) = \sum_{t=0}^{n-j} \mathbf{1}\left\{X_t^{t+j-1} = x_{-j}^{-1}\right\} }[/math]
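In code, this count is a sliding-window tally over the sample; the helper name `count_occurrences` is chosen here for illustration.

```python
def count_occurrences(sample, word):
    """N_n(word): occurrences of `word` as a contiguous block in the sample."""
    j = len(word)
    return sum(1 for t in range(len(sample) - j + 1)
               if tuple(sample[t:t + j]) == tuple(word))

sample = [0, 1, 0, 0, 1, 0, 1, 1, 0]
print(count_occurrences(sample, (0, 1)))  # the word "01" occurs 3 times
```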

Rissanen first built the maximal candidate context, given by [math]\displaystyle{ X_{n-K(n)}^{n-1} }[/math], where [math]\displaystyle{ K(n)=C\log{n} }[/math] and [math]\displaystyle{ C }[/math] is an arbitrary positive constant. The intuitive reason for the choice of [math]\displaystyle{ C\log{n} }[/math] is that it is impossible to estimate the probabilities of sequences of length greater than [math]\displaystyle{ \log{n} }[/math] from a sample of size [math]\displaystyle{ n }[/math].

From there, Rissanen shortens the maximal candidate by successively pruning branches according to a sequence of tests based on a statistical likelihood ratio. More formally, if [math]\displaystyle{ \sum_{b \in A} N_n(x_{-k}^{-1}b) \gt 0 }[/math], define the estimator of the transition probability [math]\displaystyle{ p }[/math] by

[math]\displaystyle{ \hat{p}_n(a\mid x_{-k}^{-1}) = \frac{N_n(x_{-k}^{-1}a)}{\sum_{b \in A} N_n (x_{-k}^{-1} b)} }[/math]

where [math]\displaystyle{ x_{-j}^{-1}a=(x_{-j}, \ldots, x_{-1},a) }[/math]. If [math]\displaystyle{ \sum_{b \in A}N_n(x_{-k}^{-1}b) \,=\,0 }[/math], define [math]\displaystyle{ \hat{p}_n(a\mid x_{-k}^{-1}) \,=\, 1/|A| }[/math].
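A sketch of this estimator, including the uniform [math]\displaystyle{ 1/|A| }[/math] fallback for unseen contexts (counting as in the previous snippet):

```python
ALPHABET = (0, 1)

def count_occurrences(sample, word):
    j = len(word)
    return sum(1 for t in range(len(sample) - j + 1)
               if tuple(sample[t:t + j]) == tuple(word))

def p_hat(sample, context, a):
    """Empirical estimate of p(a | context) from the sample."""
    counts = {b: count_occurrences(sample, tuple(context) + (b,)) for b in ALPHABET}
    total = sum(counts.values())
    if total == 0:
        return 1 / len(ALPHABET)   # convention for a context never seen in the data
    return counts[a] / total

sample = [0, 1, 0, 0, 1, 0, 1, 1, 0]
print(p_hat(sample, (1, 0), 0))    # p̂(0 | 10) = 1/2 in this sample
```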

For [math]\displaystyle{ i \geq 1 }[/math], define

[math]\displaystyle{ \Lambda_n (x_{- i }^{-1} ) \,=\, 2 \, \sum_{y \in A} \sum_{a \in A} N_n(y x_{-i}^{-1}a) \log\left[\frac{\hat{p}_n(a\mid x_{-i}^{-1} y)} {\hat{p}_n(a\mid x_{-i}^{-1})} \right]\, }[/math]

where [math]\displaystyle{ y x_{-i}^{-1}=(y,x_{-i}, \ldots , x_{-1}) }[/math] and

[math]\displaystyle{ \hat{p}_n(a\mid x_{-i}^{-1}y)= \frac{N_n(y x_{-i}^{-1}a)}{\sum_{b \in A} N_n (y x_{-i}^{-1}b)}. }[/math]

Note that [math]\displaystyle{ \Lambda_n (x_{- i }^{-1}) }[/math] is the log-likelihood ratio statistic for testing the consistency of the sample with the probabilistic context tree [math]\displaystyle{ (\tau,p) }[/math] against the alternative that it is consistent with [math]\displaystyle{ (\tau',p') }[/math], where [math]\displaystyle{ \tau }[/math] and [math]\displaystyle{ \tau' }[/math] differ only by a set of sibling nodes.
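A sketch of the statistic follows; terms with zero count contribute zero (the usual [math]\displaystyle{ 0\log 0 = 0 }[/math] convention), and counting and estimation follow the previous snippets.

```python
import math

ALPHABET = (0, 1)

def N(sample, word):
    j = len(word)
    return sum(1 for t in range(len(sample) - j + 1)
               if tuple(sample[t:t + j]) == tuple(word))

def p_hat(sample, ctx, a):
    total = sum(N(sample, tuple(ctx) + (b,)) for b in ALPHABET)
    return 1 / len(ALPHABET) if total == 0 else N(sample, tuple(ctx) + (a,)) / total

def lam(sample, ctx):
    """Lambda_n(ctx): 2 * sum_{y,a} N(y ctx a) * log[p̂(a | y ctx) / p̂(a | ctx)]."""
    total = 0.0
    for y in ALPHABET:
        ext = (y,) + tuple(ctx)            # the extended context y x_{-i}^{-1}
        for a in ALPHABET:
            c = N(sample, ext + (a,))
            if c > 0:                      # 0 * log(...) is taken as 0
                total += c * math.log(p_hat(sample, ext, a) / p_hat(sample, ctx, a))
    return 2 * total

sample = [0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1]
print(lam(sample, (1, 0)))                 # tests lengthening the context "10"
```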

The length of the current estimated context is defined by

[math]\displaystyle{ \hat{\ell}_n(X_0^{n-1})= \max \left\{i=1,\ldots, K(n): \Lambda_n (X_{n-i}^{n-1}) \,\gt \, C \log n \right\}\, }[/math]

where [math]\displaystyle{ C }[/math] is any positive constant. Finally, Rissanen[1] proved the following result: given a sample [math]\displaystyle{ X_0,\ldots, X_{n-1} }[/math] from a finite probabilistic context tree [math]\displaystyle{ (\tau,p) }[/math], then

[math]\displaystyle{ P\left( \hat{\ell}_n(X_0^{n-1}) \neq \ell(X_0^{n-1}) \right) \longrightarrow 0, }[/math]

as [math]\displaystyle{ n \rightarrow \infty }[/math].
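Putting the pieces together, here is a sketch of the resulting length estimator, with the helpers restated so the snippet runs on its own; [math]\displaystyle{ C=1 }[/math] is an arbitrary choice of the constant.

```python
import math

ALPHABET = (0, 1)

def N(sample, word):
    j = len(word)
    return sum(1 for t in range(len(sample) - j + 1)
               if tuple(sample[t:t + j]) == tuple(word))

def p_hat(sample, ctx, a):
    total = sum(N(sample, tuple(ctx) + (b,)) for b in ALPHABET)
    return 1 / len(ALPHABET) if total == 0 else N(sample, tuple(ctx) + (a,)) / total

def lam(sample, ctx):
    total = 0.0
    for y in ALPHABET:
        ext = (y,) + tuple(ctx)
        for a in ALPHABET:
            c = N(sample, ext + (a,))
            if c > 0:
                total += c * math.log(p_hat(sample, ext, a) / p_hat(sample, ctx, a))
    return 2 * total

def estimated_length(sample, C=1.0):
    """l̂_n: the largest i <= K(n) with Lambda_n above the C log n threshold."""
    n = len(sample)
    K = max(1, int(C * math.log(n)))
    significant = [i for i in range(1, K + 1)
                   if lam(sample, tuple(sample[n - i:])) > C * math.log(n)]
    return max(significant, default=0)   # 0 if no depth passes the test

sample = [0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1]
print(estimated_length(sample))
```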

Bayesian information criterion (BIC)

The BIC estimator of the context tree with penalty constant [math]\displaystyle{ c\gt 0 }[/math] is defined as

[math]\displaystyle{ \hat{\tau}_\mathrm{BIC}=\underset{\tau \in \mathcal{T}_n}{\arg \max}\{\log L_\tau (X_1^n)-c\,\mathrm{df}(\tau)\log n \} }[/math]

where [math]\displaystyle{ L_\tau(X_1^n) }[/math] is the maximum likelihood of the sample under the model with context tree [math]\displaystyle{ \tau }[/math], [math]\displaystyle{ \mathcal{T}_n }[/math] is the set of candidate trees, and [math]\displaystyle{ \mathrm{df}(\tau) }[/math] is the number of free parameters of that model.
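A sketch of the criterion over an explicit list of candidate trees, assuming [math]\displaystyle{ \mathrm{df}(\tau) = (|A|-1)|\tau| }[/math] (one free distribution per context), a common convention stated here as an assumption:

```python
import math

ALPHABET = (0, 1)

def N(sample, word):
    j = len(word)
    return sum(1 for t in range(len(sample) - j + 1)
               if tuple(sample[t:t + j]) == tuple(word))

def log_likelihood(sample, tree):
    """log L_tau: sum over contexts w and symbols a of N(wa) log p̂(a | w)."""
    total = 0.0
    for w in tree:
        counts = {a: N(sample, tuple(w) + (a,)) for a in ALPHABET}
        s = sum(counts.values())
        for c in counts.values():
            if c > 0:
                total += c * math.log(c / s)
    return total

def bic_tree(sample, candidate_trees, c=0.5):
    """Return the candidate tree maximizing the penalized log-likelihood."""
    n = len(sample)
    def score(tree):
        df = (len(ALPHABET) - 1) * len(tree)   # assumed parameter count
        return log_likelihood(sample, tree) - c * df * math.log(n)
    return max(candidate_trees, key=score)

sample = [0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1]
trees = [
    [(0,), (1,)],             # an ordinary order-1 Markov chain
    [(0, 0), (1, 0), (1,)],   # a variable-length alternative
]
print(bic_tree(sample, trees))
```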

Smallest maximizer criterion (SMC)

The smallest maximizer criterion[3] selects the smallest tree [math]\displaystyle{ \tau }[/math] from a set of champion trees [math]\displaystyle{ \mathcal{C} }[/math] such that

[math]\displaystyle{ \lim_{n\to \infty} \frac{\log L_\tau (X^n_1) - \log L_{\hat{\tau}}(X^n_1)}{n} = 0 }[/math]
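A sketch of the selection step, given a precomputed set of champion trees (for example, the distinct BIC maximizers obtained as the penalty constant varies); the limit above is approximated by a finite-sample tolerance, which is an assumption of this sketch rather than part of the source.

```python
def smallest_maximizer(champions, n, tol=1e-3):
    """champions: list of (tree, log_likelihood) pairs. Returns the smallest
    tree whose per-symbol log-likelihood gap to the best tree is negligible."""
    best_ll = max(ll for _, ll in champions)
    for tree, ll in sorted(champions, key=lambda pair: len(pair[0])):
        if (best_ll - ll) / n < tol:       # gap per symbol vanishes
            return tree
    return None

# Hypothetical usage: two champion trees and their log-likelihoods.
champions = [([(0,), (1,)], -120.4), ([(0, 0), (1, 0), (1,)], -119.9)]
print(smallest_maximizer(champions, n=200))
```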

References

  1. Rissanen, J. (Sep 1983). "A Universal Data Compression System". IEEE Transactions on Information Theory 29 (5): 656–664. doi:10.1109/TIT.1983.1056741.
  2. Bejerano, G. (2001). "Variations on probabilistic suffix trees: statistical modeling and prediction of protein families". Bioinformatics 17 (5): 23–43. doi:10.1093/bioinformatics/17.1.23. PMID 11222260.
  3. "Context tree selection and linguistic rhythm retrieval from written texts". The Annals of Applied Statistics 6 (5): 186–209. 2012. doi:10.1214/11-AOAS511. http://ams.impa.br/mathscinet/search/publdoc.html?pg1=INDI&s1=70920&vfpref=html&r=6&mx-pid=2951534.
  4. "Using machine-learning methods for musical style modeling". Computer 36 (10): 73–80. 2003. doi:10.1109/MC.2003.1236474.
  5. "Joint estimation of intersecting context tree models". Scandinavian Journal of Statistics 40 (2): 344–362. 2012. doi:10.1111/j.1467-9469.2012.00814.x.
  6. Galves, A.; Löcherbach, E. (2008). "Stochastic chains with memory of variable length". TICSP Series 38: 117–133. https://hal.archives-ouvertes.fr/hal-00798528/document.