Autocorrelation (words)
In combinatorics, a branch of mathematics, the autocorrelation of a word is the set of periods of this word. More precisely, it is a sequence of values which indicate how much the end of a word looks like the beginning of a word. These values can be used to compute, for example, the average position of the first occurrence of this word in a random string.
Definition
In this article, A is an alphabet, and [math]\displaystyle{ w=w_1\dots w_n }[/math] a word over A of length n. The autocorrelation of [math]\displaystyle{ w }[/math] can be defined as the correlation of [math]\displaystyle{ w }[/math] with itself. However, this notion is defined directly below.
Autocorrelation vector
The autocorrelation vector of [math]\displaystyle{ w }[/math] is [math]\displaystyle{ c=(c_0,\dots,c_{n-1}) }[/math], with [math]\displaystyle{ c_i }[/math] being 1 if the prefix of length [math]\displaystyle{ n-i }[/math] equals the suffix of length [math]\displaystyle{ n-i }[/math], and with [math]\displaystyle{ c_i }[/math] being 0 otherwise. That is, [math]\displaystyle{ c_i }[/math] indicates whether [math]\displaystyle{ w_{i+1}\dots w_n=w_1\dots w_{n-i} }[/math].
For example, the autocorrelation vector of [math]\displaystyle{ aaa }[/math] is [math]\displaystyle{ (1,1,1) }[/math] since, for [math]\displaystyle{ i }[/math] equal to 0, 1 or 2, the prefix of length [math]\displaystyle{ n-i }[/math] is equal to the suffix of length [math]\displaystyle{ n-i }[/math]. The autocorrelation vector of [math]\displaystyle{ abb }[/math] is [math]\displaystyle{ (1,0,0) }[/math] since no proper prefix equals a suffix of the same length. Finally, the autocorrelation vector of [math]\displaystyle{ aabbaa }[/math] is [math]\displaystyle{ (1,0,0,0,1,1) }[/math], as shown in the following table, where each row shifts the word by [math]\displaystyle{ i }[/math] positions and the last column records [math]\displaystyle{ c_i }[/math]:
a | a | b | b | a | a |   |   |   |   |   |
---|---|---|---|---|---|---|---|---|---|---|---
a | a | b | b | a | a |   |   |   |   |   | 1
  | a | a | b | b | a | a |   |   |   |   | 0
  |   | a | a | b | b | a | a |   |   |   | 0
  |   |   | a | a | b | b | a | a |   |   | 0
  |   |   |   | a | a | b | b | a | a |   | 1
  |   |   |   |   | a | a | b | b | a | a | 1
Note that [math]\displaystyle{ c_0 }[/math] is always equal to 1, since the prefix and the suffix of length [math]\displaystyle{ n }[/math] are both equal to the word [math]\displaystyle{ w }[/math]. Similarly, [math]\displaystyle{ c_{n-1} }[/math] is 1 if and only if the first and the last letters are the same.
Autocorrelation polynomial
The autocorrelation polynomial of [math]\displaystyle{ w }[/math] is defined as [math]\displaystyle{ c(z)=c_0z^0+\dots+c_{n-1}z^{n-1} }[/math]. It is a polynomial of degree at most [math]\displaystyle{ n-1 }[/math].
For example, the autocorrelation polynomial of [math]\displaystyle{ aaa }[/math] is [math]\displaystyle{ 1+z+z^2 }[/math] and the autocorrelation polynomial of [math]\displaystyle{ abb }[/math] is [math]\displaystyle{ 1 }[/math]. Finally, the autocorrelation polynomial of [math]\displaystyle{ aabbaa }[/math] is [math]\displaystyle{ 1+z^4+z^5 }[/math].
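A short Python sketch can make these definitions concrete; the language and the function names below are choices made here for illustration only. It computes the autocorrelation vector directly from the definition and evaluates the corresponding polynomial at a given point:

```python
# A minimal sketch, directly following the definitions above.
# The function names are illustrative, not from any standard library.

def autocorrelation_vector(w):
    """Return (c_0, ..., c_{n-1}) where c_i = 1 iff the prefix of
    length n - i of w equals its suffix of length n - i."""
    n = len(w)
    return [1 if w[: n - i] == w[i:] else 0 for i in range(n)]

def autocorrelation_polynomial(w, z):
    """Evaluate c(z) = c_0 z^0 + ... + c_{n-1} z^{n-1}."""
    return sum(ci * z**i for i, ci in enumerate(autocorrelation_vector(w)))

print(autocorrelation_vector("aaa"))     # [1, 1, 1]
print(autocorrelation_vector("abb"))     # [1, 0, 0]
print(autocorrelation_vector("aabbaa"))  # [1, 0, 0, 0, 1, 1]
```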
Properties
We now describe some quantities that can be computed using the autocorrelation polynomial.
First occurrence of a word in a random string
Suppose that an infinite sequence [math]\displaystyle{ s }[/math] of letters of [math]\displaystyle{ A }[/math] is chosen at random, each letter independently and uniformly with probability [math]\displaystyle{ \frac{1}{|A|} }[/math], where [math]\displaystyle{ |A| }[/math] is the number of letters of [math]\displaystyle{ A }[/math]. Let [math]\displaystyle{ E }[/math] be the expected position of the first occurrence of [math]\displaystyle{ w }[/math] in [math]\displaystyle{ s }[/math]. Then [math]\displaystyle{ E }[/math] equals [math]\displaystyle{ |A|^nc\left(\frac{1}{|A|}\right) }[/math]. That is, each subword [math]\displaystyle{ v }[/math] of [math]\displaystyle{ w }[/math] which is both a prefix and a suffix causes the average position of the first occurrence of [math]\displaystyle{ w }[/math] to be [math]\displaystyle{ |A|^{|v|} }[/math] letters later. Here [math]\displaystyle{ |v| }[/math] is the length of [math]\displaystyle{ v }[/math].
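As a small Python sketch (using exact rational arithmetic from the standard fractions module; the function names and the reuse of the illustrative helper autocorrelation_vector are assumptions made here), the formula can be evaluated for the words used earlier. The binary examples discussed next can be checked in the same way.

```python
# Exact evaluation of E = |A|^n * c(1/|A|) from the formula above.
from fractions import Fraction

def autocorrelation_vector(w):
    # Same illustrative helper as in the sketch above.
    n = len(w)
    return [1 if w[: n - i] == w[i:] else 0 for i in range(n)]

def expected_first_occurrence(w, alphabet_size):
    """Exact value of |A|^n * c(1/|A|) for the word w."""
    n = len(w)
    z = Fraction(1, alphabet_size)
    c = autocorrelation_vector(w)
    return alphabet_size ** n * sum(ci * z ** i for i, ci in enumerate(c))

print(expected_first_occurrence("aaa", 2))     # 2^3 * (1 + 1/2 + 1/4) = 14
print(expected_first_occurrence("abb", 2))     # 2^3 * 1 = 8
print(expected_first_occurrence("aabbaa", 2))  # 2^6 * (1 + 1/16 + 1/32) = 70
```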
For example, over the binary alphabet [math]\displaystyle{ A=\{a,b\} }[/math], the average position of the first occurrence of [math]\displaystyle{ aa }[/math] is [math]\displaystyle{ 2^2(1+\frac 12)=6 }[/math] while the average position of the first occurrence of [math]\displaystyle{ ab }[/math] is [math]\displaystyle{ 2^2(1)=4 }[/math]. Intuitively, the fact that the first occurrence of [math]\displaystyle{ aa }[/math] tends to come later than the first occurrence of [math]\displaystyle{ ab }[/math] can be explained in two ways:
- We can consider, for each position [math]\displaystyle{ p }[/math], what the requirements are for the first occurrence of [math]\displaystyle{ w }[/math] to be at [math]\displaystyle{ p }[/math].
  - In both cases, the first occurrence of [math]\displaystyle{ w }[/math] can be at position 1 in only one way: [math]\displaystyle{ s }[/math] must start with [math]\displaystyle{ w }[/math]. This has probability [math]\displaystyle{ \frac14 }[/math] for both considered values of [math]\displaystyle{ w }[/math].
  - The first occurrence of [math]\displaystyle{ ab }[/math] is at position 2 if the prefix of [math]\displaystyle{ s }[/math] of length 3 is [math]\displaystyle{ aab }[/math] or [math]\displaystyle{ bab }[/math]. However, the first occurrence of [math]\displaystyle{ aa }[/math] is at position 2 if and only if the prefix of [math]\displaystyle{ s }[/math] of length 3 is [math]\displaystyle{ baa }[/math]. (Note that the first occurrence of [math]\displaystyle{ aa }[/math] in [math]\displaystyle{ aaa }[/math] is at position 1.)
  - In general, the number of prefixes of length [math]\displaystyle{ p+1 }[/math] such that the first occurrence of [math]\displaystyle{ w }[/math] is at position [math]\displaystyle{ p }[/math] is smaller for [math]\displaystyle{ w=aa }[/math] than for [math]\displaystyle{ w=ab }[/math]. This explains why, on average, the first [math]\displaystyle{ aa }[/math] arrives later than the first [math]\displaystyle{ ab }[/math].
- We can also consider the fact that the average number of occurrences of [math]\displaystyle{ w }[/math] in a random string of length [math]\displaystyle{ l }[/math] is [math]\displaystyle{ |A|^{l-n} }[/math]. This number is independent of the autocorrelation polynomial. An occurrence of [math]\displaystyle{ w }[/math] may overlap another occurrence in different ways; more precisely, each 1 in the autocorrelation vector corresponds to a way for an occurrence to overlap another. Since overlapping lets many occurrences of [math]\displaystyle{ w }[/math] be packed close together while the average number of occurrences stays the same, the distance between two non-overlapping occurrences must be greater when the autocorrelation vector contains many 1's. A small simulation sketch below illustrates the resulting average waiting times.
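A quick Monte Carlo sketch can check these figures empirically, reading the "position" of the first occurrence as the index of its last letter, i.e. the number of letters drawn until the word has fully appeared. The trial count and seed below are arbitrary choices, and the printed averages only approximate the exact values 6 and 4.

```python
# Empirical check of the average waiting times for aa and ab over {a, b}.
import random

def simulated_waiting_time(w, alphabet, trials=100_000, seed=1):
    """Average number of letters drawn until w first appears in full."""
    rng = random.Random(seed)
    n, total = len(w), 0
    for _ in range(trials):
        recent, steps = "", 0
        while True:
            recent += rng.choice(alphabet)
            steps += 1
            if recent.endswith(w):
                break
            recent = recent[-(n - 1):] if n > 1 else ""  # keep only the useful suffix
        total += steps
    return total / trials

print(simulated_waiting_time("aa", "ab"))  # close to 6
print(simulated_waiting_time("ab", "ab"))  # close to 4
```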
Ordinary generating functions
Autocorrelation polynomials allow simple expressions to be given for the ordinary generating functions (OGFs) of many natural counting questions. A short sketch after the following list expands the first of these series for a concrete word.
- The OGF of the language of words not containing [math]\displaystyle{ w }[/math] is [math]\displaystyle{ \frac{c(z)}{z^n+(1-|A|z)c(z)} }[/math].
- The OGF of the language of words containing [math]\displaystyle{ w }[/math] is [math]\displaystyle{ \frac{z^n}{(1-|A|z)(z^n+(1-|A|z)c(z))} }[/math].
- The OGF of the language of words containing a single occurrence of [math]\displaystyle{ w }[/math], at the end of the word, is [math]\displaystyle{ \frac{z^n}{z^{n}+(1-|A|z)c(z)} }[/math].
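As an illustration of the first formula, the following sketch expands the OGF of words avoiding [math]\displaystyle{ w }[/math] as a power series and cross-checks the coefficients by brute force for small lengths. The helper names are again illustrative, with autocorrelation_vector as defined earlier; for [math]\displaystyle{ w=aa }[/math] over a binary alphabet the coefficients are the familiar Fibonacci-like counts.

```python
# Expand c(z) / (z^n + (1 - |A| z) c(z)) as a power series; the coefficient
# of z^l counts the words of length l over A that avoid w.
from itertools import product

def autocorrelation_vector(w):
    # Same illustrative helper as above.
    n = len(w)
    return [1 if w[: n - i] == w[i:] else 0 for i in range(n)]

def count_avoiding(w, alphabet_size, max_len):
    """Coefficients s_0..s_{max_len} of the OGF of words avoiding w."""
    k, n = alphabet_size, len(w)
    c = autocorrelation_vector(w)
    num = (c + [0] * (max_len + 1))[: max_len + 1]          # c(z)
    den = [0] * (max_len + 1)
    for i, ci in enumerate(c):                              # (1 - k z) c(z)
        den[i] += ci
        if i + 1 <= max_len:
            den[i + 1] -= k * ci
    if n <= max_len:
        den[n] += 1                                         # + z^n
    series = []                                             # power-series division
    for l in range(max_len + 1):
        coeff = num[l] - sum(den[j] * series[l - j] for j in range(1, l + 1))
        series.append(coeff // den[0])                      # den[0] = c_0 = 1
    return series

def count_avoiding_brute_force(w, alphabet, max_len):
    return [sum(1 for t in product(alphabet, repeat=l) if w not in "".join(t))
            for l in range(max_len + 1)]

print(count_avoiding("aa", 2, 8))                 # [1, 2, 3, 5, 8, 13, 21, 34, 55]
print(count_avoiding_brute_force("aa", "ab", 8))  # the same counts
```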
References
- Flajolet, Philippe; Sedgewick, Robert (2010). Analytic Combinatorics. New York: Cambridge University Press. pp. 60–61. ISBN 978-0-521-89806-5.
- Rosen, Ned. "Expected waiting times for strings of coin flips". https://www2.bc.edu/ned-rosen/public/CoinFlips.pdf.
- Odlyzko, A. M.; Guibas, L. J. (1981). "String overlaps, pattern matching, and nontransitive games". Journal of Combinatorial Theory Series A 30 (2): 183–208. doi:10.1016/0097-3165(81)90005-4.
Original source: https://en.wikipedia.org/wiki/Autocorrelation_(words)