Krichevsky–Trofimov estimator
In information theory, given an unknown stationary source π with alphabet A and a sample w drawn from π, the Krichevsky–Trofimov (KT) estimator produces an estimate [math]\displaystyle{ p_i(w) }[/math] of the probability of each symbol i ∈ A. The estimator is optimal in the sense that it asymptotically minimizes the worst-case regret. For a binary alphabet and a string w with m zeroes and n ones, the KT estimator [math]\displaystyle{ p_i(w) }[/math] is defined as:[1]
- [math]\displaystyle{ \begin{align} p_0(w) &= \frac{m + 1/2}{m + n + 1}, \\[5pt] p_1(w) &= \frac{n + 1/2}{m + n + 1}. \end{align} }[/math]
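The binary formulas above amount to adding a pseudocount of 1/2 to each symbol count. A minimal sketch in Python (exact arithmetic via `fractions`; the function name `kt_estimates` is illustrative, not from the source):

```python
from fractions import Fraction

def kt_estimates(w):
    """KT estimates (p_0(w), p_1(w)) for a binary string w.

    p_0 = (m + 1/2) / (m + n + 1),  p_1 = (n + 1/2) / (m + n + 1),
    where m and n count the zeroes and ones in w.
    """
    m = w.count('0')  # number of zeroes
    n = w.count('1')  # number of ones
    denom = Fraction(m + n + 1)
    p0 = (Fraction(m) + Fraction(1, 2)) / denom
    p1 = (Fraction(n) + Fraction(1, 2)) / denom
    return p0, p1
```

For example, the string `'0010'` has m = 3 and n = 1, giving p_0 = 7/10 and p_1 = 3/10; on the empty string both estimates are 1/2, reflecting the symmetric prior.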
This corresponds to the posterior mean of a Bernoulli parameter under a Beta(1/2, 1/2) prior (the Jeffreys prior). For a general alphabet, the estimate is the posterior mean of a Dirichlet–categorical model with all concentration parameters equal to 1/2.
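For a general alphabet, the Dirichlet(1/2, …, 1/2) posterior mean gives each symbol a pseudocount of 1/2, so the denominator becomes |w| + |A|/2. A sketch under that reading (the function name `kt_estimate` is illustrative):

```python
from collections import Counter
from fractions import Fraction

def kt_estimate(w, alphabet):
    """KT estimate over a general alphabet: posterior mean under a
    Dirichlet prior with all concentration parameters equal to 1/2.

    p_i = (count_i + 1/2) / (|w| + |A|/2)
    """
    counts = Counter(w)
    denom = Fraction(len(w)) + Fraction(len(alphabet), 2)
    return {a: (Fraction(counts[a]) + Fraction(1, 2)) / denom
            for a in alphabet}
```

With the binary alphabet this reduces to the formulas above, since |A|/2 = 1; e.g. `kt_estimate('aab', 'ab')` yields p_a = 5/8 and p_b = 3/8.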
References
- ↑ Krichevsky, R. E.; Trofimov, V. K. (1981). "The Performance of Universal Encoding". IEEE Trans. Inf. Theory IT-27 (2): 199–207. doi:10.1109/TIT.1981.1056331.
Original source: https://en.wikipedia.org/wiki/Krichevsky–Trofimov estimator