State-dependent information

In information theory, state-dependent information is the generic name given to the family of state-dependent measures whose expectation over states recovers the mutual information.

State-dependent information measures often appear in neuroscience applications.

Let X and Y be random variables and let y be a particular state (outcome) of Y. The state-dependent information between a random variable X and a state Y=y is written as I(X;Y=y). There are currently three known varieties of state-dependent information: specific-surprise, specific-information, and state-specific-information.

Specific-Surprise

The specific-surprise, $I_{ss}$, is defined by a Kullback–Leibler divergence,

$$I_{ss}(X;Y=y) \triangleq D_{KL}\!\left[ P_{X|y} \,\|\, P_X \right] .$$
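
As a concrete illustration, here is a minimal sketch in Python (the joint distribution, variable names, and the choice of bits are illustrative assumptions, not part of the definition). It computes $I_{ss}$ for each state of Y and checks the property from the introduction: averaging over p(y) recovers the mutual information I(X;Y).

```python
import numpy as np

# Hypothetical joint distribution p(x, y): rows index states of X,
# columns index states of Y (an illustrative assumption).
p_xy = np.array([[0.30, 0.10],
                 [0.05, 0.25],
                 [0.15, 0.15]])
p_x = p_xy.sum(axis=1)  # marginal p(x), the prior over X
p_y = p_xy.sum(axis=0)  # marginal p(y)

def specific_surprise(p_xy, y):
    """I_ss(X; Y=y) = D_KL[ P_{X|y} || P_X ], in bits."""
    p_x = p_xy.sum(axis=1)
    p_x_given_y = p_xy[:, y] / p_xy[:, y].sum()  # posterior over X given Y=y
    m = p_x_given_y > 0  # 0 * log(0/q) = 0 by convention
    return np.sum(p_x_given_y[m] * np.log2(p_x_given_y[m] / p_x[m]))

i_ss = np.array([specific_surprise(p_xy, y) for y in range(p_xy.shape[1])])
print(i_ss)  # one nonnegative value per state y

# Family property: the expectation over p(y) is the mutual information I(X;Y).
mi = np.sum(p_xy * np.log2(p_xy / np.outer(p_x, p_y)))
assert np.isclose(i_ss @ p_y, mi)
```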

As a special case of the chain rule for Kullback–Leibler divergences, specific-surprise follows the chain rule for variables. Using a third random variable $Z$, this is, specifically,

$$I_{ss}(X,Z;Y=y) = I_{ss}(X;Y=y) + I_{ss}(Z;Y=y \mid X) .$$
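
This identity can be checked numerically. In the sketch below (all names and the random joint distribution are assumptions for the demonstration), the conditional term $I_{ss}(Z;Y=y \mid X)$ is read as the average over $p(x \mid y)$ of $D_{KL}[P_{Z|x,y} \,\|\, P_{Z|x}]$, the reading consistent with the chain rule for Kullback–Leibler divergences.

```python
import numpy as np

def kl(a, b):
    """D_KL[a || b] in bits, with the 0 * log(0/q) = 0 convention."""
    a, b = np.ravel(a), np.ravel(b)
    m = a > 0
    return np.sum(a[m] * np.log2(a[m] / b[m]))

# Arbitrary strictly positive joint p(x, z, y) (an illustrative assumption).
rng = np.random.default_rng(0)
p = rng.random((3, 2, 2))
p /= p.sum()

y = 0
p_xz = p.sum(axis=2)                    # p(x, z)
p_xz_y = p[:, :, y] / p[:, :, y].sum()  # p(x, z | y)
p_x = p_xz.sum(axis=1)                  # p(x)
p_x_y = p_xz_y.sum(axis=1)              # p(x | y)

lhs = kl(p_xz_y, p_xz)                  # I_ss(X,Z; Y=y)
term_x = kl(p_x_y, p_x)                 # I_ss(X; Y=y)
# I_ss(Z; Y=y | X): average over p(x|y) of D_KL[ P_{Z|x,y} || P_{Z|x} ].
term_z = sum(p_x_y[x] * kl(p_xz_y[x] / p_x_y[x], p_xz[x] / p_x[x])
             for x in range(p.shape[0]))
assert np.isclose(lhs, term_x + term_z)
```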

Intuitively, specific-surprise is thought of as answering “how much did my beliefs about X change upon learning that Y=y?” It is zero when the beliefs do not change at all and, being a Kullback–Leibler divergence, it is never negative. Specific-surprise has also been called “Bayesian surprise”.

Specific-Information

The specific-information, $I_{si}$, is defined by a difference of entropies,

$$I_{si}(X;Y=y) \triangleq H(X) - H(X \mid Y=y) .$$
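
A matching sketch for the specific-information, under the same illustrative setup as before: the prior entropy of X minus the posterior entropy given Y=y, whose expectation over p(y) again recovers I(X;Y).

```python
import numpy as np

def H(p):
    """Shannon entropy in bits, with the 0 * log(0) = 0 convention."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical joint distribution p(x, y), as in the earlier sketch.
p_xy = np.array([[0.30, 0.10],
                 [0.05, 0.25],
                 [0.15, 0.15]])
p_x = p_xy.sum(axis=1)
p_y = p_xy.sum(axis=0)

def specific_information(p_xy, y):
    """I_si(X; Y=y) = H(X) - H(X | Y=y), in bits; may be negative."""
    p_x = p_xy.sum(axis=1)
    p_x_given_y = p_xy[:, y] / p_xy[:, y].sum()
    return H(p_x) - H(p_x_given_y)

i_si = np.array([specific_information(p_xy, y) for y in range(p_xy.shape[1])])

# Family property again: E_y[I_si] = H(X) - H(X|Y) = I(X;Y).
mi = np.sum(p_xy * np.log2(p_xy / np.outer(p_x, p_y)))
assert np.isclose(i_si @ p_y, mi)
```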

Specific-information follows the chain rule for states. Using a state $z \in Z$ of a third random variable $Z$, this is, specifically,

$$I_{si}(X;Y=y,Z=z) = I_{si}(X;Y=y) + I_{si}(X;Z=z \mid Y=y) .$$
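
With the conditional term read as $I_{si}(X;Z=z \mid Y=y) = H(X \mid Y=y) - H(X \mid Y=y, Z=z)$ (an assumption consistent with the definition above), the identity is a telescoping sum of entropy differences, as the following sketch verifies on a random joint distribution (names and numbers are illustrative).

```python
import numpy as np

def H(p):
    """Shannon entropy in bits."""
    p = np.ravel(p)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Arbitrary strictly positive joint p(x, y, z) (an illustrative assumption).
rng = np.random.default_rng(1)
p = rng.random((3, 2, 2))
p /= p.sum()

y, z = 0, 1
p_x = p.sum(axis=(1, 2))                            # p(x)
p_x_y = p[:, y, :].sum(axis=1) / p[:, y, :].sum()   # p(x | y)
p_x_yz = p[:, y, z] / p[:, y, z].sum()              # p(x | y, z)

lhs = H(p_x) - H(p_x_yz)       # I_si(X; Y=y, Z=z)
term_y = H(p_x) - H(p_x_y)     # I_si(X; Y=y)
term_z = H(p_x_y) - H(p_x_yz)  # I_si(X; Z=z | Y=y), the telescoping remainder
assert np.isclose(lhs, term_y + term_z)
```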

Specific-information is interpreted as answering “how did the uncertainty about X change upon learning that Y=y?” Unlike the specific-surprise, it can be positive or negative, since an observation can either decrease or increase the uncertainty about X. When X follows a uniform distribution, $I_{ss}$ and $I_{si}$ are equivalent.
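
Both claims can be verified numerically; the joint distributions below are illustrative assumptions. The first makes $I_{si}$ negative, because observing the state flattens an initially peaked belief about X; the second has a uniform marginal over X, so $I_{ss}$ and $I_{si}$ coincide for every state.

```python
import numpy as np

def H(p):
    """Shannon entropy in bits."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# The marginal p(X) = [0.6, 0.2, 0.2] is peaked; the posterior given y=1
# is flatter, so the observation *increases* uncertainty about X.
p_xy = np.array([[0.50, 0.10],
                 [0.05, 0.15],
                 [0.05, 0.15]])
p_x = p_xy.sum(axis=1)
p_x_y1 = p_xy[:, 1] / p_xy[:, 1].sum()
print(H(p_x) - H(p_x_y1))  # I_si(X; Y=1) is approximately -0.19 bits

# With a uniform marginal over X (each row sums to 1/3), I_ss and I_si
# coincide for every state y, since D_KL[q || uniform] = log2(n) - H(q).
q_xy = np.array([[0.20, 1/3 - 0.20],
                 [0.10, 1/3 - 0.10],
                 [0.30, 1/3 - 0.30]])
q_x = q_xy.sum(axis=1)
for y in range(2):
    q_x_y = q_xy[:, y] / q_xy[:, y].sum()
    i_ss = np.sum(q_x_y * np.log2(q_x_y / q_x))  # KL against the uniform prior
    i_si = H(q_x) - H(q_x_y)
    assert np.isclose(i_ss, i_si)
```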

State-Specific-Information

The state-specific information, $I_{ssi}$, is a synonym for the pointwise mutual information.
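
Written out using the standard definition of pointwise mutual information, this is

$$I_{ssi}(X=x;Y=y) = \log \frac{p(x,y)}{p(x)\,p(y)} = \log \frac{p(x \mid y)}{p(x)} .$$

Note that, unlike the other two measures, it depends on a state of X as well as a state of Y. Its expectation over the joint distribution p(x,y) is the mutual information I(X;Y), in line with the family property noted in the introduction.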
