Maximally informative dimensions

Maximally informative dimensions is a dimensionality reduction technique used in the statistical analysis of neural responses. Specifically, it is a way of projecting a stimulus onto a low-dimensional subspace so that as much information as possible about the stimulus is preserved in the neural response. It is motivated by the fact that natural stimuli are typically confined by their statistics to a lower-dimensional space than that spanned by white noise,[1] but correctly identifying this subspace using traditional techniques is complicated by the correlations that exist within natural images. Within this subspace, stimulus-response functions may be either linear or nonlinear. The idea was originally developed by Tatyana Sharpee, Nicole C. Rust, and William Bialek in 2003.[2]

Mathematical formulation

Neural stimulus-response functions are typically given as the probability of a neuron generating an action potential, or spike, in response to a stimulus [math]\displaystyle{ \mathbf{s} }[/math]. The goal of maximally informative dimensions is to find a small relevant subspace of the much larger stimulus space that accurately captures the salient features of [math]\displaystyle{ \mathbf{s} }[/math]. Let [math]\displaystyle{ D }[/math] denote the dimensionality of the entire stimulus space and [math]\displaystyle{ K }[/math] the dimensionality of the relevant subspace, such that [math]\displaystyle{ K \ll D }[/math]. We let [math]\displaystyle{ \{\mathbf{v}^K\} }[/math] denote the basis of the relevant subspace, and [math]\displaystyle{ \mathbf{s}^K }[/math] the projection of [math]\displaystyle{ \mathbf{s} }[/math] onto [math]\displaystyle{ \{ \mathbf{v}^K \} }[/math]. Using Bayes' theorem, we can write the probability of a spike given a stimulus:

[math]\displaystyle{ P(spike|\mathbf{s}^K) = P(spike)f(\mathbf{s}^K) }[/math]

where

[math]\displaystyle{ f(\mathbf{s}^K) = \frac{P(\mathbf{s}^K|spike)}{P(\mathbf{s}^K)} }[/math]

is some nonlinear function of the projected stimulus.
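
The nonlinearity [math]\displaystyle{ f }[/math] need not be modeled parametrically: for a candidate direction it can be read off the data as a ratio of histograms of the projected stimulus values. A minimal sketch in Python/NumPy, taking for concreteness a one-dimensional subspace and assuming a hypothetical array x of projections of each presented stimulus and a boolean array spiked marking spike-eliciting stimuli (the function name and bin count are illustrative, not part of the original method):

<syntaxhighlight lang="python">
import numpy as np

def estimate_nonlinearity(x, spiked, n_bins=25):
    """Estimate f(x) = P(x|spike) / P(x) as a ratio of normalized histograms.

    x      : projections of each presented stimulus onto a candidate direction
    spiked : boolean array, True where the stimulus elicited a spike
    """
    edges = np.histogram_bin_edges(x, bins=n_bins)
    p_all, _ = np.histogram(x, bins=edges, density=True)            # P(x)
    p_spike, _ = np.histogram(x[spiked], bins=edges, density=True)  # P(x|spike)
    with np.errstate(divide="ignore", invalid="ignore"):
        f = np.where(p_all > 0, p_spike / p_all, 0.0)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, f
</syntaxhighlight>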

In order to choose the optimal [math]\displaystyle{ \{ \mathbf{v}^K \} }[/math], we compare the prior stimulus distribution [math]\displaystyle{ P(\mathbf{s}) }[/math] with the spike-triggered stimulus distribution [math]\displaystyle{ P(\mathbf{s}|spike) }[/math] using the Shannon information. The average information (averaged across all presented stimuli) per spike is given by

[math]\displaystyle{ I_{spike} = \sum_{\mathbf{s}} P(\mathbf{s}|spike) \log_2 \left[ \frac{P(\mathbf{s}|spike)}{P(\mathbf{s})} \right] }[/math].[3]
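
For intuition, this sum is easy to evaluate on a small discrete ensemble. The toy computation below uses made-up probabilities (a uniform prior over four stimuli and a skewed spike-triggered distribution; all numbers are hypothetical):

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical discrete ensemble: uniform prior P(s) over four stimuli,
# and a skewed spike-triggered distribution P(s|spike).
p_s       = np.array([0.25, 0.25, 0.25, 0.25])
p_s_spike = np.array([0.05, 0.15, 0.30, 0.50])

# I_spike = sum_s P(s|spike) * log2[ P(s|spike) / P(s) ]
i_spike = np.sum(p_s_spike * np.log2(p_s_spike / p_s))
print(i_spike)  # ~0.35 bits per spike
</syntaxhighlight>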

Now consider a [math]\displaystyle{ K = 1 }[/math] dimensional subspace defined by a single direction [math]\displaystyle{ \mathbf{v} }[/math]. The average information conveyed by a single spike about the projection [math]\displaystyle{ x = \mathbf{s} \cdot \mathbf{v} }[/math] is

[math]\displaystyle{ I(\mathbf{v}) = \int dx\, P_{\mathbf{v}}(x|spike) \log_2 \left[ \frac{P_{\mathbf{v}}(x|spike)}{P_{\mathbf{v}}(x)} \right] }[/math],

where the probability distributions are approximated by a measured data set via [math]\displaystyle{ P_{\mathbf{v}}(x|spike) = \langle \delta(x - \mathbf{s} \cdot \mathbf{v}) |spike \rangle_{\mathbf{s}} }[/math] and [math]\displaystyle{ P_{\mathbf{v}}(x) = \langle \delta(x - \mathbf{s} \cdot \mathbf{v})\rangle_{\mathbf{s}} }[/math], i.e., each presented stimulus is represented by a scaled Dirac delta function, and the probability distributions are created by averaging over all spike-eliciting stimuli in the former case, or over the entire presented stimulus set in the latter. For a given dataset, the average information is a function only of the direction [math]\displaystyle{ \mathbf{v} }[/math]. Under this formulation, the relevant subspace of dimension [math]\displaystyle{ K = 1 }[/math] would be defined by the direction [math]\displaystyle{ \mathbf{v} }[/math] that maximizes the average information [math]\displaystyle{ I(\mathbf{v}) }[/math].
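
A minimal numerical sketch of this estimator, assuming hypothetical NumPy arrays stims (one presented stimulus per row) and spiked (True where the corresponding stimulus elicited a spike); the histogram binning below is an illustrative choice, not the original paper's:

<syntaxhighlight lang="python">
import numpy as np

def information(v, stims, spiked, n_bins=25):
    """Estimate I(v) = integral dx P_v(x|spike) log2[ P_v(x|spike) / P_v(x) ].

    v      : candidate direction in the D-dimensional stimulus space
    stims  : (N, D) array of presented stimuli
    spiked : boolean array of length N marking spike-eliciting stimuli
    """
    x = stims @ (v / np.linalg.norm(v))  # projections x = s . v
    edges = np.histogram_bin_edges(x, bins=n_bins)
    p_all, _ = np.histogram(x, bins=edges, density=True)          # P_v(x)
    p_spk, _ = np.histogram(x[spiked], bins=edges, density=True)  # P_v(x|spike)
    dx = np.diff(edges)
    mask = (p_spk > 0) & (p_all > 0)
    return np.sum(p_spk[mask] * np.log2(p_spk[mask] / p_all[mask]) * dx[mask])
</syntaxhighlight>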

This procedure can readily be extended to a relevant subspace of dimension [math]\displaystyle{ K \gt 1 }[/math] by defining

[math]\displaystyle{ P_{\mathbf{v}^K}(\mathbf{x}|spike) = \langle \prod_{i=1}^K \delta(x_i - \mathbf{s} \cdot \mathbf{v}_i) |spike \rangle_{\mathbf{s}} }[/math]

and

[math]\displaystyle{ P_{\mathbf{v}^K}(\mathbf{x}) = \langle \prod_{i=1}^K \delta(x_i - \mathbf{s} \cdot \mathbf{v}_i) \rangle_{\mathbf{s}} }[/math]

and maximizing [math]\displaystyle{ I({\mathbf{v}^K}) }[/math].
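
Maximizing the information over directions is a nonconvex problem, as the information landscape generally has many local maxima; the original work therefore uses stochastic optimization (gradient ascent combined with simulated annealing). The sketch below sidesteps those details and simply runs a generic derivative-free optimizer from several random starts, reusing the information estimator sketched above; it illustrates only the [math]\displaystyle{ K = 1 }[/math] search, and the restart scheme is an assumption for illustration:

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

def most_informative_direction(stims, spiked, n_restarts=10, seed=0):
    """Toy search for the direction v maximizing I(v).

    Relies on the `information` estimator defined in the earlier sketch.
    I(v) is rugged, so we restart a derivative-free optimizer from several
    random directions and keep the best result.
    """
    rng = np.random.default_rng(seed)
    best_v, best_info = None, -np.inf
    for _ in range(n_restarts):
        v0 = rng.standard_normal(stims.shape[1])
        res = minimize(lambda v: -information(v, stims, spiked),
                       v0, method="Nelder-Mead")
        if -res.fun > best_info:
            best_info = -res.fun
            best_v = res.x / np.linalg.norm(res.x)
    return best_v, best_info
</syntaxhighlight>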

Importance

Maximally informative dimensions does not make any assumptions about the Gaussianity of the stimulus set, which is important because naturalistic stimuli tend to have non-Gaussian statistics. In this way the technique is more robust than other dimensionality reduction techniques, such as spike-triggered covariance analysis, whose estimates are guaranteed to be unbiased only for Gaussian stimulus ensembles.

References

  1. Field, D. J. "Relations between the statistics of natural images and the response properties of cortical cells." J. Opt. Soc. Am. A 4:2379-2394, 1987.
  2. Sharpee, Tatyana, Nicole C. Rust, and William Bialek. "Maximally informative dimensions: analyzing neural responses to natural signals." Advances in Neural Information Processing Systems (2003): 277-284.
  3. Brenner, N., S. P. Strong, R. Koberle, W. Bialek, and R. R. de Ruyter van Steveninck. "Synergy in a neural code." Neural Comp. 12:1531-1552, 2000.