Computing the information rate in state-space models
In preparation; presented at the COSYNE meeting
It has long been argued that many key questions in neuroscience can
best be posed in information-theoretic terms; the efficient coding
hypothesis discussed by Attneave, Barlow, Atick, and others represents
perhaps the best-known example. Answering these questions
quantitatively requires us to compute the Shannon information rate of
neural channels, whether numerically using experimental data or
analytically in mathematical models. The nonlinearity and
non-Gaussianity of neural responses have complicated these
calculations, particularly for stimulus distributions with
temporal dynamics and nontrivial correlation structure.
In this work we discuss methods that allow us to compute the
information rate analytically in some cases. In our approach the
stimulus is modeled as a temporally correlated stationary process.
Analytical results are available in both the high and low
signal-to-noise ratio (SNR) regimes: the former corresponds to the case in
which a large population of neurons responds strongly to the stimulus,
while the latter implies that the available neurons are only weakly
tuned to the stimulus properties, or equivalently that the stimulus
magnitude is relatively small.
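For orientation (this is the standard linear-Gaussian baseline, not a result quoted from the abstract): if the stimulus has power spectrum $S_x(\omega)$ and the effective observation noise has spectrum $S_n(\omega)$, the information rate of the stationary Gaussian channel is

\[
I \;=\; \frac{1}{4\pi}\int_{-\pi}^{\pi}\log\!\left(1+\frac{S_x(\omega)}{S_n(\omega)}\right)\,d\omega ,
\]

and the two regimes follow from the expansions $\log(1+u)\approx\log u$ at high SNR and $\log(1+u)\approx u$ at low SNR, where the rate is governed by the integrated spectral SNR.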
The intermediate SNR regime, in which observations from many
weakly-tuned neurons are available, is perhaps of most
neurophysiological relevance; here we may employ a certain Gaussian
limit (distinct from the usual Fisher information limit used to
compute the high-SNR limit) to again obtain the information rate
analytically. This Gaussian limit has the form of a simple Kalman
filter model, and sheds light on the approximate sufficient statistic
in this problem; analysis of this statistic, in turn, yields
dramatic reductions in the time needed to explore these
information-theoretic quantities numerically.
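To make the Kalman filter connection concrete, here is a minimal numerical sketch; the model and all parameter values are illustrative assumptions, not taken from the paper. For a linear-Gaussian state-space model, the information rate reduces to half the log-ratio of the steady-state innovation covariance to the observation noise covariance, which the code obtains by iterating the Riccati recursion to convergence.

import numpy as np

# Minimal sketch: information rate of a linear-Gaussian state-space model.
# All parameters below are illustrative assumptions, not values from the paper.
#   x_{t+1} = A x_t + w_t,  w_t ~ N(0, Q)   (temporally correlated stimulus)
#   y_t     = C x_t + v_t,  v_t ~ N(0, R)   (noisy observations)
A = np.array([[0.95]])  # slow stimulus dynamics
Q = np.array([[1.0]])   # stimulus innovation variance
C = np.array([[1.0]])   # observation gain (controls the SNR)
R = np.array([[4.0]])   # observation noise variance

# Iterate the Riccati recursion to the steady-state one-step
# prediction covariance P of the Kalman filter.
P = Q.copy()
for _ in range(1000):
    S = C @ P @ C.T + R                 # innovation covariance
    K = P @ C.T @ np.linalg.inv(S)      # Kalman gain
    P = A @ (P - K @ C @ P) @ A.T + Q   # updated prediction covariance

# For a stationary Gaussian channel, the information rate (nats/step) is
#   I = (1/2) [log det(C P C' + R) - log det R],
# i.e., the entropy rate of y minus the entropy rate of the noise.
S = C @ P @ C.T + R
rate = 0.5 * (np.linalg.slogdet(S)[1] - np.linalg.slogdet(R)[1])
print(f"information rate: {rate:.4f} nats per time step")

Raising the observation gain C (or shrinking R) sweeps this toy model from the low-SNR to the high-SNR regime discussed above.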