How do temporal stimulus correlations influence the performance of population codes? It has long been argued that many key questions in neuroscience are best posed in information-theoretic terms; the efficient coding hypothesis discussed by Attneave, Barlow, Atick, and others is perhaps the best-known example. Answering these questions quantitatively requires us to compute the Shannon information rate of neural channels, whether numerically from experimental data or analytically in mathematical models. The non-linearity and non-Gaussianity of neural responses have complicated these calculations, particularly for stimulus distributions with temporal dynamics and nontrivial correlation structure. In this work we extend a method proposed in [1] that allows us to compute the information rate analytically in some cases. In our approach the stimulus is modeled as a temporally correlated stationary process. Analytical results are available in both the high and low signal-to-noise (SNR) regimes: the former corresponds to the case in which a large population of neurons responds strongly to the stimulus, while the latter implies that the available neurons are only weakly tuned to the stimulus properties, or equivalently that the stimulus magnitude is relatively small. In intermediate SNR regimes, good numerical approximations to the information rate are available via efficient forward-backward decoding methods and Laplace approximations [2,3]. We find that as the number of neurons grows without bound, the mutual information for temporally correlated stimuli takes a simple form depending only on the Fisher information, paralleling the classical connection between Fisher and Shannon information for static stimuli [4].
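To make the information-rate computation concrete, consider the simplest tractable case: a temporally correlated stationary stimulus observed through additive Gaussian noise. The sketch below is our own toy construction (an AR(1) stimulus and a linear-Gaussian channel standing in for a high-SNR population readout; all parameter values are illustrative), not the neural model analyzed above. In this setting the information rate follows in closed form from the steady-state Kalman innovation variance.

```python
import numpy as np

# Toy model (illustrative, not the paper's neural model): a stationary AR(1)
# stimulus s_t = a*s_{t-1} + w_t, w_t ~ N(0, Q), observed through a
# linear-Gaussian channel y_t = s_t + v_t, v_t ~ N(0, R).
a, Q, R = 0.95, 0.1, 0.5  # AR coefficient, process noise var, observation noise var

# Steady-state one-step prediction variance from the scalar Riccati recursion,
# initialized at the stationary prior variance of the stimulus.
P = Q / (1.0 - a**2)
for _ in range(10_000):
    P = a**2 * P * R / (P + R) + Q

# Information rate (nats per time step) between the stimulus and observation
# processes: I = h(y) - h(y|s), which for Gaussian processes reduces to
# (1/2) log( innovation variance / observation noise variance ).
info_rate = 0.5 * np.log((P + R) / R)
print(f"steady-state prediction variance: {P:.4f}")
print(f"information rate: {info_rate:.4f} nats/step")
```

Raising the SNR (shrinking R) or weakening the temporal correlations (shrinking a) both increase this rate, which makes the toy model a convenient sanity check for the asymptotic results discussed above.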
For finite-size populations we are able to calculate the performance difference between a decoder based solely on temporally instantaneous observations and one that integrates observations over time; interestingly, in both the high- and low-SNR regimes, a simple decoder which includes only temporally local neural responses is able to optimally extract information about the stimulus.

References:
[1] Barbieri, R. et al. (2004). Dynamic analyses of information encoding in neural ensembles. Neural Computation.
[2] Paninski, L. et al. (2009). A new look at state-space models for neural data. J. Comput. Neurosci.
[3] Pillow, J. et al. (2009). Model-based decoding, information estimation, and change-point detection in multi-neuron spike trains. Under review.
[4] Brunel, N. and Nadal, J.-P. (1998). Mutual information, Fisher information, and population coding. Neural Computation.
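The decoder comparison above can be illustrated in the same toy linear-Gaussian setting (our own construction with hypothetical parameter values, not the paper's neural model): we compare the posterior variance of an "instantaneous" decoder that sees only the current observation y_t against a causal temporal decoder (a steady-state Kalman filter). As the observation noise R shrinks, i.e., as SNR grows, the extra information gained by temporal integration vanishes, consistent with the claim that a temporally local decoder suffices at high SNR.

```python
import numpy as np

# Toy comparison (illustrative assumptions): AR(1) stimulus with coefficient a
# and process noise variance Q, observed as y_t = s_t + N(0, R).
a, Q = 0.95, 0.1
sigma_s2 = Q / (1.0 - a**2)  # stationary stimulus variance

gains = []
for R in [1.0, 0.1, 0.01, 0.001]:
    # Steady-state one-step prediction variance (scalar Riccati iteration).
    P = sigma_s2
    for _ in range(10_000):
        P = a**2 * P * R / (P + R) + Q
    post_temporal = P * R / (P + R)               # Kalman filtering variance
    post_instant = sigma_s2 * R / (sigma_s2 + R)  # decoder using y_t alone
    # Extra information (nats) the temporal decoder extracts about s_t.
    gain = 0.5 * np.log(post_instant / post_temporal)
    gains.append(gain)
    print(f"R={R:<6} temporal-integration gain = {gain:.4f} nats")
```

In this sketch the gain is always positive but decays toward zero as R decreases, so the instantaneous decoder becomes asymptotically optimal in the high-SNR limit.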