Bayesian decoding of birdsong from auditory midbrain neurons

Alexandro D. Ramirez, Joseph W. Schumacher, Ana Calabrese, D. Schneider, Yashar Ahmadian, Sarah Woolley, Liam Paninski

Zebra finch auditory midbrain neurons have receptive fields that are tuned to specific spectrotemporal patterns of birdsong, an acoustically complex signal used for songbird social communication [1], [3]. To better understand the neural coding that leads to auditory perception, we ask how well birdsong can be decoded from the spike trains produced by auditory neurons. In particular, we ask whether these cells can be used to reconstruct stimulus features relevant for song discrimination and identification. Neurons in multiple zebra finch brain areas respond to songs, suggesting that spike trains from these areas can contribute additional prior information for song detection. It is therefore also interesting to examine, in general, how 'prior' information can contribute to stimulus estimation. We address these questions within a Bayesian framework (for a detailed discussion see [2]), calculating maximum a posteriori (MAP) estimates of song spectrograms given single and multiple spike trains from zebra finch auditory midbrain neurons. The distribution of spike responses given stimuli is modeled with a generalized linear model (GLM). The linear stimulus filtering properties of the model are incorporated via a spectrotemporal receptive field (STRF), and history-dependent spike effects are incorporated via a spike-history filter. Both of these parameters are estimated from auditory midbrain spiking data by penalized maximum likelihood under the GLM. Using this encoding model, we study whether these spike trains carry enough information for faithful stimulus reconstructions under a weak, Gaussian white-noise prior.
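The encoding and decoding steps described above can be illustrated with a small numerical sketch. This is a minimal toy example under strong simplifying assumptions, not the authors' implementation: the STRF is reduced to a single time lag, the spike-history filter is omitted, only the white-noise Gaussian prior case is shown, and all names (`k`, `neg_log_post`, etc.) are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Toy Poisson-GLM encoding model: rate_t = exp(k . x_t + b),
# where x_t is one spectrogram column and k is a one-lag "STRF".
rng = np.random.default_rng(0)
T, F = 50, 8                       # time bins, frequency channels
k = rng.normal(0, 0.3, F)          # simplified STRF weights
b = -1.0                           # baseline log-rate

x_true = rng.normal(0, 1, (T, F))  # "spectrogram" drawn from the prior
rates = np.exp(x_true @ k + b)
y = rng.poisson(rates)             # simulated spike counts

# MAP decoding: minimize the negative log posterior, i.e. Poisson
# negative log likelihood plus a Gaussian white-noise prior penalty.
s2 = 1.0                           # prior variance

def neg_log_post(x_flat):
    x = x_flat.reshape(T, F)
    u = x @ k + b
    nll = np.sum(np.exp(u) - y * u)     # Poisson NLL (up to a constant)
    nlp = 0.5 * np.sum(x ** 2) / s2     # Gaussian prior term
    return nll + nlp

# The objective is convex, so any local optimum is the global MAP.
res = minimize(neg_log_post, np.zeros(T * F), method="L-BFGS-B")
x_map = res.x.reshape(T, F)
```

With a single neuron the reconstruction is confined to the direction of `k`; in the abstract's setting, pooling many neurons with diverse STRFs (and replacing the white-noise prior with the correlated AR prior) is what makes faithful spectrogram reconstruction possible.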
We show how prior information about spectrotemporal song components can improve song estimation by comparing our weak-prior results with MAP estimates under a correlated Gaussian prior designed to capture local spectrotemporal correlations. We use a separable covariance matrix and determine spectral correlations empirically using forty seconds of song recorded from twenty birds. Temporal correlations are found by fitting an autoregressive (AR) model to this data set. Using an AR model allows us to determine MAP estimates efficiently, in linear time, while capturing the local correlation structure. Because of the concavity properties of our likelihood and prior, the posterior has a unique maximum [2]. Our results show how prior spectrotemporal information can greatly aid song estimation when decoding with small numbers of auditory midbrain neurons. We are currently examining the distinguishability of spike trains elicited by different song syllables using a recently devised spike-distance metric based on Bayesian estimation. This analysis will allow us to examine which patterns of spikes are important for identifying different song syllables.

References

[1] Woolley SM, Gill PR, Fremouw T, Theunissen FE. Functional groups in the avian auditory system. Journal of Neuroscience, 29(9):2780–2793, 2009.

[2] Pillow J, Ahmadian Y, Paninski L. Model-based decoding, information estimation, and change-point detection in multi-neuron spike trains. Neural Computation (under review), 2009.

[3] Woolley SMN, Fremouw TE, Hsu A, Theunissen FE. Tuning for spectro-temporal modulations as a mechanism for auditory discrimination of natural sounds. Nature Neuroscience, 8(10):1371–1379, 2005.