Paper ID: 432

Diversity of efficient coding solutions for a population of noisy linear neurons

Eizaburo Doi (edoi@nyu.edu), Liam Paninski (liam@stat.columbia.edu), Eero Simoncelli (eero.simoncelli@nyu.edu)

Efficient coding is a well-known principle for explaining early sensory transformations (Barlow, 1961). But even in the "classical" case of a linear neural population with Gaussian input and output noise, the optimal solution depends heavily on the constraints imposed on the problem. These can include constraints on output capacity (necessary to prevent the solution from diverging) and on the number of neurons in the population. With the exception of Campa et al. (1995), previous literature assumes that the number of neurons equals the input dimensionality and, furthermore, that the receptive fields are identical (i.e., the population performs a convolution). In addition, previously published examples are based on a single capacity constraint, such as the variance of the outputs (loosely analogous to average spike rate) (Atick & Redlich, 1990; Atick, Li, & Redlich, 1990; van Hateren, 1992) or the norm of the filters (analogous to the sum of squared synaptic weights) (Campa et al., 1995). Each of these potential constraints has some approximate mapping onto biologically relevant (i.e., metabolic) costs, implying that a complete formulation should include a generalized cost function that combines power, weights, and population size.

Toward this end, we examine a more general formulation of the efficient coding problem. We assume a discrete, finite input signal that is Gaussian with known covariance structure (as in natural signals) and corrupted by additive white Gaussian noise. We assume a neural population of arbitrary size with linear receptive fields (RFs), each of whose outputs is corrupted by additive white Gaussian noise.
And finally, we assume a cost function that is a linear combination of the number of cells, the L2 norm of the RF weights, and the output power. Given these constraints, we solve for a population of RFs that maximizes the information transmitted about the stimulus. The problem is convex and can thus be solved with standard optimization methods.

We note several important attributes of this formulation. First, it can yield both over- and under-complete solutions, as required to explain biological systems. In the retina, for example, previous efforts have assumed a convolutional solution (a homogeneous population of RFs, one per input cone) (Atick & Redlich, 1990; Atick, Li, & Redlich, 1990; van Hateren, 1992), but the ratio of cones to ganglion cells varies dramatically with eccentricity. Second, the joint cost function allows the theory to automatically select the optimal population size. Finally, even in the case of a convolutional population, the spectral properties of the optimal solution depend critically on the choice of cost function, and can assume shapes ranging from low-pass, to band-pass, to high-pass. For example, although previous literature (Atick & Redlich, 1990) obtained low-pass solutions when the input SNR was low (i.e., for low-contrast stimuli), the inclusion of a penalty on RF weights allows low-pass (and highly redundant) solutions even when the input SNR is high.

We conclude that identifying the relative significance of different costs in biological systems is critical to testing the efficient coding hypothesis.
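The trade-off described above can be sketched numerically. For a convolutional population with white input and output noise, the Gaussian-channel objective separates across frequency channels in the signal's Fourier basis, so the problem reduces to choosing a squared gain per channel. The following is a minimal illustration, not the authors' implementation: all parameter values (the 1/f^2 signal spectrum, noise variances, and cost weights) are arbitrary assumptions, and the population-size cost is omitted since it does not arise in a per-channel sketch.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def channel_info(g, lam, s_in2, s_out2):
    """Mutual information (nats) carried by one frequency channel.

    g: squared filter gain; lam: signal power at this frequency;
    s_in2, s_out2: input- and output-noise variances.
    """
    return 0.5 * np.log((g * (lam + s_in2) + s_out2) / (g * s_in2 + s_out2))

def optimal_gain(lam, s_in2, s_out2, w_cost, p_cost):
    """Squared gain maximizing info minus weight and output-power costs.

    The weight cost is proportional to g (sum of squared filter weights)
    and the power cost to the output signal variance g * (lam + s_in2).
    """
    def neg_obj(g):
        return -(channel_info(g, lam, s_in2, s_out2)
                 - w_cost * g
                 - p_cost * g * (lam + s_in2))
    return minimize_scalar(neg_obj, bounds=(0.0, 1e4), method='bounded').x

# Arbitrary example: a 1/f^2 signal spectrum over 32 spatial frequencies.
freqs = np.arange(1, 33)
spectrum = 1.0 / freqs ** 2
gains = np.array([optimal_gain(lam, s_in2=1e-3, s_out2=1e-2,
                               w_cost=1e-3, p_cost=1e-3)
                  for lam in spectrum])
total_info = sum(channel_info(g, lam, 1e-3, 1e-2)
                 for g, lam in zip(gains, spectrum))
```

Varying w_cost and p_cost reshapes the resulting gain profile across frequencies, which is the sense in which the spectral shape of the optimal filter depends on the choice of cost function.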