Log-concavity results on Gaussian
process methods for supervised and unsupervised
learning
Neural Information Processing Systems 2004
Log-concavity is an important property in the context of optimization,
Laplace approximation, and sampling, and Gaussian process methods have
recently become popular for classification, regression, density
estimation, and point process intensity estimation. Here we prove
that the predictive densities corresponding to each of these
applications are log-concave, given any observed data. We also prove
that the likelihood is log-concave in the hyperparameters controlling
the mean function of the Gaussian prior in the density and point
process intensity estimation cases, and the mean, covariance, and
observation noise parameters in the classification and regression
cases; the proof leads to a useful parameterization of these
hyperparameters, indicating a suitably large class of priors for which
the corresponding maximum {\it a posteriori} problem is log-concave.
Finally, we discuss a modification of the Gaussian process idea that
yields the log-concavity property in somewhat greater generality for
the density and point process estimation cases.
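As a hedged illustration (not from the paper itself): a density $f$ is log-concave when $\log f$ is concave. In the GP regression case, the predictive density is Gaussian, so its log has constant negative second derivative $-1/\sigma^2$ and is concave everywhere. The sketch below builds a toy GP regression predictive density with an RBF kernel and checks this numerically; all kernel choices and numerical values here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy GP regression with an RBF (squared-exponential) kernel.
def rbf(a, b, ell=1.0, sf=1.0):
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

X = np.array([-1.0, 0.0, 1.0])   # training inputs (illustrative)
y = np.sin(X)                    # training targets
noise = 0.1                      # observation noise variance

# Standard GP regression predictive mean and variance at a test point.
K = rbf(X, X) + noise * np.eye(len(X))
xs = np.array([0.5])
ks = rbf(xs, X)
kss = rbf(xs, xs)

alpha = np.linalg.solve(K, y)
mu = float(ks @ alpha)
var = float(kss - ks @ np.linalg.solve(K, ks.T)) + noise

# The predictive density is Gaussian; its log-density is a concave
# quadratic, so all discrete second differences on a grid are negative.
grid = np.linspace(mu - 3.0, mu + 3.0, 201)
logp = -0.5 * (grid - mu) ** 2 / var - 0.5 * np.log(2 * np.pi * var)
second_diff = np.diff(logp, n=2)
print(np.all(second_diff < 0))
```

This only exercises the (well-known) Gaussian case; the paper's contribution is that the analogous log-concavity statements hold for the non-Gaussian predictive densities arising in classification, density estimation, and point process intensity estimation.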