European Institute for Statistics, Probability, Stochastic Operations Research
and their Applications


Pseudo-Marginal Monte Carlo for the Bayesian Gaussian Process Latent Variable Model
Charles Gadd, University of Warwick

Gaussian process latent variable models (GPLVMs) can be viewed as a non-linear extension of the dual of probabilistic principal component analysis, where in the dual we instead optimise the latent variables and marginalise the transformation matrix. In recent years these models have emerged as a powerful tool for modelling multi-dimensional data. One variant is the Bayesian GPLVM (BGPLVM), which additionally marginalises the latent variables using variational Bayes and variational sparse GP regression. We focus on a further generalisation, the dynamic BGPLVM for supervised learning, which incorporates general input information through a GP prior.

In GP models we parameterise our kernels with a set of hyperparameters to allow for a degree of flexibility. Having marginalised over the latent space, it is common to optimise the variational parameters and hyperparameters simultaneously through maximum likelihood. However, a fully Bayesian model would infer all parameters and latent variables, and integrate over them with respect to their posterior distributions to account for their uncertainty when making predictions. Unfortunately these integrals cannot be obtained analytically. We may choose to perform this inference using stochastic approximations based on MCMC, but find that the strong coupling between the latent variables and hyperparameters a posteriori makes sampling challenging and results in poor mixing.

To break this correlation when sampling, we propose the use of Pseudo-Marginal Monte Carlo, approximately integrating out the latent variables while retaining the exact posterior distribution over hyperparameters as the invariant distribution of our Markov chain, together with its ergodicity properties. This work shows the ability of a fully Bayesian treatment to better quantify uncertainty compared to maximum likelihood or other optimisation-based approaches (joint work with Sara Wade and Akeel Shah).
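The core idea of pseudo-marginal MCMC described above can be sketched in a minimal toy setting (not the talk's GPLVM model): a Metropolis-Hastings chain over a single hyperparameter whose intractable marginal likelihood is replaced by an unbiased importance-sampling estimate over the latent variables. The model, parameter names, and particle counts here are illustrative assumptions; the key pseudo-marginal property is that, because the estimator is unbiased and the stored estimate is recycled between proposals, the chain still targets the exact hyperparameter posterior.

```python
import numpy as np

def log_lik_estimate(y, log_var, rng, n_particles=64):
    """Unbiased importance-sampling estimate of log p(y | theta) for a toy
    model: latent x_n ~ N(0, 1), observation y_n | x_n ~ N(x_n, exp(log_var)).
    Latents are drawn from their prior as the importance proposal."""
    var = np.exp(log_var)
    # independent particles per observation keep the per-point estimates unbiased
    x = rng.standard_normal((len(y), n_particles))
    log_w = -0.5 * np.log(2 * np.pi * var) - 0.5 * (y[:, None] - x) ** 2 / var
    # log-mean-exp over particles for each point, then sum over points
    return np.sum(np.logaddexp.reduce(log_w, axis=1) - np.log(n_particles))

def pseudo_marginal_mh(y, n_iters=2000, step=0.5, seed=0):
    """Pseudo-marginal Metropolis-Hastings over theta = log-variance,
    with a standard normal prior on theta (an illustrative choice)."""
    rng = np.random.default_rng(seed)
    log_prior = lambda t: -0.5 * t ** 2
    theta = 0.0
    log_p_hat = log_lik_estimate(y, theta, rng)   # noisy estimate, stored
    samples = []
    for _ in range(n_iters):
        prop = theta + step * rng.standard_normal()
        log_p_prop = log_lik_estimate(y, prop, rng)
        log_alpha = (log_p_prop + log_prior(prop)) - (log_p_hat + log_prior(theta))
        if np.log(rng.uniform()) < log_alpha:
            # crucially, the estimate is carried along with the accepted state;
            # re-estimating at the current state would break exactness
            theta, log_p_hat = prop, log_p_prop
        samples.append(theta)
    return np.array(samples)
```

In the BGPLVM setting the latent variables play the role of `x` and the kernel hyperparameters the role of `theta`; replacing exact latent integration with an unbiased estimate breaks the posterior coupling that hampers joint sampling, while the accept/reject step above leaves the exact hyperparameter posterior invariant.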


Last change: Wed Sep-06-17 15:00:32
Eurandom 2012