Natural Conjugate Gradient in Variational Inference
Antti Honkela, Matti Tornio, Tapani Raiko, Juha Karhunen
Adaptive Informatics Research Centre, Helsinki University of Technology
P.O. Box 5400, FI-02015 TKK, Finland
{Antti.Honkela, Matti.Tornio, Tapani.Raiko, Juha.Karhunen}@tkk.fi
http://www.cis.hut.fi/projects/bayes/
Abstract:
Variational methods for approximate inference in machine
learning often adapt a parametric probability
distribution to optimize a given objective
function. This view is especially useful when applying variational
Bayes (VB) to models outside the
conjugate-exponential family. For such models, variational EM algorithms are
not easily available, and gradient-based methods are often
used as alternatives. However, regular gradient methods ignore the
Riemannian geometry of the manifold of probability distributions, thus
leading to slow convergence.
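To state the geometric point precisely: the natural gradient premultiplies
the ordinary gradient by the inverse Fisher information of the approximating
distribution. In generic notation (not fixed by this abstract), with
objective F, variational parameters xi, and approximation q(theta | xi),

\[
\tilde{\nabla} F(\xi) \;=\; G^{-1}(\xi)\, \nabla F(\xi),
\qquad
G_{ij}(\xi) \;=\; \mathrm{E}_{q(\theta \mid \xi)}\!\left[
  \frac{\partial \ln q(\theta \mid \xi)}{\partial \xi_i}\,
  \frac{\partial \ln q(\theta \mid \xi)}{\partial \xi_j}
\right].
\]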
We propose using the Riemannian structure of the
approximations and the natural gradient to speed up a conjugate
gradient method for variational learning and inference.
As the form of the approximating distribution is often very simple, the
natural gradient can be used for both model parameters and latent
variables without significant computational overhead.
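To illustrate why the overhead can be small, the following is a minimal
sketch in Python/NumPy (all names and the toy KL objective are our
illustration, not the paper's algorithm): for a fully factorized Gaussian
approximation the Fisher information is diagonal, so the natural gradient is
an elementwise rescaling of the plain gradient, on top of which a
Polak-Ribiere conjugate direction is formed.

    import numpy as np

    def natural_gradient(grad_mu, grad_v, v):
        # For q = N(mu, diag(v)) the Fisher information is diagonal:
        # entries 1/v for the means and 1/(2 v^2) for the variances,
        # so "inverting" it is an elementwise product, not a matrix solve.
        return v * grad_mu, 2.0 * v ** 2 * grad_v

    def kl_gradients(mu, v, m, s):
        # Gradients of KL(q || p) with q = N(mu, diag(v)), p = N(m, diag(s));
        # a stand-in objective for the variational free energy.
        return (mu - m) / s, 0.5 * (1.0 / s - 1.0 / v)

    mu, v = np.zeros(3), np.ones(3)                      # initial q
    m, s = np.array([1.0, -2.0, 0.5]), np.full(3, 0.25)  # fixed target p
    g_prev = d_prev = None
    for _ in range(200):
        nat_mu, nat_v = natural_gradient(*kl_gradients(mu, v, m, s), v)
        g = np.concatenate([nat_mu, nat_v])
        if g_prev is None:
            d = -g                        # first step: natural steepest descent
        else:
            # Polak-Ribiere coefficient on the natural gradients (Euclidean
            # inner products; a simplification of the metric-aware formula).
            beta = max(0.0, g @ (g - g_prev) / (g_prev @ g_prev))
            d = -g + beta * d_prev
        mu = mu + 0.1 * d[:3]             # fixed step in place of a line search
        v = np.maximum(v + 0.1 * d[3:], 1e-6)  # keep variances positive
        g_prev, d_prev = g, d
    print(mu, v)                          # mu -> m and v -> s as KL -> 0

Clamping beta at zero is the standard PR+ safeguard; when it triggers, the
update reduces to natural steepest descent.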
Experiments in variational Bayesian learning of nonlinear state-space
models for real speech data show more than ten-fold speedups over
alternative learning algorithms.