

Updating $q(s)$ for the Gaussian node

Here we show how to minimise the function

$\displaystyle {\cal C}(m,v) = Mm + V[(m-m_0)^2 + v] + E \exp(m+v/2) - \frac{1}{2} \ln v,$ (54)

where $M$, $V$, $E$, and $m_0$ are scalar constants. A unique solution exists when $V > 0$ and $E \geq 0$. This problem arises when a Gaussian posterior with mean $m$ and variance $v$ is fitted to a probability distribution whose logarithm has both a quadratic part and an exponential part, arising from a Gaussian prior and a log-Gamma likelihood respectively, and the Kullback-Leibler divergence is used as the measure of the misfit.
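Differentiating (54) gives the derivatives on which the updates below are based:

$\displaystyle \frac{\partial {\cal C}}{\partial m} = M + 2V(m-m_0) + E \exp(m+v/2), \qquad \frac{\partial {\cal C}}{\partial v} = V + \frac{E}{2} \exp(m+v/2) - \frac{1}{2v}.$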

In the special case $E = 0$, the minimum of ${\cal C}(m,v)$ can be found analytically by setting the derivatives to zero, which gives $m = m_0 - \frac{M}{2V}$ and $v = \frac{1}{2V}$. When $E > 0$, the minimisation is performed iteratively. Each iteration carries out one Newton step for the mean $m$ and one fixed-point step for the variance $v$, as explained in more detail in the following.
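As a concrete illustration of the overall scheme, the Python sketch below alternates a Newton step for $m$ (using the derivatives above) with a fixed-point step for $v$ obtained by solving $\partial {\cal C}/\partial v = 0$ for the $\frac{1}{2v}$ term, which gives $v = 1/[2V + E\exp(m+v/2)]$. The initialisation at the $E=0$ solution, the stopping tolerance, and the iteration limit are illustrative assumptions, not the exact procedure of the paper.

import math

def minimise_cost(M, V, E, m0, max_iter=100, tol=1e-12):
    """Minimise C(m, v) = M*m + V*((m - m0)**2 + v) + E*exp(m + v/2) - 0.5*log(v).

    Illustrative sketch only: alternates one Newton step for m with one
    fixed-point step for v. Requires V > 0 and E >= 0.
    """
    if E == 0:
        # Special case: analytic minimum.
        return m0 - M / (2 * V), 1 / (2 * V)

    # Start from the E = 0 solution (an assumed, not prescribed, initialisation).
    m = m0 - M / (2 * V)
    v = 1 / (2 * V)

    for _ in range(max_iter):
        e = E * math.exp(m + v / 2)

        # Newton step for the mean: m <- m - C_m / C_mm.
        grad_m = M + 2 * V * (m - m0) + e
        hess_m = 2 * V + e
        m_new = m - grad_m / hess_m

        # Fixed-point step for the variance, from dC/dv = 0:
        # 1/(2v) = V + (E/2) exp(m + v/2)  =>  v = 1 / (2V + E exp(m + v/2)).
        v_new = 1 / (2 * V + E * math.exp(m_new + v / 2))

        converged = abs(m_new - m) < tol and abs(v_new - v) < tol
        m, v = m_new, v_new
        if converged:
            break

    return m, v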

