Updating for the Gaussian node
Here we show how to minimise the function
\begin{displaymath}
  \mathcal{C}(\overline{s},\widetilde{s}) \;=\;
  M\overline{s}
  + V\left[\left(\overline{s}-s_{0}\right)^{2}+\widetilde{s}\right]
  + E\exp\!\left(\overline{s}+\frac{\widetilde{s}}{2}\right)
  - \frac{1}{2}\ln\widetilde{s}
  \qquad\qquad (54)
\end{displaymath}
where $M$, $V$, $E$, and $s_{0}$ are scalar constants.
A unique solution exists when $V>0$ and $E\geq 0$. This problem
occurs when a Gaussian posterior with mean $\overline{s}$ and variance $\widetilde{s}$ is
fitted to a probability distribution whose logarithm has both a
quadratic and an exponential part, resulting from the Gaussian prior and
log-Gamma likelihoods, respectively, and the Kullback-Leibler divergence
is used as the measure of the misfit.
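To make the origin of the terms concrete, the following sketch assumes a negative log-density of the form below; the symbols $M$, $V$, $E$, and $s_{0}$ are the constants introduced above, and the parametrisation is a reconstruction rather than a quote. The standard Gaussian expectations then yield each term of Equation (54):
\begin{align*}
  -\ln p(s) &= M s + V\left(s-s_{0}\right)^{2} + E\exp(s) + \mathrm{const},\\
  \operatorname{E}_{q}\!\left[s\right] &= \overline{s},\qquad
  \operatorname{E}_{q}\!\left[\left(s-s_{0}\right)^{2}\right]
    = \left(\overline{s}-s_{0}\right)^{2}+\widetilde{s},\qquad
  \operatorname{E}_{q}\!\left[\exp(s)\right]
    = \exp\!\left(\overline{s}+\frac{\widetilde{s}}{2}\right),\\
  D_{\mathrm{KL}}\!\left(q\,\|\,p\right)
    &= \operatorname{E}_{q}\!\left[\ln q(s)\right]
     - \operatorname{E}_{q}\!\left[\ln p(s)\right]
    = \mathcal{C}(\overline{s},\widetilde{s}) + \mathrm{const},
\end{align*}
since the entropy of $q(s)=\mathcal{N}(s;\overline{s},\widetilde{s})$ contributes the $-\frac{1}{2}\ln\widetilde{s}$ term up to an additive constant.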
In the special case $E=0$, the minimum of $\mathcal{C}(\overline{s},\widetilde{s})$ can be found
analytically and it is
$\overline{s}=s_{0}-\frac{M}{2V}$,
$\widetilde{s}=\frac{1}{2V}$.
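With the reconstructed form of Equation (54), this follows by setting the partial derivatives to zero; when $E=0$ the exponential term vanishes and the two conditions decouple:
\begin{align*}
  \frac{\partial\mathcal{C}}{\partial\overline{s}}
    = M + 2V\left(\overline{s}-s_{0}\right) = 0
    &\quad\Longrightarrow\quad \overline{s}=s_{0}-\frac{M}{2V},\\
  \frac{\partial\mathcal{C}}{\partial\widetilde{s}}
    = V - \frac{1}{2\widetilde{s}} = 0
    &\quad\Longrightarrow\quad \widetilde{s}=\frac{1}{2V}.
\end{align*}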
In other cases, where $E>0$, the minimisation is performed iteratively. At
each iteration, one Newton iteration for the mean $\overline{s}$ and one
fixed-point iteration for the variance $\widetilde{s}$ are carried out, as
explained in more detail in the following.
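As a rough illustration of the overall scheme, the following Python sketch applies these two updates to the reconstructed cost above, starting from the analytic $E=0$ solution; the function name, the initialisation, and the stopping rule are illustrative choices and not taken from the paper.
\begin{verbatim}
import math

def minimise_gaussian_cost(M, V, E, s0, n_iter=100, tol=1e-12):
    """Minimise C(sb, sv) = M*sb + V*((sb - s0)**2 + sv)
       + E*exp(sb + sv/2) - 0.5*log(sv) over the posterior
       mean sb and variance sv (assumes V > 0 and E >= 0)."""
    # Start from the analytic solution of the special case E = 0.
    sb = s0 - M / (2.0 * V)
    sv = 1.0 / (2.0 * V)
    if E == 0.0:
        return sb, sv
    for _ in range(n_iter):
        e = E * math.exp(sb + 0.5 * sv)
        # One Newton iteration for the mean: step = gradient / curvature.
        grad = M + 2.0 * V * (sb - s0) + e
        curv = 2.0 * V + e
        sb_new = sb - grad / curv
        # One fixed-point iteration for the variance, obtained from
        # dC/dsv = V + (E/2)*exp(sb + sv/2) - 1/(2*sv) = 0.
        sv_new = 1.0 / (2.0 * V + E * math.exp(sb_new + 0.5 * sv))
        converged = abs(sb_new - sb) < tol and abs(sv_new - sv) < tol
        sb, sv = sb_new, sv_new
        if converged:
            break
    return sb, sv
\end{verbatim}
A call such as \texttt{minimise\_gaussian\_cost(1.0, 0.5, 2.0, 0.0)} returns an approximate minimiser $(\overline{s},\widetilde{s})$; the detailed forms of these updates are derived in the subsections that follow.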