

Inference

Inference is the task of computing the posterior probability of the latent variables $ \boldsymbol{S}$ given the data $ \boldsymbol{X}$, a fixed set of parameters $ \boldsymbol{\theta}$, and the model structure $ \mathcal{H}$, according to Bayes' rule (Equation 2.1). The distribution is often very high dimensional, so for all practical purposes it is represented as marginal distributions (see Eq. 2.2) over groups of variables. The computations are not straightforward, and therefore one needs algorithms such as belief propagation, described in Section 3.1.1.
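To make the application of Bayes' rule concrete, the following is a minimal sketch for a hypothetical toy model with one binary latent variable and one binary observation; the prior and likelihood tables are invented for illustration and are not from the text.

```python
# Hypothetical toy model: a binary latent variable s and a binary
# observation x, with fixed parameters theta = (prior, likelihood).
prior = {0: 0.6, 1: 0.4}                    # p(s)
likelihood = {0: {0: 0.9, 1: 0.1},          # p(x | s = 0)
              1: {0: 0.2, 1: 0.8}}          # p(x | s = 1)

def posterior(x):
    """Bayes' rule: p(s | x) = p(x | s) p(s) / p(x)."""
    joint = {s: likelihood[s][x] * prior[s] for s in prior}
    evidence = sum(joint.values())          # p(x), the normaliser
    return {s: joint[s] / evidence for s in joint}

post = posterior(1)                         # observe x = 1
```

In a realistic model the latent state space is far too large for this kind of direct enumeration, which is exactly why the marginal representation and belief propagation mentioned above are needed.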

One of the advantages of graphical models is that the handling of missing values in data is straightforward and consistent. Instead of belonging to the data $ \boldsymbol{X}$, missing values belong to the latent variables $ \boldsymbol{S}$, and their reconstructions (or posterior distributions) are inferred like any other latent variables. The reconstruction of missing values in linear and nonlinear models is studied in Section 4.1.4 and Publication II.
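The treatment of a missing value as a latent variable can be sketched as follows; the model is hypothetical (two observations conditionally independent given a binary latent variable, with invented probability tables), with the second observation missing.

```python
# Hypothetical sketch: x1 and x2 are conditionally independent given a
# binary latent s; x2 is missing, so it is treated as a latent variable
# and its posterior is inferred from the observed x1 alone.
prior = {0: 0.5, 1: 0.5}                            # p(s)
lik = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}    # p(x_i | s), shared

def reconstruct_missing(x1):
    # Posterior over the latent s given the observed x1.
    joint = {s: lik[s][x1] * prior[s] for s in prior}
    z = sum(joint.values())
    p_s = {s: joint[s] / z for s in joint}
    # Posterior of the missing x2: sum_s p(x2 | s) p(s | x1).
    return {x2: sum(lik[s][x2] * p_s[s] for s in p_s) for x2 in (0, 1)}
```

The missing value's reconstruction is thus obtained by the same inference machinery as any other latent variable, with no special-case handling of the data.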

Exact inference by belief propagation has exponential computational complexity with respect to the size of the largest clique in the Markov network (see Figure 3.1), so often one needs to settle for approximate inference. In some extensions, such as the nonlinear state-space models described in Section 4.3, no analytical solution exists at all. Different kinds of approximate methods are described in Section 2.5.
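The exponential blow-up can be illustrated by computing a marginal by brute-force enumeration: for $ n$ binary variables in a clique, the sum runs over $ 2^n$ joint configurations. The pairwise-style potential below is a hypothetical stand-in; only the number of terms matters.

```python
import itertools

def potential(config):
    # Hypothetical unnormalised clique potential; its exact form is
    # irrelevant here, only the cost of summing over it.
    return 1.0 + sum(config)

def brute_force_marginal(n, value):
    """Marginal p(s_1 = value) over n binary variables, obtained by
    enumerating all 2**n joint configurations -- exponential in n."""
    total = unnorm = 0.0
    for config in itertools.product((0, 1), repeat=n):
        w = potential(config)
        total += w
        if config[0] == value:
            unnorm += w
    return unnorm / total
```

Belief propagation avoids this full enumeration by exploiting the factorisation of the joint distribution, but its cost still grows exponentially in the largest clique size, which motivates the approximate methods of Section 2.5.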


Tapani Raiko 2006-11-21