Inference is the task of computing the posterior probability over the latent variables given a fixed set of parameters, the data, and the model structure, according to Bayes' rule (Equation 2.1). The posterior distribution is often very high-dimensional, and for all practical purposes it is represented as marginal distributions (see Eq. 2.2) over groups of variables. The computations are not straightforward, and therefore one needs to use algorithms such as belief propagation, described in Section 3.1.1.
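As an illustration of how belief propagation recovers marginals from a high-dimensional joint, the following sketch runs the sum-product algorithm on a small chain of three binary variables. The potentials and variable names are illustrative assumptions, not taken from the thesis; on a chain the messages are exact.

```python
import numpy as np

# Illustrative chain x1 - x2 - x3 of binary variables (assumed model,
# not from the thesis): unary potentials phi and one shared pairwise
# potential psi on both edges, all unnormalised.
phi = [np.array([0.7, 0.3]),
       np.array([0.5, 0.5]),
       np.array([0.2, 0.8])]
psi = np.array([[1.2, 0.4],
                [0.4, 1.2]])

n = len(phi)

# Forward messages: m_f[i](x_i) = sum over x_{i-1} of
# psi(x_{i-1}, x_i) * phi_{i-1}(x_{i-1}) * m_f[i-1](x_{i-1}).
m_f = [np.ones(2) for _ in range(n)]
for i in range(1, n):
    m_f[i] = psi.T @ (phi[i - 1] * m_f[i - 1])

# Backward messages: m_b[i](x_i) = sum over x_{i+1} of
# psi(x_i, x_{i+1}) * phi_{i+1}(x_{i+1}) * m_b[i+1](x_{i+1}).
m_b = [np.ones(2) for _ in range(n)]
for i in range(n - 2, -1, -1):
    m_b[i] = psi @ (phi[i + 1] * m_b[i + 1])

# Marginal at each node: local potential times incoming messages,
# normalised.
marginals = []
for i in range(n):
    b = phi[i] * m_f[i] * m_b[i]
    marginals.append(b / b.sum())

# Brute-force joint for comparison (feasible only for tiny models).
joint = np.einsum('a,b,c,ab,bc->abc', phi[0], phi[1], phi[2], psi, psi)
joint /= joint.sum()
```

The brute-force enumeration at the end is what message passing avoids: its cost grows exponentially with the number of variables, whereas the messages above cost only a handful of small matrix-vector products.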
One of the advantages of graphical models is that the handling of missing values in data is straightforward and consistent. Instead of belonging to the data, missing values are treated as latent variables, and their reconstructions (or posterior distributions) are inferred like any other latent variables. Reconstruction of missing values in linear and nonlinear models is studied in Section 4.1.4 and Publication II.
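A minimal sketch of this idea, with all numbers illustrative assumptions rather than anything from the thesis: in a bivariate Gaussian model with known parameters, a missing coordinate is simply a latent variable, and its posterior given the observed coordinate follows from standard Gaussian conditioning.

```python
import numpy as np

# Assumed bivariate Gaussian model with known mean and covariance
# (illustrative values, not from the thesis).
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

# The first coordinate is observed, the second is missing.
x_obs = 2.5

# Posterior over the missing coordinate x2 given x1 = x_obs:
# standard conditioning of a Gaussian, giving another Gaussian.
post_mean = mu[1] + Sigma[1, 0] / Sigma[0, 0] * (x_obs - mu[0])
post_var = Sigma[1, 1] - Sigma[1, 0] * Sigma[0, 1] / Sigma[0, 0]
```

The posterior mean serves as the reconstruction of the missing value, while the posterior variance quantifies the uncertainty of that reconstruction, which a plain imputed point value would not provide.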
Exact inference by belief propagation has exponential computational complexity with respect to the size of the largest clique in the Markov network (see Figure 3.1), so one often needs to settle for approximate inference. In some extensions, such as the nonlinear state-space models described in Section 4.3, there is no analytical solution at all. Different kinds of approximate methods are described in Section 2.5.
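One common family of approximate methods is sampling. The sketch below estimates a marginal in a small binary chain with a Gibbs sampler; the model and its potentials are illustrative assumptions, not from the thesis. In this toy case the exact marginal is also computable for comparison, which would not hold once cliques grow large.

```python
import numpy as np

# Assumed 3-node binary chain x1 - x2 - x3 with illustrative
# potentials (not from the thesis).
phi = [np.array([0.7, 0.3]),
       np.array([0.5, 0.5]),
       np.array([0.2, 0.8])]
psi = np.array([[1.2, 0.4],
                [0.4, 1.2]])

rng = np.random.default_rng(0)

def gibbs_marginal_x2(n_samples=20000, burn_in=1000):
    """Estimate p(x2) by Gibbs sampling over the chain."""
    x = np.zeros(3, dtype=int)
    counts = np.zeros(2)
    for t in range(burn_in + n_samples):
        # p(x1 | x2) is proportional to phi1(x1) * psi(x1, x2).
        p = phi[0] * psi[:, x[1]]
        x[0] = rng.choice(2, p=p / p.sum())
        # p(x2 | x1, x3) is proportional to
        # phi2(x2) * psi(x1, x2) * psi(x2, x3).
        p = phi[1] * psi[x[0], :] * psi[:, x[2]]
        x[1] = rng.choice(2, p=p / p.sum())
        # p(x3 | x2) is proportional to phi3(x3) * psi(x2, x3).
        p = phi[2] * psi[x[1], :]
        x[2] = rng.choice(2, p=p / p.sum())
        if t >= burn_in:
            counts[x[1]] += 1
    return counts / counts.sum()

approx = gibbs_marginal_x2()
```

Each Gibbs update touches only a variable's neighbours in the graph, so its cost is independent of the largest clique size; the price is that the answer is a noisy estimate that improves only with more samples.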