Graphical models provide a solid framework for machine learning and
artificial intelligence. In this work, graphical models are extended,
based on approximate Bayesian inference, towards a system that can plan,
infer, and interact with its environment using both discrete and
continuous variables as well as structured representations. The
concrete steps taken in this work can be summarised as follows:
- A novel framework in which Bayesian networks may include nonlinear
dependencies and algorithms for variational Bayesian learning are
derived automatically.
- An extension of hidden Markov models to deal with sequences of
structured symbols rather than characters, with four
relevant algorithms and an application in the domain of bioinformatics.
- The first graphical model that can handle both nonlinear dependencies
and relations.
- An extension of a method for learning nonlinear state-space models to
control.
- A novel algorithm for inference in nonlinear state-space models that
is both efficient and reliable.
- A study of methods for handling corrupted or inaccurate values in
data.
- A comparison of latent variable models based on their ability to
reconstruct missing values in data.
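The last item, reconstructing missing values with a latent variable model, can be illustrated with a minimal sketch. The iterative PCA imputation below is a generic stand-in rather than any of the specific models studied in this work, and all data, dimensions, and iteration counts are made-up assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples lying near a 2-D subspace of a 5-D space.
W_true = rng.normal(size=(5, 2))
X = rng.normal(size=(200, 2)) @ W_true.T + 0.05 * rng.normal(size=(200, 5))

# Corrupt 20% of the entries to simulate missing values.
mask = rng.random(X.shape) < 0.2
X_obs = X.copy()
X_obs[mask] = np.nan

# Iterative PCA imputation: initialise missing entries with column
# means, then alternate between a low-rank fit and re-imputation.
X_hat = np.where(mask, np.nanmean(X_obs, axis=0), X_obs)
for _ in range(50):
    mu = X_hat.mean(axis=0)
    U, s, Vt = np.linalg.svd(X_hat - mu, full_matrices=False)
    low_rank = mu + (U[:, :2] * s[:2]) @ Vt[:2]  # rank-2 reconstruction
    X_hat[mask] = low_rank[mask]                 # update only missing entries

rmse = np.sqrt(np.mean((X_hat[mask] - X[mask]) ** 2))
print(f"reconstruction RMSE on missing entries: {rmse:.3f}")
```

Because the toy data genuinely lie near a low-dimensional subspace, the reconstruction error on the missing entries ends up close to the noise level; the latent variable models in this work are evaluated by the same criterion on real data.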
Tapani Raiko
2006-11-21