
Introduction

Many types of unsupervised learning can be viewed as generative learning, where the goal is to find a model which explains how the observations were generated. The hypothesis is that there are latent variables which have generated the observations through an unknown mapping. The goal of learning is then to identify both the latent variables and the unknown mapping.
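In symbols, such a model is often summarised as

    x = f(s) + n,

where x is the observation vector, s the vector of latent variables, f the unknown mapping and n additive noise. This additive-noise form is one common way of writing the model and is used here only to fix notation.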

The success of the model depends on how well it can capture the structure of the phenomena underlying the observations. Sometimes the process is well characterised by assuming a discrete latent variable which produces different observations in its different states. In that case the generative model used in vector quantisation is appropriate. If there is reason to assume that several independent latent variables have generated the observations via a linear mapping, then the model used in independent component analysis (ICA) suits the problem well.
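To make the linear case concrete, the following is a minimal sketch of an ICA-style generative model in Python; the dimensions, source distribution and noise level are assumptions made purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sizes: 3 independent latent sources, 10 observed variables.
    n_sources, n_obs, n_samples = 3, 10, 1000

    # Independent non-Gaussian sources (Laplacian here), one row per sample.
    s = rng.laplace(size=(n_samples, n_sources))

    # Unknown linear mixing matrix; ICA would try to recover it from x alone.
    A = rng.normal(size=(n_sources, n_obs))

    # Observations: linear mixture of the sources plus a little sensor noise.
    x = s @ A + 0.1 * rng.normal(size=(n_samples, n_obs))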

In many realistic cases it is reasonable to assume that there are several latent variables which affect the observations nonlinearly. One example is the effect of pressure and temperature on the properties of the end product of a chemical process. Although many effects in the real world are locally linear, the overall effects are almost always nonlinear. Moreover, there are usually several factors whose nature and effect on the observations are completely unknown and whose direct measurement is impossible for practical reasons.

The goal of this work is to develop methods for inferring the hidden causes, the latent variables, from the observations alone. The nonlinear mapping from the unknown latent variables to the observations is modelled with a multi-layer perceptron (MLP) network.
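As a rough illustration of this modelling assumption, the sketch below generates observations by passing latent variables through a one-hidden-layer MLP with a tanh nonlinearity. The layer sizes, the Gaussian prior on the latent variables and the noise level are assumptions chosen for the example; in the actual learning problem the weights are of course unknown.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sizes: 2 latent variables, 20 hidden units, 10 observations.
    n_latent, n_hidden, n_obs, n_samples = 2, 20, 10, 1000

    # Latent variables drawn from a simple prior (standard Gaussian here).
    s = rng.normal(size=(n_samples, n_latent))

    # One-hidden-layer MLP f(s) = tanh(s A + a) B + b with random weights;
    # in learning, these weights are unknown and must be inferred from x.
    A = rng.normal(size=(n_latent, n_hidden))
    a = rng.normal(size=n_hidden)
    B = rng.normal(size=(n_hidden, n_obs))
    b = rng.normal(size=n_obs)

    # Observations: nonlinear mapping of the latent variables plus noise.
    x = np.tanh(s @ A + a) @ B + b + 0.1 * rng.normal(size=(n_samples, n_obs))

Learning then amounts to inverting this generative process: given only x, estimate both the latent variables s and the parameters of the mapping.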

