EM: expectation maximisation
FA: factor analysis
GTM: generative topographic mapping
ICA: independent component analysis
IFA: independent factor analysis
MAP: maximum a posteriori
MDL: minimum description length
ML: maximum likelihood
MLP: multi-layer perceptron
PCA: principal component analysis
RBF: radial basis function
SOM: self-organising map
ST: signal transformation
- artificial neural network:
- A model which consists of simple
building-blocks. The development of such models has been inspired
by neurobiological findings. The building-blocks are termed neurons
in analogy to the neurons of biological brains.
- auto-associative learning:
- A representation of the observations
is learned by finding a mapping from the observations to themselves
through an ``information bottleneck'' which forces the model to
produce a compact coding of the observations. The recognition
model and the generative model are learned simultaneously (a minimal
sketch in code is given after this glossary).
- Bayesian probability theory:
- In Bayesian probability theory,
probability is a measure of subjective belief, as opposed to
frequentist statistics where probability is interpreted as the
relative frequency of occurrences in an infinite sequence of
trials. Beliefs are updated using Bayes' rule, which is written out
after this glossary.
- factor:
- In generative models, the regularities in the
observations are assumed to have been caused by underlying factors,
also termed hidden causes, latent variables or sources.
- factor analysis:
- A technique for finding a generative
model which can represent some of the statistical structure of
the observations. Usually refers to linear factor analysis, where
the generative model is linear (the standard linear model is written
out after this glossary).
- feature:
- A feature describes a relevant aspect of the observations.
The term is often used in connection with recognition models.
It bears resemblance to the term factor, which is more common in
connection with generative models.
- ensemble learning:
- A technique for approximating the exact
application of Bayesian probability theory. The exact posterior
distribution is approximated by a simpler distribution and the misfit
between the two is minimised (the cost function is written out after
this glossary).
- generative model:
- A model which explicitly states how the
observations are assumed to have been generated. See recognition model.
- graphical model:
- A graphical representation of the causal
structure of a probabilistic model. Variables are denoted by
circles and arrows are used for representing the conditional
dependencies.
- hidden cause:
- See factor.
- latent variable:
- See factor.
- posterior probability:
- Expresses the beliefs after making an
observation. Sometimes referred to as the posterior.
- prior probability:
- Expresses the beliefs before making an
observation. Sometimes referred to as the prior.
- probability density:
- Any single value of a continuous-valued
variable usually has zero probability; only a range of values
has nonzero probability. The probability of a continuous
variable can therefore be characterised by a probability density,
the probability of a range divided by the size of the range, in the
limit of a vanishingly small range (the defining limit is written
out after this glossary).
- probability mass:
- In analogy to physical mass and density,
ordinary probability can be called probability mass in order to
distinguish it from probability density.
- recognition model:
- A model which states how features can
be obtained from the observations. See generative model.
- signal transformation approach:
- Finds a recognition model
by optimising a given criterion over the resulting features.
- source:
- See factor.
- supervised learning:
- Aims at building a model which can mimic
the responses of a ``teacher'' who provides two sets of
observations: inputs and the corresponding desired outputs. See
unsupervised learning.
- unsupervised learning:
- The goal in unsupervised learning is to
find an internal representation of the statistical structure of the
observations. See supervised learning.
- volume:
- In analogy to physical mass, density and volume, the
size of a range of continuous-valued variables can be called a volume.
See probability density.
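
The sketch referred to in the entry on auto-associative learning is given
below. It is a minimal illustration in Python/NumPy, assuming a linear model
trained by plain gradient descent; the data, dimensionalities and learning
rate are illustrative choices only and are not taken from the thesis.

import numpy as np

# Minimal sketch of auto-associative learning (illustrative only):
# observations are mapped to themselves through a low-dimensional
# "information bottleneck". The recognition model (W_enc) and the
# generative model (W_dec) are learned simultaneously by minimising
# the reconstruction error with plain gradient descent.

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                  # 500 observations, 10 dimensions
X[:, 5:] = X[:, :5] @ rng.normal(size=(5, 5))   # add redundancy so compression is possible

n_hidden = 3                                    # width of the bottleneck
W_enc = 0.1 * rng.normal(size=(10, n_hidden))   # recognition model
W_dec = 0.1 * rng.normal(size=(n_hidden, 10))   # generative model
lr = 1e-3

for step in range(2000):
    S = X @ W_enc                               # compact coding (features)
    X_hat = S @ W_dec                           # reconstruction of the observations
    err = X_hat - X
    grad_dec = S.T @ err / len(X)               # gradients of the mean squared error
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print("mean squared reconstruction error:", float(np.mean(err ** 2)))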
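
Bayes' rule, referred to in the entry on Bayesian probability theory and
implicit in the entries on the prior and posterior probability, can be
written in the usual notation (not notation fixed by this section): the
posterior belief about the unknown quantities \theta after observing the
data X is

  p(\theta \mid X) = \frac{p(X \mid \theta) \, p(\theta)}{p(X)},

where p(\theta) is the prior probability, p(X \mid \theta) the likelihood
of the observations and p(X) a normalising constant.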
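
For the entry on factor analysis, the linear generative model usually takes
the following form; the symbols are the conventional ones (x(t) observations,
s(t) factors, A the mixing matrix, n(t) noise) and are used here only for
illustration:

  x(t) = A s(t) + n(t).

In standard linear factor analysis both the factors s(t) and the noise n(t)
are assumed to be Gaussian.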
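
For the entry on ensemble learning, the cost function is, in its usual form,
the Kullback-Leibler divergence between an approximating distribution
q(\theta) and the true posterior p(\theta \mid X); again the notation is the
conventional one rather than one defined in this section:

  C = \int q(\theta) \ln \frac{q(\theta)}{p(\theta \mid X)} \, d\theta \ge 0.

The divergence vanishes only when the approximation equals the true
posterior. In practice the intractable p(\theta \mid X) is replaced by the
joint density p(X, \theta), which changes the cost only by the additive
constant -\ln p(X) and therefore leaves the minimum unchanged.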
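
The limit referred to in the entry on probability density can be written in
standard notation for a scalar continuous variable X:

  p(x) = \lim_{\Delta x \to 0} \frac{P(x \le X < x + \Delta x)}{\Delta x},

i.e. the probability mass of a small range divided by the volume of the
range, in the limit of a vanishing range.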