Contents
Abstract
Preface
Contents
List of abbreviations and glossary of terms
Introduction
    Contributions and structure of the thesis
Publications of the thesis
    Original papers
    Contents of the publications and author's contributions
Bayesian probability theory
    Propositions
    Elementary rules of Bayesian probability theory
        Bayes' rule
        Marginalisation principle
    Decision theory
    Summary: learning, reasoning and action
Bayesian learning in practice
    Probability density for real valued variables
    Methods for approximating the posterior probability
        Point estimates
            EM algorithm
        Stochastic sampling
        Parametric approximations
    Information-theoretic approaches to learning
        Coding and complexity
        Minimum message length inference
    Ensemble learning
        Cost function
        Bits-back argument
    Specification of the model and priors
        Noise models
        Causal structure of the model
        Supervised vs. unsupervised learning
        Priors
            Hierarchical model instead of prior
            Uninformative priors
Linear factor analysis and its extensions
    Linear Gaussian factor analysis model
        Neural network interpretation of the model
        Algorithms
    Non-Gaussian factors
        Algorithms
    Nonlinear mapping
        Multi-layer perceptron networks
            Algorithms
        Self-organising map
            Auto-associative models
    Dynamic factors
        Algorithms
Bayesian nonlinear factor analysis
    Model
    Why simple methods fail
    Approximation of the posterior
    Automatic pruning
    Results
Discussion
    Biological relevance
        Cerebral cortex and generative models
        Cortex and nonlinear factor analysis
        Structural development
    Future trends
References
Publication I
Publication II
Publication III
Publication IV
Publication V
Publication VI
Publication VII
Publication VIII