Recently there has been considerable interest in Bayesian methods, but few applications to unsupervised learning. One of the most important benefits of Bayesian methods is the possibility of model comparison. In supervised learning, cross-validation or other methods can be used for this purpose. In unsupervised learning, however, this is usually not possible, since reconstruction error decreases with model complexity even on the test set. The ability to optimise the structure of the model is thus particularly valuable in unsupervised learning.
Both ensemble learning, first used in , and independent component analysis with a mixture-of-Gaussians model for the sources, first used in , are existing techniques, but they have not been combined before.
The reader is assumed to have basic knowledge of Bayesian probability theory and ICA.