Most of the experimental results on nonlinear factor analysis are summarised in publication V. Reference [128] presents many of the same experiments and, in addition, an experiment which shows how factors can be estimated for new observations that were not present during learning. Publication VIII reports experiments with the dynamic extension of the nonlinear factor analysis algorithm.
In general, the experiments have verified that ensemble learning can be successfully applied to nonlinear factor analysis using MLP networks. Ensemble learning avoids overfitting, which is a severe problem for simpler algorithms. It also makes it easy to optimise the structure of the model, simply by minimising the cost function: the number of factors and the number of hidden neurons of the MLP network, for instance, can be reliably chosen this way.
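The structure-selection procedure is generic: evaluate the cost function for each candidate structure and keep the one that minimises it. The actual ensemble-learning cost of the nonlinear factor analysis model is beyond the scope of a short sketch, so the toy example below substitutes a deliberately simple stand-in — one-dimensional k-means distortion plus a complexity penalty `LAM` — to select the number of components; the data, the clustering model, and the penalty weight are all illustrative assumptions, not the thesis's actual cost.

```python
# Toy data: two well-separated groups on the real line.
xs = [0.05 * i for i in range(20)] + [10.0 + 0.05 * i for i in range(20)]

def kmeans_1d(xs, k, iters=50):
    # Deterministic quantile initialisation of the k centres.
    s = sorted(xs)
    centers = [s[int((j + 0.5) * len(s) / k)] for j in range(k)]
    for _ in range(iters):
        # Assign every point to its nearest centre.
        groups = [[] for _ in range(k)]
        for x in xs:
            j = min(range(k), key=lambda c: (x - centers[c]) ** 2)
            groups[j].append(x)
        # Move each centre to the mean of its group (keep it if empty).
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    # Distortion: summed squared distance to the nearest centre.
    sse = sum(min((x - c) ** 2 for c in centers) for x in xs)
    return centers, sse

LAM = 10.0  # complexity penalty weight -- an assumption for this toy example
costs = {k: kmeans_1d(xs, k)[1] + LAM * k for k in range(1, 5)}
best_k = min(costs, key=costs.get)
print(best_k)  # -> 2: the penalised cost picks out the true two groups
```

The same pattern carries over to the thesis's setting: the candidate "structures" become numbers of factors or hidden neurons, and the penalised distortion is replaced by the ensemble-learning cost, which penalises complexity automatically.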
It is well known that MLP networks have local minima (see, e.g., [39,8]). This seems to be a nearly unavoidable consequence of using complex models with rich representational capacities. It is therefore advisable to try several different random initialisations of the network and choose the result that minimises the cost function.
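The restart strategy itself is generic: run the same optimisation from several random initialisations and keep the run with the lowest cost. A minimal sketch on an artificial one-dimensional multimodal cost — the cost function, learning rate, and number of restarts are illustrative assumptions, not the thesis's model:

```python
import random

# Toy multimodal cost: a local minimum near x ~ 1.13 and the global
# minimum near x ~ -1.30, so a single gradient-descent run can get
# stuck depending on its starting point.
def cost(x):
    return x ** 4 - 3 * x ** 2 + x

def grad(x):
    return 4 * x ** 3 - 6 * x + 1

def descend(x, lr=0.01, steps=500):
    # Plain gradient descent from the given initialisation.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

rng = random.Random(0)
# Several random restarts; keep the solution with the lowest cost.
runs = [descend(rng.uniform(-2.0, 2.0)) for _ in range(10)]
best = min(runs, key=cost)
print(best, cost(best))  # lands in the global basin near x ~ -1.30
```

In the thesis's setting the role of `descend` is played by the full ensemble-learning update and the role of `cost` by the ensemble-learning cost function, which makes the competing runs directly comparable.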
The experiments with artificial data show that if the structure of the observations matches the model, then the algorithms developed in this thesis are able to reveal the original, independent factors. In many realistic cases there is reason to believe that the underlying structure of the observations is more accurately described by a nonlinear than by a linear mapping from underlying factors to observations. Experiments with measurements from an industrial pulp process, reported in publication V, have verified this at least for that data set: with a nonlinear model, far fewer factors are needed to represent the data than with a linear model.