The building blocks discussed in this paper can be used for constructing a wide variety of models. An important future line of research will be the automated construction of models. The search through different model structures is facilitated by the ability of ensemble learning to automatically shut down unneeded parts of the model: the posterior of a part that does not help in modelling the data reverts to its prior, so that part contributes nothing to the cost function and is effectively pruned. In the experiments reported here we did not make use of sparsely connected networks, but for large models they are likely to prove useful.
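The shutdown effect can be read off directly from the Kullback-Leibler term of the ensemble-learning cost. The following minimal sketch, with illustrative names and data that are assumptions rather than the paper's implementation, flags sources whose Gaussian posterior has collapsed to the unit Gaussian prior:

```python
import numpy as np

def kl_gauss(m, v, m0=0.0, v0=1.0):
    """KL( N(m, v) || N(m0, v0) ) for factorial Gaussian posteriors."""
    return 0.5 * (np.log(v0 / v) + (v + (m - m0) ** 2) / v0 - 1.0)

# Hypothetical posterior means and variances of source activations,
# one row per source, one column per sample.
post_mean = np.array([[0.9, -1.2, 0.8],
                      [0.01, -0.02, 0.00]])
post_var = np.array([[0.10, 0.12, 0.09],
                     [0.99, 1.01, 1.00]])

# A source whose posterior has reverted to the prior costs ~0 nats
# and has in effect been shut down by the learning.
cost_per_source = kl_gauss(post_mean, post_var).sum(axis=1)
active = cost_per_source > 1e-2   # threshold is illustrative
print(cost_per_source, active)
```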
The hierarchical nonlinear variance model was shown to be able to learn the structure of the underlying data generating process. In most real cases the generative process can be expected to be more complex, but the same model structure can handle a variety of different cases. In some preliminary experiments we have conducted on image data, second-level sources resembling complex cells [11,12] have emerged. However, so far we have used fully connected mappings, which seem to discourage the formation of complex-cell-like sources, since each such source typically models the variance of only a small number of the lower-level sources.
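To make the structure concrete, the sketch below draws data from a two-level variance model of the kind described above; all names, dimensions, and noise levels are illustrative assumptions, not the setup used in the experiments. Upper-level sources drive, through a mapping B, the log-variances of the lower-level sources (the variance neurons), and the lower-level sources are mixed by A into the observations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: n_u upper sources, n_s lower sources,
# n_x observed dimensions, T samples.
n_u, n_s, n_x, T = 3, 10, 20, 1000

B = rng.normal(0, 1, size=(n_s, n_u))  # maps upper sources to log-variances
A = rng.normal(0, 1, size=(n_x, n_s))  # mixes lower sources into observations

u = rng.normal(0, 1, size=(n_u, T))               # upper-level sources
log_var = B @ u                                   # variance neurons
s = rng.normal(0, 1, size=(n_s, T)) * np.exp(0.5 * log_var)  # lower sources
x = A @ s + 0.1 * rng.normal(0, 1, size=(n_x, T))            # observations
```

Making B sparse rather than fully connected would let each upper-level source control the variance of only a few lower-level sources, in line with the remark above.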
Externally, the variance neurons appear like any other Gaussian nodes. It is therefore easy to build, for instance, dynamic models for the variance. Such models can be expected to be useful in many domains; for example, volatility in financial markets is known to exhibit temporal autocorrelations.
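As an illustration, a dynamic variance model is obtained simply by giving the variance neuron an autoregressive prior over time, which yields the familiar stochastic-volatility pattern of clustered variance. The coefficients below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 2000
a, q = 0.98, 0.05   # assumed AR coefficient and innovation std of the log-variance

log_var = np.zeros(T)
x = np.zeros(T)
for t in range(1, T):
    # Variance neuron with an AR(1) dynamic prior on its mean
    log_var[t] = a * log_var[t - 1] + q * rng.normal()
    # Observation drawn from the Gaussian node whose variance it controls
    x[t] = rng.normal(0.0, np.exp(0.5 * log_var[t]))
```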
The scope of this paper was restricted to models with purely local computation. In some cases it may be necessary to use models in which a group of simple elements is treated as a single element whose external computations are local but whose internal computations may be more complex. The elements in Figure 3, for instance, can be grouped as in [2].