
Computational Complexity

Most of the computation in the forward phase is spent in (38), and the gradients of (38) likewise account for most of the computation in the backward phase. We previously tried assuming that the outputs of the hidden neurons are independent a posteriori, which obviates the need for equation (38) because (37) can then be replaced by an equation similar to (31); simulations showed, however, that this assumption is too inaccurate. The computational complexity of the algorithm is proportional to IJKT, where I, J, K and T denote the source dimension, the number of hidden neurons, the number of outputs and the number of observation vectors, respectively. In a typical case K > I, so the computation in the second layer dominates, and the overall complexity is higher than that of ordinary back-propagation by a factor of I.
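To make the origin of the extra factor I concrete, the following is a minimal sketch of the kind of computation involved: posterior means and variances of the sources are propagated through a two-layer MLP with a first-order Taylor approximation, which is roughly the role played by (38). The names (A, B, phi, dphi) and the exact formulas are illustrative assumptions, not the paper's implementation, and the actual algorithm tracks further terms; the per-layer costs noted in the comments nevertheless show why the second-layer Jacobian contraction costs O(IJKT), a factor I more than the O(JKT) of ordinary back-propagation.

    import numpy as np

    def forward_sketch(s_mean, s_var, A, B, phi, dphi):
        """Propagate source means and variances through a two-layer MLP.

        Illustrative first-order sketch: s_mean and s_var are (T, I),
        A is the first-layer weight matrix (J, I), B the second-layer
        weight matrix (K, J), phi the hidden nonlinearity, dphi its
        derivative. Posterior independence of the sources is assumed.
        """
        # First layer: means of the hidden activations, cost O(IJT).
        h_in = s_mean @ A.T                           # (T, J)
        h_mean = phi(h_in)

        # Jacobian of hidden layer w.r.t. sources, cost O(IJT).
        Jh = dphi(h_in)[:, :, None] * A[None, :, :]   # (T, J, I)

        # Second layer: output means, cost O(JKT) as in back-propagation.
        x_mean = h_mean @ B.T                         # (T, K)

        # Jacobian of outputs w.r.t. sources, cost O(IJKT).
        # This contraction dominates and is a factor I more expensive
        # than the O(JKT) second layer of ordinary back-propagation.
        Jx = np.einsum('kj,tji->tki', B, Jh)          # (T, K, I)

        # Output variances from independent source variances, cost O(IKT).
        x_var = np.einsum('tki,ti->tk', Jx**2, s_var)
        return x_mean, x_var

With, for instance, phi = np.tanh and dphi = lambda z: 1 - np.tanh(z)**2, the sketch runs on random matrices of the stated shapes, and timing it for growing K against a plain forward pass reproduces the factor-I gap described above.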


