The phases of the learning procedure are explained in an earlier chapter. The first-layer mixing matrix was initialised with the FastICA algorithm [28]. The initialisation procedure is quite similar to the one with VQ. The results with ICA are presented here, since they were somewhat better than those obtained with VQ. Future work should include a more careful comparison of different initialisation methods. As ICA is symmetric with respect to positive and negative values, the mixing matrix was doubled to include the negated version of each component, as can be seen in the figure below. The sources were updated for 100 sweeps. The reconstruction error was then fed to ICA again, now also including the variance sources as described in an earlier subsection. This results in some additional neurons on the second layer, which can also be seen in the figure below.
[Figure: the doubled first-layer mixing matrix after ICA initialisation, and the second-layer neurons added after the second ICA pass.]
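What this initialisation step amounts to can be sketched with scikit-learn's FastICA. The data matrix `X`, its dimensions and the number of components below are all hypothetical; only the doubling of the mixing matrix follows the text:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical data matrix: one row per observation.
X = np.random.randn(1000, 20)

# Estimate independent components; mixing_ maps sources back to the data.
ica = FastICA(n_components=10, random_state=0)
S = ica.fit_transform(X)      # estimated sources, shape (1000, 10)
A = ica.mixing_               # estimated mixing matrix, shape (20, 10)

# ICA cannot distinguish a component from its negation, so both signs are
# included; later learning can keep whichever version turns out to be
# useful, and the pruning step drops the rest.
A_init = np.hstack([A, -A])   # doubled mixing matrix, shape (20, 20)
S_init = np.hstack([S, -S])   # sources doubled to match
```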
The sources were updated for one hundred sweeps and the least useful ones were removed, after which the second layer had 210 neurons. The sources were updated until sweep 500, when the sources $\mathbf{s}_2$ and variance sources $\mathbf{u}_2$ of the second layer were fed to ICA once again to obtain initial values for $\mathbf{A}_2$ and $\mathbf{B}_2$, shown in the figure below. The new sources on the third layer were updated for 200 sweeps, during which the second-layer sources were updated every fifth sweep.
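The text does not spell out how the usefulness of a source is measured. A plausible proxy, used only for illustration here, is the energy a source contributes to the reconstruction (outgoing weight norm times mean squared activation); the function and its names are hypothetical:

```python
import numpy as np

def prune_sources(A, S, n_keep):
    """Keep the n_keep sources that contribute the most energy.

    A: mixing matrix, shape (data_dim, n_sources)
    S: source activations, shape (n_samples, n_sources)
    The usefulness measure is an assumption; the actual method could
    instead rank sources by their contribution to the cost function.
    """
    energy = np.sum(A**2, axis=0) * np.mean(S**2, axis=0)
    keep = np.sort(np.argsort(energy)[-n_keep:])  # preserve original order
    return A[:, keep], S[:, keep]

# e.g. reducing the second layer to the 210 most useful neurons:
# A2, s2 = prune_sources(A2, s2, n_keep=210)
```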
The sources on the second layer are ordered for visualisation purposes based on the connections from the third layer. Each dimension of the means of the weights $\mathbf{A}_2$ and $\mathbf{B}_2$ is scaled to zero mean and unit variance and fed to the self-organising map (SOM) [39]. The patches are then organised close to their best matching unit in the SOM.
[Figure: the second-layer sources after initialisation of the third layer, ordered by the SOM.]
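This ordering can be reproduced roughly with the third-party minisom package. The grid size (15 × 14 = 210 units, one per neuron) and the SOM parameters below are assumptions; the per-dimension standardisation follows the text:

```python
import numpy as np
from minisom import MiniSom  # third-party SOM implementation

def som_order(A2_mean, B2_mean, grid=(15, 14)):
    """Place each second-layer neuron on a SOM grid for visualisation.

    A2_mean, B2_mean: posterior means of the weights, one row per
    second-layer neuron. Each feature dimension is standardised to
    zero mean and unit variance before training the map.
    """
    feats = np.hstack([A2_mean, B2_mean]).astype(float)
    feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)

    som = MiniSom(grid[0], grid[1], feats.shape[1],
                  sigma=1.5, learning_rate=0.5, random_seed=0)
    som.train_random(feats, num_iteration=5000)

    # the best matching unit of each neuron is its position on the grid
    return [som.winner(f) for f in feats]
```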
Next, the weights were also released to be updated. The second layer was ``kept alive'' for 1500 sweeps. By sweep 1000 the algorithm had already simplified the model by killing neurons. The ``dead'' neurons are removed, and everything is updated without using the special states until the final results, shown in the figure below, are reached at sweep 6000.
[Figure: the final results at sweep 6000.]
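The criterion by which a neuron counts as dead is not spelled out here. One natural variational criterion, sketched below under the assumption of a unit-Gaussian prior on the sources, is that a source's posterior has fallen back to its prior, so its KL term in the cost is negligible; the threshold is likewise an assumption:

```python
import numpy as np

def kl_to_unit_gaussian(mean, var):
    """KL( N(mean, var) || N(0, 1) ), summed over the samples of each source."""
    return 0.5 * np.sum(var + mean**2 - np.log(var) - 1.0, axis=0)

def remove_dead_neurons(mean, var, A, tol=1e-2):
    """Drop sources whose posterior has collapsed back to the prior.

    mean, var: posterior means/variances of the sources, shape
    (n_samples, n_sources); A: the corresponding mixing matrix.
    The unit-Gaussian prior and the tolerance are assumptions made
    for this illustration only.
    """
    alive = kl_to_unit_gaussian(mean, var) > tol
    return mean[:, alive], var[:, alive], A[:, alive]
```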