
Summary of the algorithm

We shall now summarise the algorithm. Figure 3.6 schematically shows the structure of the network. The network has a layer of neurons which compute projections $p_i$ of the input $\mathbf{x}$ in the direction of their weight vectors $\mathbf{w}_i$:

$$p_i = \mathbf{w}_i^T \mathbf{x}$$

The weight vectors are normalised to unity.
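For illustration, a minimal NumPy sketch of this projection step is given below; the array names `W` (one weight vector per row) and `x` are our own, and the row normalisation simply enforces the unit-length constraint stated above.

```python
import numpy as np

def projections(W, x):
    """Compute the projections p_i = w_i^T x, where the rows of W
    are the weight vectors w_i and x is the input vector."""
    # Normalise each weight vector to unit length, as the text requires.
    W = W / np.linalg.norm(W, axis=1, keepdims=True)
    return W @ x
```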

Figure 3.6: The network has a layer of linear neurons and a mechanism which assigns a winning strength $c_i$ to each neuron.

The efficiency of the algorithm rests on selecting a set of winners for each input. The selection is based on the winning ratios $r_{ij}$ defined in equation 3.6.

The ratios are combined according to equation 3.7, which has to be solved numerically since the preliminary winning strengths $\tilde{c}_i$ appear on both sides of the equation. On the other hand, the computation needs to be done only for the set of winners. The preliminary winning strengths $\tilde{c}_i$ have been derived with the following competition mechanism in mind: the stronger the correlation $\mathbf{w}_i^T \mathbf{w}_j$ between the weight vectors of neurons $i$ and $j$, the stronger the competition between them, and if $\mathbf{w}_i^T \mathbf{w}_j = 0$, there is no mutual competition between neurons $i$ and $j$.
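Because equation 3.7 is not reproduced in this summary, the following Python sketch shows only the generic shape of such a computation: a fixed-point iteration restricted to the winner set, in which the hypothetical function `eq_3_7_rhs` stands in for the actual right-hand side of equation 3.7.

```python
import numpy as np

def solve_preliminary_strengths(eq_3_7_rhs, winners, p, tol=1e-6, max_iter=100):
    """Solve an implicit equation c = f(c) by fixed-point iteration.
    Only the neurons in the winner set are involved, which keeps the
    numerical solution cheap."""
    c = np.ones(len(winners))                # initial guess for the winners
    for _ in range(max_iter):
        c_new = eq_3_7_rhs(c, winners, p)    # hypothetical: right-hand side of eq. 3.7
        if np.max(np.abs(c_new - c)) < tol:  # stop once the strengths settle
            break
        c = c_new
    return c
```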

Equations 3.10, 3.11 and 3.12 are used to compute the final winning strengths $c_i$ from the preliminary winning strengths $\tilde{c}_i$. The final outputs $y_i$ are obtained by multiplying the projections by the winning strengths:

$$y_i = c_i p_i$$

These equations have been chosen so that if we use the following reconstruction mapping for the input,

$$\hat{\mathbf{x}} = \sum_i y_i \mathbf{w}_i,$$

then the reconstruction error is close to its minimum.
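The whole forward pass can then be sketched as follows; `final_strengths` is a hypothetical placeholder for equations 3.10-3.12, which are likewise not reproduced in this summary.

```python
import numpy as np

def forward_and_reconstruct(W, x, final_strengths):
    """Forward pass (projections -> winning strengths -> outputs),
    followed by the reconstruction x_hat = sum_i y_i w_i."""
    p = W @ x                        # projections p_i = w_i^T x
    c = final_strengths(p)           # hypothetical: equations 3.10-3.12
    y = c * p                        # outputs y_i = c_i p_i
    x_hat = W.T @ y                  # reconstruction of the input
    error = np.sum((x - x_hat) ** 2) # squared reconstruction error
    return y, x_hat, error
```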

The learning rule for the network combines minimisation of the reconstruction error with the use of neighbourhoods. It is advisable to use the batch version of the learning rule (equation 3.18), because computing the correlations $\mathbf{w}_i^T \mathbf{w}_j$ between the weight vectors is a rather heavy operation.
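Equation 3.18 itself is not shown in this summary, so the sketch below only illustrates the batch structure: updates are accumulated over the whole batch, and the weights, and hence their mutual correlations, change only once per sweep. The gradient-style step $\Delta\mathbf{w}_i \propto y_i(\mathbf{x} - \hat{\mathbf{x}})$ and the learning rate `eta` are assumptions, not the exact rule of equation 3.18.

```python
import numpy as np

def batch_update(W, X, final_strengths, eta=0.01):
    """One batch sweep: accumulate reconstruction-error updates over
    all inputs, apply them once, and renormalise the weight vectors.
    The correlations w_i^T w_j therefore need recomputing only once
    per batch, which is the motivation given for batch learning."""
    dW = np.zeros_like(W)
    for x in X:                              # X: one input vector per row
        p = W @ x
        c = final_strengths(p)               # hypothetical: equations 3.10-3.12
        y = c * p
        x_hat = W.T @ y
        dW += eta * np.outer(y, x - x_hat)   # assumed gradient-style step
    W = W + dW
    # Re-impose the unit-length constraint after the batch update.
    return W / np.linalg.norm(W, axis=1, keepdims=True)
```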


