
Extensions of the algorithm

In the proposed network, the neurons first compute the projections of the input onto their weight vectors, and only these projections are then used to compute the final outputs. The network is therefore invariant under orthogonal transformations: if both the input and the weight vectors are rotated by an orthogonal matrix $A$ (for which $A^T A = I$), the projections and the correlations between the weight vectors remain unchanged, and hence so do the outputs. This property allows us to reduce the dimensionality of the input with a linear PCA network.
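As a concrete check of this invariance, the following sketch verifies it numerically. It assumes only that the first stage computes the projections $p_i = w_i^T x$, i.e. the matrix product $Wx$; the rest of the network is not modelled, since the text states that the outputs depend only on the projections and the weight-vector correlations.

# Minimal numerical sketch of the invariance argument. Assumption: the
# first stage computes p_i = w_i^T x. Only the quantities the outputs
# depend on -- projections and weight-vector correlations -- are checked
# before and after an orthogonal rotation.
import numpy as np

rng = np.random.default_rng(0)
dim, neurons = 5, 3

W = rng.standard_normal((neurons, dim))   # weight vectors as rows
x = rng.standard_normal(dim)              # an input vector

# Random orthogonal matrix Q (Q^T Q = I) from a QR decomposition.
Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))

# Rotate both the input and the weight vectors.
W_rot = W @ Q.T
x_rot = Q @ x

# The projections p_i = w_i^T x are unchanged ...
assert np.allclose(W @ x, W_rot @ x_rot)
# ... and so are the correlations w_i^T w_j between weight vectors,
assert np.allclose(W @ W.T, W_rot @ W_rot.T)
# hence the outputs, computed from these quantities alone, are unchanged.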

If the input is represented in spherical coordinates $(r, \theta)$, the winning strengths are functions of $\theta$ only. Since the projections scale linearly with the radius while the winning strengths do not depend on it, the outputs are linear functions of $r$ and continuous, nonlinear, unimodal functions of $\theta$. It is possible to make the outputs depend nonlinearly on the radius $r$ by writing $f(p)$ in place of the projection $p$ in equation 3.21. This might be desirable if we were to model responses on the cortex. For example, it has been found that in the primary visual cortex the response of cells to an oriented line is more or less the same whether the line is dim or bright [Sirosh, 1995, page 99]. Thus, in our terms, the responses of the neurons depend on the winning strengths only, and $f(p)$ saturates to an approximately constant value.
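A minimal sketch of this substitution follows, assuming a saturating choice such as $f(p) = \tanh(p)$ (an illustrative assumption; the text does not fix a particular $f$). Writing the input as $x = r\,u(\theta)$, the projection onto a unit weight vector is $p = r \cos\theta$, linear in $r$; replacing $p$ by $f(p)$ makes the response saturate in $r$.

# Illustrative only: f = tanh is an assumed saturating nonlinearity, and
# the winning-strength factor (a function of the angle only) is left out,
# since the substitution p -> f(p) affects only the radial dependence.
import numpy as np

def radial_response(r, cos_theta, f=np.tanh):
    p = r * cos_theta          # the projection is linear in the radius r
    return f(p)                # f(p) saturates, so the response does too

for r in (0.5, 1.0, 5.0, 50.0):
    print(f"r = {r:5.1f}  response = {radial_response(r, cos_theta=0.8):.4f}")
# As r grows, the response approaches the asymptote of tanh (here 1), so
# the output becomes roughly independent of the input's length -- the
# analogue of cortical responses being the same for dim and bright lines.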


