In the proposed network, the neurons first compute the projections of the input on their weight vectors, and only these projections are then used to compute the final outputs. The network is therefore invariant under orthogonal transformations. If both the input and the weight vectors are rotated with an orthogonal matrix $\mathbf{A}$ (for which $\mathbf{A}^T \mathbf{A} = \mathbf{I}$), both the projections and the correlations between the weight vectors remain unchanged, and thus the outputs remain unchanged as well. This property allows us to reduce the dimensionality of the input with a linear PCA network.
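The invariance argument can be checked numerically. The sketch below (a minimal illustration, not the network itself; the dimensions and the weight matrix are arbitrary) rotates both the input and the weight vectors with a random orthogonal matrix and verifies that the projections, and the correlations between the weight vectors, are unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

dim, n_neurons = 5, 4
x = rng.normal(size=dim)               # input vector
W = rng.normal(size=(n_neurons, dim))  # one weight vector per row

# A random orthogonal matrix A: the Q factor of a QR decomposition
# satisfies A^T A = I.
A, _ = np.linalg.qr(rng.normal(size=(dim, dim)))

# Rotate both the input and the weight vectors with A.
x_rot = A @ x
W_rot = W @ A.T        # each row w becomes A w

# Projections of the input on the weight vectors are unchanged,
# because W A^T A x = W x.
print(np.allclose(W @ x, W_rot @ x_rot))        # → True

# Correlations between weight vectors are unchanged as well:
# W A^T (W A^T)^T = W W^T.
print(np.allclose(W @ W.T, W_rot @ W_rot.T))    # → True
```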
If the input is represented in spherical coordinates $(r, \boldsymbol{\theta})$, the winning strengths are functions of the angles $\boldsymbol{\theta}$ only. The outputs are thus linear functions of the radius $r$ and continuous, unimodal nonlinear functions of $\boldsymbol{\theta}$. It is possible to make the outputs depend nonlinearly on the radius $r$ by replacing the projection $p$ with a nonlinear function $f(p)$ in equation 3.21. This might be desirable if we were to model the responses of the cortex. For example, it has been found that in the primary visual cortex the response of cells to an oriented line is more or less the same whether the line is dim or bright [Sirosh, 1995, page 99]. In our terms, then, the responses of the neurons depend on the winning strengths only, and $f(p)$ saturates to an approximately constant value.
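The effect of a saturating $f(p)$ can be sketched as follows. Since equation 3.21 is not reproduced here, the sketch only assumes that the output depends on the projection through $f(p)$; the choice $f(p) = \tanh(p/\sigma)$ and the scale $\sigma$ are hypothetical, chosen merely to exhibit saturation. Two inputs with the same direction (hence the same winning strengths) but different intensities then produce nearly identical responses.

```python
import numpy as np

def f(p, scale=0.2):
    """Hypothetical saturating nonlinearity applied to the projection p."""
    return np.tanh(p / scale)

# Two inputs sharing the same direction (same angles, so the same
# winning strengths) but differing in intensity: "dim" vs. "bright".
direction = np.array([0.6, 0.8])
w = np.array([0.6, 0.8])           # weight vector aligned with the input

p_dim = w @ (1.0 * direction)      # projection of the dim input
p_bright = w @ (5.0 * direction)   # projection of the bright input

# After saturation the two responses are nearly equal, so the output
# is determined mainly by the winning strengths (the angular part).
print(f(p_dim), f(p_bright))
```

With a saturating $f$, the radial (intensity) information is suppressed and only the angular information, carried by the winning strengths, shapes the response, matching the dim-versus-bright observation cited above.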