
The outputs of the network

 

The neurons compute the projection of the input in the direction of their weight vectors. The output of a neuron before the winning-strength assignment is therefore the cosine of the angle between the input and the weight vector multiplied by the norm of the input vector. The outputs are limited to be positive, so after the winning-strength assignment the output of a single neuron looks like a half-wave rectified sinusoid as a function of the input angle. When there is more than one neuron, the neurons compete for the input. In general, the closer the weight vectors of two neurons are, the more strongly the neurons compete with each other. If there are two sets of neurons with mutually orthogonal weight vectors, their winning strengths are independent, and the system can therefore convey independent information in parallel.
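This computation can be sketched in code. The excerpt does not give the exact formula of the winning-strength assignment, so the NumPy sketch below only assumes one plausible form consistent with the properties described above: the rectified projections are raised to the power of the inhibition parameter and normalised by an overlap-weighted sum, so that neurons with orthogonal weight vectors do not inhibit each other. The function name network_outputs, the parameter name n and the angles in the example are illustrative, not the thesis's notation.

import numpy as np

def network_outputs(x, W, n=1.0):
    """Outputs y of the competitive layer for input x.

    W holds unit-length weight vectors as rows; n is the inhibition
    parameter.  ASSUMPTION: the winning strengths are the rectified
    projections raised to the power n, normalised by an overlap-weighted
    sum -- one form consistent with the text, not necessarily the
    thesis's exact formula.
    """
    o = np.maximum(W @ x, 0.0)                 # projections, limited to be positive
    t = o ** n
    denom = np.maximum(W @ W.T, 0.0) @ t       # inhibition weighted by weight-vector overlap
    xi = np.divide(t, denom, out=np.zeros_like(t), where=denom > 0)  # winning strengths
    # xi depends only on the direction of x, so y scales linearly with its length
    return xi * o

# Three evenly spaced weight vectors in 2 dimensions, as in Figure 4.3
# (the angles here are illustrative), probed with a unit-length input.
angles = np.deg2rad([-45.0, 0.0, 45.0])
W = np.stack([np.cos(angles), np.sin(angles)], axis=1)
x = np.array([np.cos(np.deg2rad(10.0)), np.sin(np.deg2rad(10.0))])
print(network_outputs(x, W, n=1.0))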

Figure 4.3 shows the shape of y when the input is 2-dimensional and there are three evenly spaced weight vectors. A notable property of the winning strengths is that they depend only on the direction of the input, not on its amplitude. The outputs y therefore depend linearly on the length of the input. In the figure the input vector has been normalised to unit length, but the shape of the outputs would remain the same for other input lengths. In this test the parameter which governs the amount of inhibition was set to one. When the weight vectors are orthogonal there is no competition, and the neurons work completely independently. As the weight vectors become more parallel, the amount of competition increases, which sharpens the outputs.

Figure 4.3: The response of the middle neuron becomes more sharply tuned due to competition when the adjacent neurons come closer. The x-axis shows the angle between the input and the middle neuron's weight vector. The positions of the weight vectors are marked with `*'. At the top the weight vectors are orthogonal and there is no competition; in the middle and at the bottom the angles between the weight vectors are progressively smaller. The inhibition parameter was set to one.

Figure 4.4 shows the effect of the inhibition parameter. The larger the parameter, the stronger the inhibition; as it approaches infinity, the winning strengths become binary. The figure shows that increasing the parameter makes the transition between winners sharper.

Figure 4.4: The transition between winners becomes sharper as the parameter which governs the amount of inhibition increases. The three panels use three different values of the inhibition parameter, increasing from top to bottom; the middle panel uses the same value as Figure 4.3. The angle between the weight vectors was the same as in the middle panel of Figure 4.3.
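Under the same assumed winning-strength form as in the sketch above, the sharpening effect of the inhibition parameter can be checked numerically for two competing neurons; the angles and parameter values below are illustrative.

import numpy as np

# Two unit weight vectors 30 degrees apart and an input between them,
# slightly closer to the first one (the winning-strength form and the
# angles are assumptions, as in the earlier sketch).
w1 = np.array([1.0, 0.0])
w2 = np.array([np.cos(np.pi / 6), np.sin(np.pi / 6)])
x = np.array([np.cos(np.pi / 18), np.sin(np.pi / 18)])    # input at 10 degrees

o = np.array([max(w1 @ x, 0.0), max(w2 @ x, 0.0)])        # rectified projections
c = max(w1 @ w2, 0.0)                                     # weight-vector overlap

for n in (1.0, 4.0, 16.0, 64.0):
    t = o ** n
    xi = np.array([t[0] / (t[0] + c * t[1]),               # winning strengths
                   t[1] / (c * t[0] + t[1])])
    print(n, np.round(xi, 3))
# As n grows, the winning strength of the closer neuron approaches one and
# that of the other approaches zero, i.e. the competition tends towards
# winner-take-all.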

Figure 4.5 shows the shape of y when the input space is 3-dimensional. The weight vectors are nearly parallel and organised on a 2-dimensional lattice. The figure shows that the properties demonstrated in two dimensions generalise to higher dimensions, and again that increasing the inhibition parameter makes the transition between winners sharper.

Figure 4.5: Nine nearly parallel weight vectors organised on a 2-dimensional lattice (three closely spaced angles along both the x- and y-axis). The plots show the output y of the neuron in the middle of the lattice. The top panel uses the same value of the inhibition parameter as Figure 4.3, and the bottom panel a larger value.

Figure 4.6 shows what happens when there are several mutually orthogonal sets of nearly parallel weight vectors. Competition occurs only within a set, so the system consists of several independent parts and can process information in parallel. The competition inside each set makes the responses sharply tuned and results in a sparse code.

Figure 4.6: Outputs of a network with 32 neurons. There are four sets (A, B, C and D) of eight (1-8) nearly parallel weight vectors. Weight vectors belonging to different sets are orthogonal, so the network has four independently functioning parts. The inputs were superpositions of four orthogonal vectors, one weight vector from each set. The outputs show that there is no cross-talk between the orthogonal sets. Inside each set the neurons compete with each other, and since the input is a sum of weight vectors the outputs are binary. (If an input had fallen between two weight vectors, the outputs would have been graded as in Figures 4.3-4.5.)
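A reduced version of this setting can be sketched with the assumed winning-strength form used above, here with two orthogonal sets of three weight vectors instead of the figure's four sets of eight. The construction and parameter values are illustrative; with this simple one-shot formula the within-set responses are only sharpened, not fully binary as in Figure 4.6, while the absence of cross-talk between orthogonal sets is exact.

import numpy as np

def outputs(x, W, n=8.0):
    """Same assumed winning-strength form as in the earlier sketches."""
    o = np.maximum(W @ x, 0.0)
    t = o ** n
    denom = np.maximum(W @ W.T, 0.0) @ t
    xi = np.divide(t, denom, out=np.zeros_like(t), where=denom > 0)
    return xi * o

def fan(a, b, degrees):
    """Unit vectors tilted from axis a towards axis b by the given angles."""
    r = np.deg2rad(np.asarray(degrees, dtype=float))
    return np.outer(np.cos(r), a) + np.outer(np.sin(r), b)

e1, e2, e3, e4 = np.eye(4)
# Two sets of three closely spaced weight vectors spanning orthogonal
# planes (a smaller stand-in for the four sets of eight in Figure 4.6).
W = np.vstack([fan(e1, e2, [-15, 0, 15]),    # set A: neurons A1-A3
               fan(e3, e4, [-15, 0, 15])])   # set B: neurons B1-B3

xa = W[0] + W[4]   # superposition of A1 and B2
xb = W[0] + W[5]   # superposition of A1 and B3
print(np.round(outputs(xa, W), 2))
print(np.round(outputs(xb, W), 2))
# The set-A responses are identical in both cases, because the orthogonal
# set B does not influence them (no cross-talk), and within each set the
# neuron whose weight vector is present in the input responds most strongly.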


