We shall now return to the reconstruction mapping. We derived the winning ratios by starting with two neurons and the reconstruction mapping in equation 3.3. The optimal outputs were then defined by equation 2.1, and the relation between the winning strength and the output y was defined in equation 3.5. The resulting reconstruction mapping is the one given in equation 3.8.
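Equation 3.8 itself is not reproduced here. Judging from its later description as a simple linear reconstruction mapping, it presumably has the form

\[ \hat{\mathbf{x}} = \sum_i y_i \mathbf{w}_i , \]

where $\mathbf{w}_i$ is the weight vector of neuron $i$, $y_i$ its output and $\hat{\mathbf{x}}$ the reconstruction of the input $\mathbf{x}$; both this form and the notation are assumptions made here purely for illustration.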
There are no guarantees, however, that the winning strengths defined in equation 3.7 could be used in place of the optimal outputs. There is no reason why the reconstruction defined in equation 3.8 would be even close to optimal. It turns out, however, that we are already quite close and can get even closer by making a few corrections to the winning strengths. The corrections are such that they approximately preserve the relations between the winning strengths, but correct their overall magnitudes so that we get an agreement with the reconstruction mapping in equation 3.8.
Again it should be emphasised that we want to achieve agreement with the reconstruction mapping only in order to be able to derive the learning rule and to make sure that the outputs contain information about the inputs. For these purposes even an approximation will suffice.
We would now like to find a mapping from the preliminary winning strengths to the outputs. It should be such that equation 3.8 yields a fairly good reconstruction. We shall make two corrections to the preliminary winning strengths. We obtain the first one by considering n neurons which have nearly parallel weight vectors and which are equally far from the input, that is, neurons whose weight vectors and whose distances to the input are approximately the same for all i and j. Then it follows from the definition of the winning ratios in equation 3.6 that all the winning ratios are equal to one. By symmetry, the solution to equation 3.7 must then assign the same winning strength to every neuron. From these initial conditions we get equation 3.9.
On the other hand, we know that for nearly parallel weight vectors the sum of the outputs should be near to one in order to minimise the reconstruction error in equation 2.1 if we are using the reconstruction mapping in equation 3.8. Thus the optimal solution is that each output equals 1/n. Solving for 1/n from equation 3.9 gives the first correction to the preliminary winning strengths.
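To see why 1/n is the natural value, note that for n nearly parallel weight vectors the assumed linear reconstruction collapses onto a single direction, so that in the illustrative notation introduced above

\[
\hat{\mathbf{x}} = \sum_{i=1}^{n} y_i \mathbf{w}_i \approx \Big(\sum_{i=1}^{n} y_i\Big)\mathbf{w},
\qquad
\arg\min_{s}\,\big\|\mathbf{x} - s\,\mathbf{w}\big\|^{2} = \frac{\mathbf{w}^{T}\mathbf{x}}{\|\mathbf{w}\|^{2}} .
\]

The outputs should therefore sum to $\mathbf{w}^{T}\mathbf{x}/\|\mathbf{w}\|^{2}$, which is close to one when the common weight vector is close to the input, and by symmetry each output is then approximately 1/n.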
After the first correction the winning strengths are those obtained by applying the correction function defined in equation 3.10 to the preliminary winning strengths. The correction function is monotonically increasing and will therefore leave the order of the outputs unchanged, although the relations between them may change.
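The order-preservation claim is a direct property of monotonicity: writing $f$ for the correction function of equation 3.10 and $\tilde{y}_i$ for the preliminary winning strengths (symbols introduced here only for illustration),

\[
\tilde{y}_i > \tilde{y}_j \;\Longrightarrow\; f(\tilde{y}_i) > f(\tilde{y}_j),
\]

while the ratios $f(\tilde{y}_i)/f(\tilde{y}_j)$ need not equal $\tilde{y}_i/\tilde{y}_j$, which is why the relations may change.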
We get the second correction when we notice that if the winning strengths were optimal in terms of equation 3.8, then the quantity defined in equation 3.11
should equal one. If it does not, it means that the winning strengths should be multiplied by this ratio to make equation 3.8 hold. However, for those neurons j that contribute most to the reconstruction, the correlation with the weight vector of neuron i is biggest, which means that for them the ratio is approximately the same as for neuron i. We shall therefore make the correction by this ratio to the neuron i only.
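Equations 3.11 and 3.12 are not shown above, but a condition of the kind described follows from the assumed linear reconstruction: if the outputs minimised the reconstruction error of equation 2.1, the derivative with respect to each $y_i$ would vanish, so that (whenever $\mathbf{w}_i^{T}\hat{\mathbf{x}} \neq 0$)

\[
\mathbf{w}_i^{T}\Big(\mathbf{x} - \sum_j y_j \mathbf{w}_j\Big) = 0
\quad\Longleftrightarrow\quad
\frac{\mathbf{w}_i^{T}\mathbf{x}}{\mathbf{w}_i^{T}\hat{\mathbf{x}}} = 1 .
\]

A ratio of this form is a natural candidate for the multiplicative correction applied to neuron i; this is our reconstruction of what equations 3.11 and 3.12 plausibly contain, not a quotation of them.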
Equation 3.12 now gives the final corrected winning strengths. The last corrections to the magnitudes of the winning strengths are not globally uniform, because the ratios may vary in different directions. The corrections are locally uniform, however: for neurons with similar weight vectors the ratios are also similar, and therefore the last correction preserves the local relations between the winning strengths.
If equations 3.11 and 3.12 were iterated, the solution would approach the optimal solution of equation 3.8. We do not want to do this, however, since the simple linear reconstruction mapping does not promote sparsity in any way. We have obtained a sparse coding by using equation 3.7 and we do not want to lose it. Even a single application of equation 3.12 suffices to bring the outputs into satisfactory agreement with equation 3.8.
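As a minimal numerical sketch of this last point, the code below assumes the linear reconstruction and the multiplicative ratio correction written out above; the forms of the equations and all names in the code are our assumptions, not the paper's. It only illustrates that iterating such a correction does not increase the reconstruction error of equation 2.1 and in practice reduces it rapidly, with a single application typically removing most of the error already.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 20 neurons with non-negative weight vectors
# in a 10-dimensional input space (nothing here is taken from the paper).
n_neurons, dim = 20, 10
W = rng.random((n_neurons, dim))    # row i is the weight vector of neuron i
x = rng.random(dim)                 # input vector

def reconstruction(y):
    # Assumed linear reconstruction mapping: x_hat = sum_i y_i w_i.
    return W.T @ y

def correct(y):
    # Assumed multiplicative correction: scale each winning strength by the
    # ratio of its correlation with the input to its correlation with the
    # current reconstruction (the ratio is one when the outputs are optimal).
    x_hat = reconstruction(y)
    return y * (W @ x) / (W @ x_hat + 1e-12)

# Sparse, non-negative initial winning strengths: only three neurons active.
y = np.zeros(n_neurons)
y[rng.choice(n_neurons, size=3, replace=False)] = rng.random(3)

for step in range(10):
    err = np.sum((x - reconstruction(y)) ** 2)
    print(f"step {step}: squared reconstruction error {err:.4f}")
    y = correct(y)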