
Computational complexity

Suppose we have a network of N neurons, each with M inputs. Suppose further that the average size of the set of winners is W and that we make 10 rounds in the fixed point iteration of $\tilde{y}$. For a single input, the computation of the activations $y$ takes roughly $NM$ operations. The initial values for the fixed point iteration have to be computed for the set of winners only, so this takes on the order of $NW$ operations. The fixed point iteration of $\tilde{y}$ itself takes on the order of $10W^2$ operations. The computation of the final outputs from $\tilde{y}$ is negligible compared with the time used in the iteration. The adaptation of the weight vectors also needs to be done for the set of winners only; it takes $WM$ operations, which is negligible compared with the $NM$ term. Computing the correlations $c_{ij}$ between the weight vectors takes $N^2M$ operations, but as this is done only once per batch, the cost per input is $N^2M/B$ for batch size $B$; if the batch is much larger than N, this is again negligible.
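The cost structure can be made concrete with a short sketch. The Python code below is a minimal illustration only: the activation rule, the winner selection, and the damped fixed-point update are assumptions standing in for the actual equations of the algorithm, and the names train_step, correlations, n_winners and lr are hypothetical. The comments mark where each operation count from the analysis arises.

import numpy as np

def correlations(weights):
    # Correlations c_ij between the weight vectors: about N^2 * M
    # operations, recomputed only once per batch.
    wn = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    return wn @ wn.T

def train_step(x, weights, c, n_winners, n_iter=10, lr=0.01):
    # One step for a single input. The update rules are illustrative
    # assumptions, not the exact equations of the algorithm.
    N, M = weights.shape

    # Activations of all N neurons: roughly N * M operations.
    y = weights @ x

    # The set of winners: here simply the largest activations.
    winners = np.argsort(y)[-n_winners:]

    # Quantity needed for the winners only, each entry summing over
    # all N neurons: on the order of N * W operations.
    y0 = y[winners] - (c[winners] @ y) / N

    # Fixed point iteration restricted to the winners: on the order of
    # 10 * W^2 operations for 10 rounds (a damped lateral-inhibition
    # style update; convergence is not guaranteed for arbitrary c).
    cw = c[np.ix_(winners, winners)] - np.eye(n_winners)
    y_t = y0.copy()
    for _ in range(n_iter):
        y_t = 0.5 * y_t + 0.5 * (y0 - cw @ y_t)

    # Adapt the weight vectors of the winners only: W * M operations.
    weights[winners] += lr * np.outer(y_t, x)
    return y_t, winners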

Summing these counts gives a total on the order of $NM + NW + 10W^2$ operations per input. The number of winners depends on the structure of the data and generally saturates to a constant value as the number of neurons grows; it is very rarely greater than the number of inputs. Therefore the first term dominates, and the computational complexity is typically of order $NM$. This means that the computational complexity grows only linearly with the number of neurons, as opposed to previous algorithms, where the growth has been at least quadratic.
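As a quick numeric check of the dominance of the first term, the following lines evaluate the estimate $NM + NW + 10W^2$ for growing N with M and W held fixed; the values M = 50 and W = 10 are arbitrary examples, not measurements from the algorithm.

# Illustrative operation counts for growing N with M and W fixed.
M, W, n_iter = 50, 10, 10
for N in (100, 1000, 10000):
    total = N * M + N * W + n_iter * W ** 2
    print(f"N={N:6d}  NM={N * M:8d}  NW={N * W:7d}  "
          f"10W^2={n_iter * W ** 2:5d}  total={total:8d}")

The total grows by roughly a factor of ten whenever N does, i.e. linearly, with the $NM$ term supplying almost all of the cost.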


