
Discussion

Sparse coding is a biologically motivated way to represent information. It combines the advantages of local and dense coding while avoiding most of their drawbacks. In this work, a computationally efficient algorithm for finding sparse codes has been developed. It is able to mimic some aspects of sensory processing in the brain that emerge from the competition between neurons, although the exact mechanisms underlying the competition are apparently different.

Simulations with artificial and natural data show that the algorithm is able to find a meaningful representation for its inputs. It can be argued that the algorithm is able to find objects, because the similarities between our intuitive notion of objects and the representations found by the algorithm are quite striking. Objects (and also features, concepts or anything else that deserves a name) can be described as collections of highly correlated properties [Barlow, 1985]. For instance, the properties `furry', `has tail', `moves', `animal', `barks', etc. are highly correlated, that is, the combination of these properties occurs much more frequently than it would if the properties were independent. Therefore the collection of these properties deserves a name. It seems that the algorithm developed in this work is able to learn to group these kinds of correlated features, and moreover is able to do so even when several such objects are always present in the data simultaneously.
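The statistical signature described above is easy to see numerically. The following small sketch is purely illustrative and not part of the thesis: it generates binary property vectors in which a hypothetical latent object (a `dog') switches several properties on together, and compares the joint frequency of the property combination with the prediction under independence.

    import numpy as np

    # Purely illustrative: a hypothetical latent object ('dog') switches four
    # properties on together; a little independent noise is added on top.
    rng = np.random.default_rng(0)
    n_samples = 100_000
    dog_present = rng.random(n_samples) < 0.1        # latent cause
    noise = rng.random((n_samples, 4)) < 0.05        # spurious activations
    properties = dog_present[:, None] | noise        # observed binary properties

    joint = properties.all(axis=1).mean()            # frequency of the combination
    independent = np.prod(properties.mean(axis=0))   # prediction under independence
    print(f"joint: {joint:.4f}  independent prediction: {independent:.6f}")
    # The combination is far more frequent than independence would predict,
    # which is the statistical reason the collection "deserves a name".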

Unlike in the case of a linear network, it may be useful to consider a hierarchical arrangement of several layers of sparse coding networks. Each layer can group features from the preceding layers into more abstract features, which can in turn be used in the following layers. In this way the system could gradually find more and more abstract features to represent the inputs. This kind of ability to make abstractions is considered to be one of the fundamental properties of human intelligence.
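Such a hierarchical arrangement could be organised roughly as in the following sketch. The layer object and its encode method are hypothetical placeholders, not an interface defined in this thesis; the point is only that each layer's sparse code serves as the input of the next layer.

    def encode_hierarchy(x, layers):
        """Hypothetical sketch: feed the input through a stack of sparse
        coding layers, each layer grouping the previous layer's features
        into more abstract ones."""
        codes = []
        for layer in layers:
            x = layer.encode(x)   # this layer's sparse code becomes the next input
            codes.append(x)
        return codes              # codes[-1] is the most abstract representation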

It is very tempting to think that the similarities between the representations learned by the algorithm and our notion of objects are due to underlying similarities in the information processing principles of the algorithm and the brain. Even if this were not the case, the algorithm is well suited for automatically extracting features in engineering applications. This is due to the many advantageous properties of sparse codes and the efficiency of the algorithm proposed in this work. The computational complexity of the algorithm is of order NM, where N is the number of neurons and M is the number of inputs, as opposed to previous algorithms, whose complexity has typically been quadratically dependent on the number of neurons. Perhaps the most prominent applications for the algorithm would be in preprocessing. The simulations with digitised samples of handwritten text show that the algorithm is able to find features even if the data has not been preprocessed in any way.
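To illustrate the order of the complexity, the following sketch shows why a single coding pass over one input vector scales as NM: the activations are computed with one pass over the N-by-M weight matrix, and the competition adds comparatively little on top. The k-winners-take-all rule used here is a generic stand-in for illustration only, not the competition mechanism developed in this thesis.

    import numpy as np

    def sparse_code_step(x, W, k=10):
        """Illustrative coding pass: the cost is dominated by the product
        with the N x M weight matrix, i.e. of order NM.  The competition
        shown here (keeping the k strongest responses) is only a generic
        stand-in, not the mechanism described in the thesis."""
        a = W @ x                        # N activations from M inputs: order NM
        winners = np.argsort(a)[-k:]     # simple competition among the neurons
        code = np.zeros_like(a)
        code[winners] = a[winners]
        return code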

A very important future line of study would be the extension of the sparse coding algorithm into the time domain. Currently the algorithm can group features that occur simultaneously, but it might also be possible to group features that occur close together in time. The problem to be solved would be the implementation of the competition mechanism: each neuron should be able to compete with neurons that have been active previously in addition to neurons that are active simultaneously, because previous activities might convey information about events that are still going on. This kind of extended algorithm would be able to find dynamic features, which would be very useful in many applications. The present algorithm, too, can be used to find dynamic features if some dynamic preprocessing is applied to the data. It would be better, however, if the algorithm could directly take into account the dynamic structure of the input.
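One hypothetical way to set up such a temporal competition, shown purely as an illustration and not proposed in the thesis, is to let each neuron keep an exponentially decaying trace of its recent activity and to let that trace take part in the competition alongside the current activations.

    import numpy as np

    def temporal_competition(x, W, trace, decay=0.9, k=10):
        """Hypothetical sketch: neurons compete not only with the other
        neurons' current activations but also with decaying traces of
        recent activity, so that features occurring close together in
        time can be grouped as well."""
        a = W @ x
        combined = a + trace                  # recent activity still competes
        winners = np.argsort(combined)[-k:]
        code = np.zeros_like(a)
        code[winners] = a[winners]
        trace = decay * trace + code          # update the activity traces
        return code, trace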

