Master's Thesis

A computationally efficient algorithm for finding sparse codes

A PostScript version of the thesis is also available (17.1 MB).

An important aspect of any kind of information processing is the way in which the information is represented. If the representational units (neurons) can be either "active" or "passive", then an important characteristic of the code is the activity ratio: the fraction of units that are active at any one time. At its lowest value the code is a local representation, where each state is represented by a single active unit while all other units in the pool are silent. In a dense distributed representation, each state is represented by roughly half of the units being active on average. Codes with low activity ratios are called sparse codes.
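The three regimes above can be made concrete by computing the activity ratio of example binary code vectors. This is a minimal illustration, not taken from the thesis; the pool size and the particular active units are arbitrary choices for the example.

```python
import numpy as np

def activity_ratio(code):
    """Fraction of active (nonzero) units in a binary code vector."""
    code = np.asarray(code)
    return np.count_nonzero(code) / code.size

n = 100  # pool of 100 units (illustrative size)

local = np.zeros(n)
local[3] = 1                                  # one active unit: local code

dense = (np.arange(n) % 2 == 0).astype(int)   # about half active: dense code

sparse = np.zeros(n)
sparse[[3, 17, 42, 81, 90]] = 1               # few active units: sparse code

print(activity_ratio(local))   # 0.01
print(activity_ratio(dense))   # 0.5
print(activity_ratio(sparse))  # 0.05
```

The activity ratio thus runs from 1/n for a purely local code up to about one half for a dense distributed code, with sparse codes occupying the low end of that range.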

The activity ratio affects several aspects of information processing such as the architecture and robustness of networks, the number of distinct states that can be represented and stored, generalisation properties, and the speed and rules of learning. Sparse codes combine advantages of local and dense codes while avoiding most of their drawbacks.

In this work, a new efficient algorithm for finding sparse codes is presented. Its computational complexity depends linearly on the number of neurons in the network, whereas previous algorithms have typically depended quadratically on the number of neurons. The ability of the algorithm to find meaningful features is demonstrated in simulations with both artificial and natural data.
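The linear-versus-quadratic distinction can be illustrated with a generic sketch (this is not the thesis's algorithm, whose details are not given on this page). A purely feedforward encoding step costs a number of operations proportional to the number of neurons n for a fixed input dimension, while schemes that resolve competition through full lateral connections between neurons incur an additional n-by-n interaction, i.e. a cost quadratic in n. All names and sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 16, 200  # input dimension and number of neurons (illustrative sizes)

W = rng.standard_normal((n, d))  # feedforward weights
x = rng.standard_normal(d)       # one input pattern

# Feedforward encoding with a threshold nonlinearity:
# n*d multiply-adds, i.e. linear in the number of neurons n.
y = np.maximum(W @ x - 1.0, 0.0)

# A scheme with full lateral (neuron-to-neuron) interactions adds an
# n x n term, making each update quadratic in n.
L = rng.standard_normal((n, n))
y_lateral = np.maximum(W @ x - L @ y - 1.0, 0.0)

print(y.shape, y_lateral.shape)  # (200,) (200,)
```

The point of the sketch is only the operation counts: avoiding explicit all-to-all neuron interactions is what keeps the per-step cost linear in the number of neurons.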

- Contents
- Introduction
- Previous algorithms for sparse coding
- Derivation of the new algorithm
- Evaluation of the algorithm
- Discussion
- References
- Derivations
- About this document ...

Thu May 9 14:06:29 DST 1996