
Cerebral cortex and generative models

The cerebral cortex maintains a model of the environment. Its perceptual machinery constantly seeks explanations for sensory inputs [4]. Unsupervised learning with generative models therefore seems a natural interpretation which helps in understanding many aspects of the learning taking place in the cortex. It is known that the abstraction level of the representations in the cortex increases gradually from primary sensory areas to higher cortical areas (see, e.g., [127]), where one can find neurons that respond as if they bridged different modalities in the manner depicted in figure 3.

At least two independent experimental findings and one theoretical argument support generative learning over signal transformation or auto-associative learning. First, the signal transformation assumption does not predict backward connections at all, and auto-associative learning predicts roughly equal numbers of forward and backward connections between different levels of brain areas (a forward connection here means one pointing from sensory towards higher areas). In reality, there are up to ten times as many backward connections between cortical areas as there are forward connections (see, e.g., [26]). This fits well with the generative learning assumption: the backward connections define the meaning of the representation, while the forward connections carry the gradient information or error signals needed to update the representation, and for that purpose the gradient information need not be very accurate.
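This division of labour is easy to sketch in the style of predictive-coding or sparse-coding models. In the following toy example all sizes, learning rates and the linear generative model are illustrative assumptions, not details taken from the cortex: backward weights generate an explanation of the input, and the forward signal carries only the residual error.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the text): 64 "sensory" units,
# 16 higher-level representation units.
n_input, n_hidden = 64, 16

def unit_columns(M):
    # Keep the generative weight vectors at unit length so the
    # settling iteration below stays stable.
    return M / np.linalg.norm(M, axis=0, keepdims=True)

# Backward (generative) weights A define what each representation unit
# means: the explanation of an input x is the reconstruction A @ s.
A = unit_columns(rng.normal(size=(n_input, n_hidden)))

def settle(x, A, n_steps=50, lr=0.05):
    # Find a representation s for input x. The forward signal is the
    # prediction error x - A @ s; it drives s by coarse gradient steps
    # and dies away once the input has been explained.
    s = np.zeros(A.shape[1])
    for _ in range(n_steps):
        error = x - A @ s          # forward signal: residual error
        s += lr * (A.T @ error)    # the gradient need not be very accurate
    return s

# Toy data from a sparse linear generative model (an assumption made
# purely for this demonstration).
true_A = unit_columns(rng.normal(size=(n_input, n_hidden)))
for step in range(500):
    x = true_A @ (rng.random(n_hidden) * (rng.random(n_hidden) < 0.2))
    s = settle(x, A)
    A = unit_columns(A + 0.05 * np.outer(x - A @ s, s))  # Hebbian-like update
    if step % 100 == 0:
        print(step, np.linalg.norm(x - A @ s))  # residual forward activity

The printed residual tends to shrink as the backward weights learn to explain the inputs; a persistent residual simply means that nothing in the model accounts for the input yet, a point taken up again in the third argument below.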

Second, the temporal behaviour of the forward signals from visual area V1 to V2 has been shown to fit well with the interpretation of error signals [105], and to be modulated appropriately when the activity of V2 is blocked [50].

Third, in generative learning the number of connections needs to be proportional only to the number of neurons N, while in the signal transformation approach it needs to be proportional to N², which is far more than is actually observed in the brain. In order for a large network of neurons to learn, the neurons need a way of informing each other when something has already been learned, so that all neurons do not end up learning the same thing. In generative learning the forward signals carry the error signals, and therefore a persistent forward signal indicates that no other neuron is yet representing the input. In the signal transformation approach, each neuron would instead have to inform all other neurons of what it has learned, as the count below illustrates. There are short-range inhibitory lateral connections in the cortex, but not enough to support the signal transformation interpretation [32]. The purpose of the inhibitory lateral connections can be, for instance, to assist in finding the correct activations when error signals arrive [74,29]. Only those neurons which represent very similar things (as defined by the backward connections) need to be laterally connected, which explains the short range of the lateral connections.
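The scaling argument can be made concrete with a back-of-the-envelope count; the neuron count and fan-in below are hypothetical round numbers chosen only for illustration.

def connection_counts(n_neurons, fan_in):
    # Generative learning: each neuron keeps a fixed fan-in of forward
    # and backward connections, so the total grows linearly in N.
    generative = 2 * n_neurons * fan_in
    # Signal transformation: in addition, every neuron must be able to
    # tell every other neuron what it has learned, an O(N^2) lateral web.
    transformation = n_neurons * fan_in + n_neurons * (n_neurons - 1)
    return generative, transformation

# Hypothetical example: a million neurons, each with a fan-in of 10 000.
g, t = connection_counts(10**6, 10**4)
print(f"generative: {g:.1e}, signal transformation: {t:.1e}")
# -> generative: 2.0e+10, signal transformation: 1.0e+12, and the gap
#    keeps widening as N^2 for larger networks.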

