Interpreted pedantically, the MLP networks used in the nonlinear factor analysis model in this thesis are not realistic neuronal models of the brain. The neurons in the brain are clearly quite different from those used in a typical MLP network, but many aspects of MLP networks can be viewed as abstractions of the biological brain. First, biological neurons fire action potentials, whereas the output activation of a neuron in an MLP network is an idealisation of the firing rate. Secondly, the cortex contains various kinds of interneurons which MLP networks lack. However, each neuron of an MLP network can be viewed as an abstraction of one microcolumn, or of one pyramidal neuron together with several interneurons. In , the microcolumns have been interpreted as Kalman filters.
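The firing-rate idealisation mentioned above can be made concrete with a small sketch. The weights, the logistic nonlinearity, and the Bernoulli spike model below are illustrative assumptions for this sketch, not part of the thesis model: the real-valued output of the unit is taken to stand for the probability of a spike per time step, so the empirical spike rate over many steps approaches the unit's activation.

```python
import math
import random

def rate_neuron(inputs, weights, bias):
    """Real-valued MLP neuron: the output idealises a mean firing rate."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))  # logistic squashing to (0, 1)

# illustrative inputs and weights (assumed for this sketch)
r = rate_neuron([0.5, -1.0, 2.0], [1.0, 0.3, 0.8], -0.2)

# interpret r as spike probability per time step and simulate spiking
random.seed(0)
T = 10000
spikes = sum(1 for _ in range(T) if random.random() < r)
empirical_rate = spikes / T  # close to r for large T
```

The point of the simulation is only that the real-valued activation summarises the spike train: the temporal placement of the individual spikes, which the next section argues is informative in the brain, is thrown away.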
The representational capacity of the factor analysis model is weaker than that of the biological brain. More specifically, the model cannot adequately represent objects and therefore cannot represent relations between objects. There is much evidence that the temporal structure of the firing of cortical neurons carries information related to object representations. In the brain, object representations seem to be linked with synchronous activity: neurons which represent features belonging to the same object fire synchronously [24, 36, 37].
Representing only the firing rate of the neurons discards this temporal information and results in an inability to represent the binding between features which would define objects. Consequently, the representations also lack relations between objects. This is one of the most serious drawbacks of most current neural network models. Many attempts have been made to build models whose neurons signal by firing pulses, as biological neurons do . Since the real-valued abstraction of the firing rate has served as a good computational simplification, modifications which represent both the activity and the phase or binding of the neurons have also been proposed (e.g., ).
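One simple way to picture such an activity-plus-phase representation is the following hypothetical sketch (not a model from the literature referred to above): each unit emits a complex number whose magnitude is the firing rate and whose angle is a phase tag, and two active units are considered bound when their phases agree, mimicking synchronous firing.

```python
import cmath
import math

def unit(rate, phase):
    """Complex activation: magnitude = firing rate, angle = binding phase."""
    return rate * cmath.exp(1j * phase)

def bound(a, b, tol=0.1):
    """Two units are bound if their phases (synchrony) agree within tol."""
    d = cmath.phase(a) - cmath.phase(b)
    d = (d + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
    return abs(d) < tol

# two features of the same object fire in phase; a third does not
red = unit(0.9, 0.3)
round_shape = unit(0.8, 0.3)
blue = unit(0.7, 2.0)
```

Here `bound(red, round_shape)` holds while `bound(red, blue)` does not, so the network can express that "red" and "round" belong to one object without changing either unit's firing rate; a purely real-valued network would have to conflate the two roles in a single number.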
Whether the standard real-valued neurons are replaced by firing neurons or by more abstract ones, it can be hoped that the lessons learned from the simple neurons used in most neural network models today will remain useful. This seems realistic, since so many aspects of the real brain can be interpreted from the point of view of simple real-valued neurons. If the brain is the kind of virtual machine built on parallel networks which Dennett proposes , then it seems possible that we too can design similar machines by starting with the parallel networks discussed in this thesis and then modifying them slightly so that sequential processing is implemented.