The network was built up in stages, as shown in Figure . It was initialised with a single layer; the second layer was created at sweep 20 and the third at sweep 100. After a layer was created, only its sources were updated for 10 sweeps, and the new layer was ``kept alive'' for 50 sweeps. The system was ``regenerated'' at sweeps 300 and 400; after each regeneration, only the sources were updated for 5 sweeps and the system was ``kept alive'' for 50 sweeps. The network was ``rebooted'' at sweeps 500, 600 and 700, after which only the sources were updated for 40 sweeps. One sweep corresponds to going through all the data vectors and updating each latent-variable node once.
The size of the first layer, $n_1$, is equal to the size of the data vector, that is, 36. The second layer was created with 30 neurons, and 3 more were added in each regeneration. The third layer was created with 5 neurons, and 2 more were added in each regeneration. Dead neurons were removed every 20 sweeps.
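The staged schedule above can be summarised as a function of the sweep index. The following Python sketch (a hypothetical helper, not part of the original implementation) encodes when only the sources are updated, using the sweep numbers given in the text:

```python
def update_mode(sweep):
    """Return 'sources-only' or 'full' for a given sweep.

    Follows the schedule in the text: layer creations at sweeps 20 and
    100, regenerations at 300 and 400, reboots at 500, 600 and 700.
    (The separate ``kept alive'' period of 50 sweeps after creations and
    regenerations is not modelled here.)
    """
    # After creating a layer, only its sources are updated for 10 sweeps.
    for start in (20, 100):
        if start <= sweep < start + 10:
            return "sources-only"
    # After a regeneration, only the sources are updated for 5 sweeps.
    for start in (300, 400):
        if start <= sweep < start + 5:
            return "sources-only"
    # After a reboot, only the sources are updated for 40 sweeps.
    for start in (500, 600, 700):
        if start <= sweep < start + 40:
            return "sources-only"
    # Otherwise all latent-variable nodes are updated once per sweep.
    return "full"
```

This is only a scheduling sketch; the actual update rules for the sources and the other latent variables are as described in the surrounding text.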
This learning procedure was tailored to this particular problem. Future work includes fully automating the learning process, including the selection of the numbers of sweeps.