next up previous contents
Next: Why does unsupervised pre-training Up: Summary of References Related Previous: What is the best   Contents

Subsections

Tiled convolutional neural networks [68]

Original Abstract

Convolutional neural networks (CNNs) have been successfully applied to many tasks such as digit and object recognition. Using convolutional (tied) weights significantly reduces the number of parameters that have to be learned, and also allows translational invariance to be hard-coded into the architecture. In this paper, we consider the problem of learning invariances, rather than relying on hard-coding. We propose tiled convolutional neural networks (Tiled CNNs), which use a regular "tiled" pattern of tied weights that does not require that adjacent hidden units share identical weights, but instead requires only that hidden units k steps away from each other have tied weights. By pooling over neighboring units, this architecture is able to learn complex invariances (such as scale and rotational invariance) beyond translational invariance. Further, it also enjoys much of CNNs' advantage of having a relatively small number of learned parameters (such as ease of learning and greater scalability). We provide an efficient learning algorithm for Tiled CNNs based on Topographic ICA, and show that learning complex invariant features allows us to achieve highly competitive results for both the NORB and CIFAR-10 datasets.
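The tiling scheme in the abstract can be illustrated in one dimension: hidden units exactly k steps apart share (tie) their weights, so the layer keeps k distinct filters and cycles through them, and k = 1 recovers an ordinary convolution. A minimal NumPy sketch of this idea, under stated assumptions (the function name and the toy filters are hypothetical, not from the paper):

```python
import numpy as np

def tiled_conv1d(x, filters):
    """1-D 'tiled' convolution: hidden unit i uses filter i mod k,
    so units exactly k steps apart have tied weights.
    With k == 1 (a single filter) this reduces to a standard convolution."""
    k = len(filters)            # tile size: number of untied filter copies
    w = filters[0].shape[0]     # receptive-field width (same for every filter)
    n_hidden = len(x) - w + 1   # number of valid hidden-unit positions
    h = np.empty(n_hidden)
    for i in range(n_hidden):
        h[i] = filters[i % k] @ x[i:i + w]
    return h

# Toy example: tile size k = 2, so two filters alternate along the input.
x = np.arange(6, dtype=float)            # input signal [0, 1, 2, 3, 4, 5]
filters = [np.array([1.0, 0.0, 0.0]),    # used by even-indexed hidden units
           np.array([0.0, 0.0, 1.0])]    # used by odd-indexed hidden units
h = tiled_conv1d(x, filters)             # -> [0.0, 3.0, 2.0, 5.0]
```

Pooling over neighboring hidden units that carry different (untied) filters is what lets the architecture learn invariances beyond translation; the sketch above only shows the weight-tying pattern itself.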


Miquel Perello Nieto 2014-11-28