next up previous contents
Next: Computer Vision-ECCV 2010 [14] Up: Summary of References Related Previous: Convolutional Deep Belief Networks   Contents

Subsections

Tiled convolutional neural networks. [55]

Original Abstract

Convolutional neural networks (CNNs) have been successfully applied to many tasks such as digit and object recognition. Using convolutional (tied) weights significantly reduces the number of parameters that have to be learned, and also allows translational invariance to be hard-coded into the architecture. In this paper, we consider the problem of learning invariances, rather than relying on hard-coding. We propose tiled convolutional neural networks (Tiled CNNs), which use a regular "tiled" pattern of tied weights that does not require that adjacent hidden units share identical weights, but instead requires only that hidden units k steps away from each other have tied weights. By pooling over neighboring units, this architecture is able to learn complex invariances (such as scale and rotational invariance) beyond translational invariance. Further, it also enjoys much of CNNs' advantage of having a relatively small number of learned parameters (such as ease of learning and greater scalability). We provide an efficient learning algorithm for Tiled CNNs based on Topographic ICA, and show that learning complex invariant features allows us to achieve highly competitive results on both the NORB and CIFAR-10 datasets.
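The tiling scheme in the abstract can be sketched in one dimension: instead of every hidden unit sharing a single filter (ordinary convolution), unit i uses filter bank i mod k, so only units exactly k steps apart have tied weights. This is a minimal illustrative sketch, not the paper's implementation; the function name and shapes are assumptions made here.

```python
import numpy as np

def tiled_conv1d(x, weights, k):
    """Sketch of 1-D 'tiled' convolution.

    Hidden unit i uses weight bank i % k, so units exactly k steps
    apart share (tie) weights. With k = 1 this reduces to ordinary
    convolution, where all hidden units share one filter.
    """
    filt_len = weights.shape[1]
    n_out = len(x) - filt_len + 1
    h = np.empty(n_out)
    for i in range(n_out):
        w = weights[i % k]          # weights tied every k steps
        h[i] = np.dot(w, x[i:i + filt_len])
    return h

# k = 2: two distinct filters alternate across positions
x = np.array([1., 2., 3., 4.])
w = np.array([[1., 0.],             # filter used at even positions
              [0., 1.]])            # filter used at odd positions
h = tiled_conv1d(x, w, k=2)
```

Because nearby units no longer share a filter, pooling over a neighborhood of k units combines responses of k different filters, which is what lets the architecture learn invariances (e.g. to rotation or scale) rather than having only the hard-coded translational one.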


Miquel Perello Nieto 2014-11-28