Improving Deep Neural Networks with Probabilistic Maxout Units [89]

Original Abstract

We present a probabilistic variant of the recently introduced maxout unit. The success of deep neural networks utilizing maxout can partly be attributed to favorable performance under dropout, when compared to rectified linear units. It however also depends on the fact that each maxout unit performs a pooling operation over a group of linear transformations and is thus partially invariant to changes in its input. Starting from this observation we ask the question: Can the desirable properties of maxout units be preserved while improving their invariance properties? We argue that our probabilistic maxout (probout) units successfully achieve this balance. We quantitatively verify this claim and report classification performance matching or exceeding the current state of the art on three challenging image classification benchmarks (CIFAR-10, CIFAR-100 and SVHN).
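
The abstract only names the mechanism, so the following is a minimal sketch of how a probout unit differs from a plain maxout unit, assuming the Boltzmann-style sampling used in the paper: each of the k linear activations z_i is selected with probability proportional to exp(lambda * z_i), where lambda is an inverse-temperature hyperparameter. The function names and example values below are illustrative, not taken from the authors' code.

    import numpy as np

    rng = np.random.default_rng(0)

    def maxout(z):
        # Standard maxout: pool the k linear activations by taking their maximum.
        return np.max(z)

    def probout(z, lam=1.0):
        # Probabilistic maxout ("probout"): draw one of the k activations with
        # probability proportional to exp(lam * z_i), a Boltzmann distribution
        # over the group of linear pieces.
        p = np.exp(lam * (z - np.max(z)))  # subtract the max for numerical stability
        p /= p.sum()
        i = rng.choice(len(z), p=p)
        return z[i]

    z = np.array([0.2, 1.5, -0.3, 0.9])  # activations of k = 4 linear pieces
    print(maxout(z))        # always 1.5
    print(probout(z, 2.0))  # usually 1.5, occasionally another piece

As lambda grows, the sampling distribution concentrates on the largest activation, so probout reduces to ordinary maxout in the limit; smaller values of lambda trade some of maxout's deterministic pooling for stochastic behavior.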

