
Natural gradient ascent

The natural gradient learning algorithm is analogous to the conventional gradient ascent algorithm and is given by the iteration

$\displaystyle \boldsymbol{\xi}_k = \boldsymbol{\xi}_{k-1} + \gamma \tilde{\nabla} \mathcal{F}(\boldsymbol{\xi}_{k-1}),$ (12)

where the step size $\gamma$ can either be adjusted adaptively during learning [9] or computed at each iteration using, e.g., a line search. In general, the performance of natural gradient learning is superior to that of conventional gradient learning when the problem space is Riemannian; see [9].
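As an illustration, the update (12) could be implemented as in the following minimal sketch, assuming the natural gradient $\tilde{\nabla} \mathcal{F}$ is obtained by preconditioning the plain gradient with the inverse of the Riemannian metric tensor (e.g., the Fisher information matrix). The names grad_F, metric_G, gamma and n_iter are illustrative placeholders and do not appear in the source.

import numpy as np

def natural_gradient_ascent(xi0, grad_F, metric_G, gamma=0.1, n_iter=100):
    """Iterate xi_k = xi_{k-1} + gamma * G(xi)^{-1} grad F(xi)  (cf. Eq. 12).

    grad_F(xi)   -- plain (Euclidean) gradient of the objective F at xi
    metric_G(xi) -- Riemannian metric tensor at xi, e.g. the Fisher
                    information matrix (assumed positive definite)
    gamma        -- fixed step size; the text notes it could instead be
                    adapted during learning or set by a line search
    """
    xi = np.asarray(xi0, dtype=float)
    for _ in range(n_iter):
        g = grad_F(xi)                            # conventional gradient
        nat_g = np.linalg.solve(metric_G(xi), g)  # natural gradient: G^{-1} g
        xi = xi + gamma * nat_g                   # ascent step of Eq. (12)
    return xi

With the identity matrix as the metric, the iteration reduces to conventional gradient ascent; the choice of metric is what distinguishes the natural gradient update.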


