
Natural and conjugate gradient methods

Many traditional optimization algorithms have direct counterparts in Riemannian space. This paper concentrates on gradient-based algorithms, in particular generalized versions of gradient ascent and the conjugate gradient method.

Gradient-based optimization algorithms in Euclidean space operate by deriving a search direction from the gradient at the current search point, possibly together with other information. Then either a fixed-length step is taken or a line search is performed in this direction. The fixed step length can still be adjusted during learning.
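As a concrete illustration, the following is a minimal sketch of Euclidean gradient ascent with a fixed step length; the names grad, x0, step, and n_iter are hypothetical placeholders, not notation from this paper.

    import numpy as np

    def gradient_ascent(grad, x0, step=0.1, n_iter=100):
        # Plain Euclidean gradient ascent with a fixed step length.
        # grad(x) returns the gradient of the objective at x.
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            d = grad(x)       # search direction is the gradient itself
            x = x + step * d  # fixed-length step in that direction
        return x

A line-search variant would replace the fixed step with a one-dimensional maximization of the objective along the direction d.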

When generalizing these methods to Riemannian space, the geometrically most natural approach would be to take the steps, or perform the line search, along geodesics, which are length-minimizing curves and hence the Riemannian counterparts of straight lines. In practice this is rarely done, because the mathematical forms of geodesics can be very complicated, making operations with them computationally expensive. Euclidean straight lines are used instead of geodesics in this work as well.
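To make this simplification concrete, the sketch below takes a single step in the natural gradient direction (obtained by premultiplying the gradient with the inverse of the Riemannian metric tensor, as detailed in the next subsection) but moves along a Euclidean straight line in parameter space rather than along the geodesic. The names metric and step are hypothetical placeholders.

    import numpy as np

    def natural_gradient_step(grad, metric, x, step=0.1):
        # metric(x) returns the Riemannian metric tensor G at x
        # (e.g. the Fisher information for a probability model).
        G = metric(x)
        d = np.linalg.solve(G, grad(x))  # natural gradient direction: G^{-1} times the gradient
        return x + step * d              # step along a Euclidean straight line, not a geodesic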



