
Subspace Learning Algorithm

The subspace learning algorithm works by applying gradient descent directly to the reconstruction error (1), yielding the update rules

$\displaystyle \mathbf{A} \leftarrow \mathbf{A} + \gamma (\mathbf{X} - \mathbf{A}\mathbf{S})\mathbf{S}^T \, , \qquad \mathbf{S} \leftarrow \mathbf{S} + \gamma \mathbf{A}^T (\mathbf{X} - \mathbf{A}\mathbf{S}) \, .$ (2)
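As a concrete illustration, here is a minimal NumPy sketch of these update rules. The function name, step size $ \gamma $, iteration count, and random initialization are assumptions made for the example; $ \mathbf{X}$ is taken to be a $ d \times n$ data matrix that has already been centered.

    import numpy as np

    def subspace_learning(X, c, gamma=1e-3, n_iter=1000, seed=0):
        # Gradient descent on the reconstruction error ||X - A S||^2,
        # following the simultaneous updates in (2).
        # Shapes: X is (d, n), A is (d, c), S is (c, n).
        # gamma, n_iter, and the initialization scale are illustrative choices.
        rng = np.random.default_rng(seed)
        d, n = X.shape
        A = 0.1 * rng.standard_normal((d, c))
        S = 0.1 * rng.standard_normal((c, n))
        for _ in range(n_iter):
            E = X - A @ S                                    # reconstruction error
            A, S = A + gamma * E @ S.T, S + gamma * A.T @ E  # update rules (2)
        return A, S

The tuple assignment applies both updates using the old values of $ \mathbf{A}$ and $ \mathbf{S}$, matching the way (2) is written; alternating the two updates would be an equally reasonable reading.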

Note that with $ \mathbf{S}= \mathbf{A}^T \mathbf{X}$ the update rule for $ \mathbf{A}$ is a batch version of Oja's subspace rule [7]. The algorithm finds a basis in the subspace of the largest principal components. If needed, the end result can be transformed into the PCA solution by proper orthogonalization of $ \mathbf{A}$ and $ \mathbf{S}$.
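The final orthogonalization can be sketched as follows. Taking the thin SVD of the reconstruction $ \mathbf{A}\mathbf{S}$ is one standard recipe for this transform; it is an assumption for illustration, not necessarily the exact procedure intended above.

    def to_pca_solution(A, S):
        # Orthogonalize (A, S) into a PCA-style solution: after the thin SVD
        # A S = U diag(s) V^T, the columns of U are orthonormal principal
        # directions and the rows of diag(s) V^T are ordered by decreasing
        # variance. This SVD-based recipe is an assumption for the sketch.
        U, s, Vt = np.linalg.svd(A @ S, full_matrices=False)
        c = A.shape[1]
        return U[:, :c], s[:c, None] * Vt[:c]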


