It is also possible to minimize the reconstruction error (2) by any optimization algorithm. Applying the gradient descent algorithm yields rules for simultaneous updates
$$A \leftarrow A + \gamma\,(X - AS)\,S^{T}, \qquad S \leftarrow S + \gamma\,A^{T}(X - AS) \tag{6}$$
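As a minimal NumPy sketch of the simultaneous gradient updates, assuming the model $X \approx AS$ with $X$ of size $d \times n$, $A$ of size $d \times c$, and $S$ of size $c \times n$ (the dimensions, step size $\gamma$, and iteration count below are illustrative choices, not values from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy low-rank data: X (d x n) generated from c = 3 underlying components.
d, n, c = 20, 100, 3
X = rng.standard_normal((d, c)) @ rng.standard_normal((c, n))

# Small random initialization of the factor matrices A (d x c) and S (c x n).
A = 0.1 * rng.standard_normal((d, c))
S = 0.1 * rng.standard_normal((c, n))

gamma = 1e-3  # step size; chosen for this toy problem, not taken from the text
for _ in range(5000):
    E = X - A @ S  # reconstruction error matrix
    # Simultaneous updates: both right-hand sides use the old A and S.
    A, S = A + gamma * E @ S.T, S + gamma * A.T @ E

rel_err = np.linalg.norm(X - A @ S) / np.linalg.norm(X)
print(rel_err)
```

Since the data are exactly rank $c$, the relative reconstruction error drops close to zero; the tuple assignment evaluates both right-hand sides before either matrix changes, which is what "simultaneous" means here.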
A possible speed-up to the subspace learning algorithm is to use the natural gradient [10] for the space of matrices. This yields the update rules
$$A \leftarrow A + \gamma\,(X - AS)\,S^{T}A^{T}A, \qquad S \leftarrow S + \gamma\,S S^{T} A^{T}(X - AS) \tag{7}$$
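The natural-gradient variant can be sketched by multiplying the plain gradients by the metric terms $A^{T}A$ and $SS^{T}$; again, the problem size, step size, and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, c = 20, 100, 3
X = rng.standard_normal((d, c)) @ rng.standard_normal((c, n))

A = 0.5 * rng.standard_normal((d, c))
S = 0.5 * rng.standard_normal((c, n))

# Smaller step than plain gradient descent: the metric factors A^T A and
# S S^T grow as learning proceeds, effectively enlarging the steps.
gamma = 1e-4
for _ in range(20000):
    E = X - A @ S
    # Plain gradients E S^T and A^T E, multiplied by the natural-gradient
    # metric terms A^T A and S S^T, respectively.
    A, S = A + gamma * E @ S.T @ (A.T @ A), S + gamma * (S @ S.T) @ (A.T @ E)

rel_err = np.linalg.norm(X - A @ S) / np.linalg.norm(X)
print(rel_err)
```

The fixed points are the same as for the plain gradient rule, since the metric factors are positive definite for full-rank $A$ and $S$; only the path taken and the step-size scaling change.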
If needed, the end result of subspace analysis can be transformed into the PCA solution, for instance, by computing the eigenvalue decomposition $S S^{T} = U_S D_S U_S^{T}$ and the singular value decomposition $A U_S D_S^{1/2} = U_A \Sigma_A V_A^{T}$. The transformed $A$ is formed from the first $c$ columns of $U_A$ and the transformed $S$ from the first $c$ rows of $\Sigma_A V_A^{T} D_S^{-1/2} U_S^{T} S$.
Note that the required decompositions are computationally lighter than the corresponding decompositions of the data matrix itself, since they operate on much smaller matrices.
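The transformation to the PCA solution can be sketched as follows, assuming the model $X \approx AS$; the decompositions $SS^{T} = U_S D_S U_S^{T}$ and $A U_S D_S^{1/2} = U_A \Sigma_A V_A^{T}$ used here are one standard way to orthogonalize the factors, and the dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, c = 8, 50, 3

# Any factorization X = A S found by subspace learning; A and S need not be
# orthogonal, so fabricate one by mixing with an invertible c x c matrix M.
M = rng.standard_normal((c, c))
A = rng.standard_normal((d, c)) @ M
S = np.linalg.inv(M) @ rng.standard_normal((c, n))
X = A @ S

# Eigenvalue decomposition S S^T = U_S D_S U_S^T (c x c, hence cheap).
D_S, U_S = np.linalg.eigh(S @ S.T)
D_S, U_S = D_S[::-1], U_S[:, ::-1]  # eigh sorts ascending; flip to descending

# SVD of the d x c matrix A U_S D_S^{1/2} = U_A Sigma_A V_A^T.
U_A, Sig_A, VT_A = np.linalg.svd(A @ U_S @ np.diag(np.sqrt(D_S)),
                                 full_matrices=False)

# Transformed factors: U_A already has exactly c columns here, and S_pca
# has exactly c rows.
A_pca = U_A
S_pca = np.diag(Sig_A) @ VT_A @ np.diag(D_S ** -0.5) @ U_S.T @ S

print(np.allclose(A_pca @ S_pca, X))          # product unchanged
print(np.allclose(A_pca.T @ A_pca, np.eye(c)))  # orthonormal loadings
```

One can verify algebraically that $A_{\mathrm{pca}} S_{\mathrm{pca}} = A U_S U_S^{T} S = AS$, that the columns of $A_{\mathrm{pca}}$ are orthonormal, and that $S_{\mathrm{pca}} S_{\mathrm{pca}}^{T} = \Sigma_A^{2}$ is diagonal, i.e. the components are decorrelated as PCA requires. Both decompositions involve only $c \times c$ and $d \times c$ matrices, which is why they are cheaper than decomposing $X$ directly.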