We compared PCA (Section 3), regularized PCA (Section 4), and VB-PCA (Section 4) by computing the root mean square reconstruction error on the validation set, that is, on ratings that were not used for training.
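As a rough sketch of how such a validation error can be computed, assuming the reconstruction of rating (i, j) is the inner product of the i-th row of a factor matrix A and the j-th column of a factor matrix S (the variable names below are illustrative, not the paper's notation):

import numpy as np

def rmse(ratings, A, S):
    """Root-mean-square reconstruction error over a set of observed ratings.

    `ratings` is an iterable of (i, j, value) triples, for example the
    held-out validation ratings; A and S are the factor matrices, so the
    reconstruction of entry (i, j) is the inner product A[i, :] @ S[:, j].
    """
    sq_err = 0.0
    count = 0
    for i, j, value in ratings:
        sq_err += (value - A[i, :] @ S[:, j]) ** 2
        count += 1
    return np.sqrt(sq_err / count)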
We tested VB-PCA firstly by fixing certain parameters to large values (this run is marked as VB1 in Fig. 2) and secondly by adapting them (marked as VB2), in order to isolate the effects of the two types of regularization.
We initialized regularized PCA and VB1 using the unregularized subspace learning algorithm, with its result transformed into the PCA solution; VB2 was initialized using VB1. The parameter was set to .
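The transformation of the unregularized subspace solution into the PCA solution is not spelled out here; one standard way to perform it, sketched below with illustrative names (not necessarily the exact procedure used), is to rotate the factors so that the product A @ S is unchanged while the components become orthogonal and ordered by decreasing variance:

import numpy as np

def to_pca_form(A, S):
    """Rotate a factorization X ~= A @ S into a PCA-like form.

    The product A @ S is unchanged; after the rotation the columns of A are
    orthogonal and the rows of S are orthonormal, with components ordered by
    decreasing variance.
    """
    Q, R = np.linalg.qr(A)                              # A = Q R, Q has orthonormal columns
    U, sv, Vt = np.linalg.svd(R @ S, full_matrices=False)
    A_pca = (Q @ U) * sv                                # scale columns by the singular values
    S_pca = Vt                                          # decorrelated, ordered components
    return A_pca, S_pca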
Fig. 2 (right) shows the results. The performance of unregularized PCA starts to degrade after a while of learning, especially with large parameter values. This effect, known as overlearning, did not appear with VB. Regularization helped considerably
and the best results were obtained using VB2: the final validation rms error was 0.9180 and the training rms error was 0.7826, which is naturally a bit larger than the unregularized 0.7657.
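The overlearning effect is only visible if the validation error is tracked during learning. A minimal monitoring loop, assuming a generic per-iteration update function and the rmse helper sketched earlier (the interface here is illustrative, not the paper's implementation), could look as follows:

def run_with_monitoring(A, S, update, error_fn, train, validation, n_iters):
    """Track the validation error during learning to expose overlearning.

    `update` stands in for one pass of whichever algorithm is being compared
    (unregularized subspace learning, regularized PCA, VB-PCA) and `error_fn`
    for the validation-error computation, e.g. the rmse helper above.
    """
    history = []
    best_err, best_iter = float("inf"), -1
    for it in range(n_iters):
        A, S = update(A, S, train)            # one learning pass (assumed interface)
        err = error_fn(validation, A, S)      # error on the held-out ratings
        history.append(err)
        if err < best_err:                    # with overlearning, err later rises again
            best_err, best_iter = err, it
    return history, best_err, best_iter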