HNFA+VM, presented in Chapter , was tested
with a number of natural gray-scale images as a data set.
Gaussian noise with standard deviation 0.1 was added to the images
to avoid artefacts caused by the discrete gray levels from 0 to
255. The intensities were scaled to variance one.
Patches of 10 by 10 pixels were taken from random locations in the images to be
used as data vectors; there were 10000 data vectors in total.
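As a rough sketch of this preprocessing (the exact sampling procedure is not given in the text; `extract_patches` and the synthetic stand-in images below are hypothetical), random 10 by 10 patches can be collected into the 100 by 10000 data matrix like so:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_patches(images, n_patches=10000, size=10):
    """Sample patches at random positions and stack them as columns
    (a hypothetical helper; the text does not give its sampling code)."""
    X = np.empty((size * size, n_patches))
    for k in range(n_patches):
        img = images[rng.integers(len(images))]
        r = rng.integers(img.shape[0] - size + 1)
        c = rng.integers(img.shape[1] - size + 1)
        X[:, k] = img[r:r + size, c:c + size].ravel()
    return X

# Synthetic stand-ins for the natural images: scale to unit variance
# and add N(0, 0.1^2) noise to mask the discrete 0..255 gray levels.
images = [rng.standard_normal((64, 64)) for _ in range(5)]
images = [(im - im.mean()) / im.std() + 0.1 * rng.standard_normal(im.shape)
          for im in images]

X = extract_patches(images)
print(X.shape)  # (100, 10000)
```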
The data matrix
X is thus 100 by 10000. The mean of each
patch was subtracted from the patch, and the data was partially whitened and
then rotated back to the original space:
(8.1) [equation image not recovered: the partial-whitening and rotation transform]
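The whitening step of Eq. (8.1) can be sketched as follows; the exponent `alpha` is an assumption (alpha = 0.5 gives full whitening, smaller values whiten only "to a degree"):

```python
import numpy as np

def partial_whiten(X, alpha=0.5):
    """Whiten zero-mean data 'to a degree' and rotate back to the
    original space (a sketch of Eq. (8.1); the exponent is an
    assumption -- alpha = 0.5 would mean full whitening)."""
    C = X @ X.T / X.shape[1]               # sample covariance
    eigval, V = np.linalg.eigh(C)          # columns of V: principal components
    eigval = np.clip(eigval, 1e-12, None)  # guard the removed-mean direction
    W = V @ np.diag(eigval ** -alpha) @ V.T
    return W @ X, V

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 10000))
Xw, V = partial_whiten(X)
Cw = Xw @ Xw.T / Xw.shape[1]
print(np.allclose(Cw, np.eye(100)))  # full whitening yields identity covariance
```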
Figure  shows the matrix V, that is, the
principal components of the data. There are only 99 components, since
removing the mean of each patch also removes one of the
intrinsic dimensions. The components bear a strong resemblance to the discrete
cosine transform (DCT), which is widely used in image compression
[21]. Compression and ensemble learning have much in
common as was seen in Subsection
. Given that there are efficient algorithms for computing the DCT, it is
clearly a good choice for compression. None of the basis patches are
localised in either the PCA or the DCT basis.
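The resemblance can be inspected directly: the 2-D DCT basis for 10 by 10 patches is easy to construct from the orthonormal DCT-II formula, and like the PCA basis its functions have global, non-localised support (`dct2_basis` below is a hypothetical helper, not code from the text):

```python
import numpy as np

def dct2_basis(n=10):
    """Orthonormal 2-D DCT basis for n-by-n patches; each row of the
    returned (n*n, n*n) matrix is one flattened basis patch."""
    j = np.arange(n)
    # 1-D DCT-II: B[k, m] = c_k * cos(pi * k * (2m + 1) / (2n))
    B = np.sqrt(2.0 / n) * np.cos(np.pi * np.outer(j, 2 * j + 1) / (2 * n))
    B[0] = np.sqrt(1.0 / n)            # constant basis function
    # 2-D basis patches are outer products of the 1-D basis vectors
    return np.einsum('ip,jq->ijpq', B, B).reshape(n * n, n * n)

D = dct2_basis()
print(np.allclose(D @ D.T, np.eye(100)))  # True: the basis is orthonormal
```

Each row of `D`, reshaped to 10 by 10, is a basis patch whose support covers the whole patch, matching the non-localised character of the principal components.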