Nonlinear Component Analysis

The basic idea of nonlinear component analysis (NCA), or kernel PCA [60], is to replace the covariance matrix in equation ([*]) with

\begin{displaymath}
\mathbf{C} = E\left\{ \Phi(\mathbf{x}(t)) \Phi(\mathbf{x}(t))^T \right\}, \qquad (2.8)
\end{displaymath}

where $\Phi$ is a fixed nonlinear mapping into a feature space whose dimensionality is larger than that of the data space. The principal components are then computed in this feature space.
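The reason this remains tractable even for a very high-dimensional feature space is that the eigenvectors of $\mathbf{C}$ with nonzero eigenvalues lie in the span of the mapped data points, so the eigenproblem can be written entirely in terms of inner products. The following sketch of this standard step estimates the expectation in (2.8) by a sample average over $N$ observations; the symbols $\boldsymbol{\alpha}$, $\mathbf{K}$ and the kernel $k$ are introduced here only for illustration. Writing an eigenvector as

\begin{displaymath}
\mathbf{v} = \sum_{t=1}^{N} \alpha_t \Phi(\mathbf{x}(t)), \qquad
K_{ts} = \Phi(\mathbf{x}(t))^T \Phi(\mathbf{x}(s)) = k(\mathbf{x}(t), \mathbf{x}(s)),
\end{displaymath}

the eigenvalue problem $\mathbf{C}\mathbf{v} = \lambda \mathbf{v}$ reduces to

\begin{displaymath}
\mathbf{K}\boldsymbol{\alpha} = N \lambda \boldsymbol{\alpha},
\end{displaymath}

an $N \times N$ problem that requires only kernel evaluations between pairs of data points, never the mapping $\Phi$ itself.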

The mapping $\Phi$ is typically not a surjection, i.e. not every point in the feature space has a pre-image in the data space, and therefore reconstructing a data vector from the extracted components can be problematic. Linear principal components are easy to visualise since they correspond to vectors, or directions, in the data space, whereas kernel-based principal components have no such counterpart in the data space. Therefore NCA is not viewed as a generative model.

Schölkopf et al. [60] have developed an efficient algorithm for NCA in which the feature space may have a very high dimensionality: all computations are expressed through inner products in the feature space, which are evaluated with a kernel function in the data space, so the mapping $\Phi$ is never applied explicitly. They also developed a method for iteratively finding an approximate reconstruction in the data space for a point in the feature space. Their experiments suggest that, compared to PCA, NCA extracts features that are more useful for classification purposes. The same approach can be used to construct, for example, nonlinear ICA.
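As a concrete illustration, below is a minimal sketch of this procedure in Python with a Gaussian kernel $k(\mathbf{a},\mathbf{b}) = \exp(-\Vert\mathbf{a}-\mathbf{b}\Vert^2 / 2\sigma^2)$, using only NumPy. This is not the implementation of [60]; the function name, the kernel choice and its width sigma are assumptions made for the example.

import numpy as np

def kernel_pca(X, n_components=2, sigma=1.0):
    # Minimal kernel PCA sketch (hypothetical example, not the algorithm of [60]):
    # Gaussian kernel, kernel-matrix centring, eigendecomposition, projection.
    N = X.shape[0]

    # Gram matrix K[t, s] = exp(-||x(t) - x(s)||^2 / (2 sigma^2)).
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-d2 / (2.0 * sigma ** 2))

    # Centre the mapped data in feature space:
    # K <- K - 1_N K - K 1_N + 1_N K 1_N, with 1_N the matrix of entries 1/N.
    one_n = np.full((N, N), 1.0 / N)
    K = K - one_n @ K - K @ one_n + one_n @ K @ one_n

    # Solve K alpha = N lambda alpha; eigh returns eigenvalues in ascending order.
    eigvals, eigvecs = np.linalg.eigh(K)
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

    # Scale each alpha by 1/sqrt(lambda_K) so the feature-space eigenvector
    # v = sum_t alpha_t Phi(x(t)) has unit norm (||v||^2 = lambda_K).
    alphas = eigvecs[:, :n_components] / np.sqrt(eigvals[:n_components])

    # Projection of training point j onto component k is (K @ alphas)[j, k].
    return K @ alphas

# Usage: points on a noisy circle, a structure linear PCA cannot capture.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((200, 2))
Z = kernel_pca(X, n_components=2, sigma=0.5)
print(Z.shape)  # (200, 2)

On data with nonlinear structure, such as the noisy circle above, the leading kernel principal components can separate the data in ways that no linear projection of the two-dimensional inputs can.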

