
Initialisation

The network is initialised as a single layer, that is, $n=1$. This means that only the variance neurons are connected to the observations. A new layer $i>1$ can be added during learning. The means of the matrices $\mathbf{A}_{i-1}$ and $\mathbf{B}_{i-1}$ are initialised by applying vector quantisation [1] to the whitened means of the concatenated vectors $\mathbf{s}_{i-1}(t)$ and $\mathbf{u}_{i-1}(t)$

\begin{displaymath}
\mathbf{x}_1(t) = \left( \begin{array}{c} \mathbf{s}_{i-1}(t) \\ \mathbf{u}_{i-1}(t) \end{array} \right).
\end{displaymath} (6.1)
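Below is a minimal sketch of the concatenation (6.1), assuming the vectors of the previous layer are available for all time steps as columns of two arrays (the names S and U and the use of NumPy are illustrative assumptions, not part of the text):

\begin{verbatim}
import numpy as np

def concatenate_inputs(S, U):
    """Stack s_{i-1}(t) on top of u_{i-1}(t) for every t, Eq. (6.1).

    S and U are hypothetical arrays holding the vectors of the
    previous layer as columns, one column per time step; the result
    has the vectors x_1(t) as its columns.
    """
    return np.vstack([S, U])
\end{verbatim}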

The whitened vector $\mathbf{x}_2(t)$ of $\mathbf{x}_1(t)$ is obtained from the singular value decomposition

\begin{displaymath}
\mathbf{x}_2(t) = \mathbf{D}^{-1/2}\mathbf{V}\mathbf{x}_1(t),
\end{displaymath} (6.2)

where $\mathbf{V}$ contains the orthonormal eigenvectors of the covariance matrix of $\mathbf{x}_1(t)$ as its rows and $\mathbf{D}$ is the diagonal matrix of the corresponding eigenvalues. Each $\mathbf{x}_2(t)$ is matched to one of the normalised model vectors $\mathbf{M}_k$

\begin{displaymath}
W(t) = \arg \max_k \mathbf{M}_k^{T}\mathbf{x}_2(t)
\end{displaymath} (6.3)

and the model vector is moved to the mean of the vectors $\mathbf{x}_2(t)$ that are matched to it

\begin{displaymath}
\mathbf{M}_k = \frac{\sum_{t \mid W(t)=k} \mathbf{x}_2(t)}{\sum_{t \mid W(t)=k} 1}.
\end{displaymath} (6.4)
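The whitening (6.2) and the vector quantisation steps (6.3)-(6.4) could be sketched as follows. This is only an illustration under stated assumptions: the data are collected into a matrix X1 with one column per time step, the number K of model vectors and the fixed number of sweeps are hypothetical parameters, and the initial model vectors are drawn from the data, which the text does not specify.

\begin{verbatim}
import numpy as np

def whiten(X1):
    """Whitening of Eq. (6.2): x_2(t) = D^{-1/2} V x_1(t).

    X1 holds the vectors x_1(t) as columns.  V has the orthonormal
    eigenvectors of the covariance matrix of x_1(t) as its rows and
    D is the diagonal matrix of the corresponding eigenvalues.
    """
    C = np.cov(X1)                       # covariance matrix of x_1(t)
    d, E = np.linalg.eigh(C)             # eigenvalues d, eigenvectors as columns of E
    V = E.T                              # eigenvectors as rows
    D = np.diag(d)
    X2 = np.diag(d ** -0.5) @ V @ X1     # Eq. (6.2) applied to every column
    return X2, V, D

def vector_quantise(X2, K, n_sweeps=20, seed=0):
    """Matching (6.3) and mean update (6.4), repeated for a fixed number of sweeps."""
    rng = np.random.default_rng(seed)
    T = X2.shape[1]
    # Start from K randomly chosen data points (an assumption, not from the text).
    M = X2[:, rng.choice(T, size=K, replace=False)].copy()
    for _ in range(n_sweeps):
        Mn = M / np.linalg.norm(M, axis=0, keepdims=True)  # normalised model vectors
        W = np.argmax(Mn.T @ X2, axis=0)                   # Eq. (6.3): best match for each t
        for k in range(K):
            matched = X2[:, W == k]
            if matched.shape[1] > 0:
                M[:, k] = matched.mean(axis=1)             # Eq. (6.4): move to the mean
    return M
\end{verbatim}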

Finally, the initial values for $\mathbf{A}_{i-1}$ and $\mathbf{B}_{i-1}$ are obtained from the matrix $\mathbf{M}$, whose columns are the model vectors $\mathbf{M}_k$:

\begin{displaymath}
\left( \begin{array}{c} \mathbf{A}_{i-1} \\ \mathbf{B}_{i-1} \end{array} \right)
= \beta \mathbf{V}^T\mathbf{D}^{1/2}\mathbf{M}.
\end{displaymath} (6.5)

The scaling factor $\beta$ should be selected so that the corresponding sources operate in an appropriate range. Here the value $\beta=2$ was used, which means that $f(\mathbf{s}_i) = 1$ corresponds to twice the length of a model vector. The selection is discussed further in Chapter [*].
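A sketch of the de-whitening (6.5) with $\beta=2$, assuming M holds the model vectors as columns, V and D come from the whitening step, and dim_s (a hypothetical name) is the dimensionality of $\mathbf{s}_{i-1}(t)$:

\begin{verbatim}
import numpy as np

def initial_weights(M, V, D, dim_s, beta=2.0):
    """De-whitening of Eq. (6.5) with beta = 2.

    The top dim_s rows give the initial mean of A_{i-1}; the
    remaining rows give the initial mean of B_{i-1}.
    """
    AB = beta * V.T @ np.sqrt(D) @ M     # Eq. (6.5)
    return AB[:dim_s, :], AB[dim_s:, :]
\end{verbatim}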

The posterior means of the sources $\mathbf{s}_i(t)$ were initialised to $-2$ and the means of $\mathbf{u}_i(t)$ to $-1$. These very simple initial values of the sources are not harmful because of a special state explained in Section [*]. The posterior variances of $\mathbf{s}_i(t)$, $\mathbf{u}_i(t)$, $\mathbf{A}_{i-1}$ and $\mathbf{B}_{i-1}$ are initialised to small values.
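As a sketch of this posterior initialisation, assuming K new sources, T time steps, and an arbitrary small value for the variances (the exact value is not specified above):

\begin{verbatim}
import numpy as np

def initial_posteriors(K, T, small=1e-3):
    """Initial posterior means and variances of the new sources.

    Means of s_i(t) are set to -2 and means of u_i(t) to -1; the
    value 1e-3 for the variances is only an illustrative choice.
    The variances of A_{i-1} and B_{i-1} are likewise set small.
    """
    s_mean = np.full((K, T), -2.0)
    u_mean = np.full((K, T), -1.0)
    s_var = np.full((K, T), small)
    u_var = np.full((K, T), small)
    return s_mean, u_mean, s_var, u_var
\end{verbatim}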

