
Re-learning parameters

We implemented the LOHMM algorithms in SICStus Prolog 3.8.6. The LOHMM in (1) was used to generate data sequences of length 10; the training set consisted of 20 sequences and the test set of 50 sequences. The parameters of the LOHMM were randomly initialised and then estimated from a fraction of the training data using the componentwise Bayes estimator. About 10 iterations were needed to reach our stopping criterion: an increase of less than $0.1$ in the log-likelihood of the training data. We then measured the likelihood of the test set for each learned model. Figure 1 shows that, as expected, the results improve as more training data is used.

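The experimental protocol can be illustrated with a minimal sketch. The following Python code (not the authors' SICStus Prolog implementation) uses an ordinary discrete HMM trained with Baum-Welch (EM) as a stand-in for the LOHMM, and Dirichlet pseudo-counts as a rough stand-in for the componentwise Bayes estimator; the toy "true" model, the function names, and the training-set fractions are hypothetical. It only illustrates the loop described above: random initialisation, EM until the training log-likelihood increases by less than 0.1, and evaluation of the test-set log-likelihood averaged over 5 runs.

# A minimal, self-contained sketch of the experimental protocol; a plain discrete
# HMM with Baum-Welch (EM) stands in for the LOHMM, and Dirichlet pseudo-counts
# stand in for the componentwise Bayes estimator.  All names and the toy "true"
# model are hypothetical illustrations, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

def sample_sequence(pi, A, B, length):
    """Draw one observation sequence of the given length from an HMM (pi, A, B)."""
    obs = np.empty(length, dtype=int)
    s = rng.choice(len(pi), p=pi)
    for t in range(length):
        obs[t] = rng.choice(B.shape[1], p=B[s])
        s = rng.choice(len(pi), p=A[s])
    return obs

def forward(pi, A, B, obs):
    """Scaled forward pass; returns the log-likelihood, scaled alphas and scales."""
    T, N = len(obs), len(pi)
    alpha, scale = np.zeros((T, N)), np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]
    scale[0] = alpha[0].sum(); alpha[0] /= scale[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        scale[t] = alpha[t].sum(); alpha[t] /= scale[t]
    return np.log(scale).sum(), alpha, scale

def backward(A, B, obs, scale):
    """Scaled backward pass matching forward()."""
    T, N = len(obs), A.shape[0]
    beta = np.zeros((T, N)); beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / scale[t + 1]
    return beta

def em_step(pi, A, B, data, pseudo=1.0):
    """One EM iteration over all sequences; expected counts are smoothed with
    Dirichlet pseudo-counts.  Returns updated parameters and the training
    log-likelihood under the current (pre-update) parameters."""
    N, M = B.shape
    pi_c, A_c, B_c = np.full(N, pseudo), np.full((N, N), pseudo), np.full((N, M), pseudo)
    total_ll = 0.0
    for obs in data:
        ll, alpha, scale = forward(pi, A, B, obs)
        beta = backward(A, B, obs, scale)
        total_ll += ll
        gamma = alpha * beta                         # already normalised per time step
        pi_c += gamma[0]
        for t in range(len(obs) - 1):
            A_c += alpha[t][:, None] * A * B[:, obs[t + 1]] * beta[t + 1] / scale[t + 1]
        for t, o in enumerate(obs):
            B_c[:, o] += gamma[t]
    return (pi_c / pi_c.sum(),
            A_c / A_c.sum(axis=1, keepdims=True),
            B_c / B_c.sum(axis=1, keepdims=True),
            total_ll)

def run_experiment(true_model, n_train=20, n_test=50, seq_len=10, n_runs=5):
    """Train on growing fractions of the training set and report test log-likelihood."""
    pi0, A0, B0 = true_model
    N, M = B0.shape
    test = [sample_sequence(pi0, A0, B0, seq_len) for _ in range(n_test)]
    for n_used in (2, 5, 10, 20):                    # fractions of the training set
        scores = []
        for _ in range(n_runs):
            pool = [sample_sequence(pi0, A0, B0, seq_len) for _ in range(n_train)]
            train = pool[:n_used]
            pi = rng.dirichlet(np.ones(N))           # random initialisation
            A = rng.dirichlet(np.ones(N), size=N)
            B = rng.dirichlet(np.ones(M), size=N)
            prev_ll = -np.inf
            while True:
                pi, A, B, ll = em_step(pi, A, B, train)
                if ll - prev_ll < 0.1:               # stopping criterion from the text
                    break
                prev_ll = ll
            scores.append(sum(forward(pi, A, B, o)[0] for o in test))
        print(f"{n_used:2d} training sequences: "
              f"test log-likelihood {np.mean(scores):.1f} +/- {np.std(scores):.1f}")

if __name__ == "__main__":
    true_pi = np.array([0.6, 0.4])
    true_A = np.array([[0.7, 0.3], [0.2, 0.8]])
    true_B = np.array([[0.9, 0.1], [0.3, 0.7]])
    run_experiment((true_pi, true_A, true_B))

Run on the toy model above, the mean test log-likelihood increases (and its standard deviation shrinks) as more training sequences are used, which is the qualitative behaviour plotted in Figure 1.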
Figure 1: Log-likelihood of the test set as a function of the number of sequences used for training. The mean and the standard deviation of 5 runs are shown. The horizontal line corresponds to the LOHMM that generated the data.
[Figure 1: balls_res.eps]


