The results

The results attained by the different models are summarised in Table 7.1. The static NFA model gives the worst description of the data; it is a little faster than the NSSM and the switching NSSM but significantly slower than the HMM. The HMM describes the data slightly better and is clearly the fastest of the algorithms. The NSSM is significantly better than the HMM but requires considerably more time. The switching NSSM is the clear winner in describing the data, but it is also the slowest of all the algorithms. The difference in speed between the NSSMs with and without switching is relatively small.


Table 7.1: The results of the model comparison experiment. The second column contains the attained values of the ensemble learning cost function; lower values are better. The values translate to probabilities as $ p(X \vert \mathcal{H}) \approx e^{-C}$. The third column contains a rough estimate of the time needed to run one simulation with Matlab on a single relatively fast RISC processor.

Model            Cost function value   Time needed
NFA              $ 111\,041$           A few days
HMM              $ 104\,654$           About an hour
NSSM             $ 90\,955$            About a week
Switching NSSM   $ 82\,410$            More than a week
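Since the cost function value $ C$ approximates $ -\log p(X \vert \mathcal{H})$, a difference in the costs of two models approximates the logarithm of the ratio of their evidences. The following sketch, using the values from Table 7.1, illustrates this interpretation (the function name is ours, not part of the original experiment):

```python
# Ensemble learning cost function values from Table 7.1 (lower is better).
costs = {
    "NFA": 111_041,
    "HMM": 104_654,
    "NSSM": 90_955,
    "Switching NSSM": 82_410,
}

def log_evidence_ratio(model_a, model_b):
    """Approximate log p(X|H_a) - log p(X|H_b), which by
    p(X|H) ~ exp(-C) equals C_b - C_a."""
    return costs[model_b] - costs[model_a]

# The model with the lowest cost has the highest approximate evidence.
best = min(costs, key=costs.get)
print(best)                                        # Switching NSSM
print(log_evidence_ratio("Switching NSSM", "NSSM"))  # 8545
```

Even the smallest difference in the table (8545 nats between the NSSM and the switching NSSM) corresponds to an astronomically large evidence ratio, so the ranking of the models is unambiguous despite the approximation.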

The simulations with the NFA, NSSM and switching NSSM were run using a 15-dimensional latent space. The results would probably have been slightly better with a larger dimensionality, but unfortunately the current optimisation algorithm used for the models is somewhat unstable beyond that limit. Optimising the structures of the MLP networks would also have helped, but it would have taken too much time to be practical. The present results are thus the best attained over a few simulations with the same fixed structure but different random initialisations.
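The best-of-a-few-initialisations procedure described above can be sketched as a simple restart loop. This is a generic illustration, not the actual simulation code; `train` is a hypothetical stand-in for one full simulation run that returns its final cost and the fitted model:

```python
import random

def best_of_restarts(train, n_restarts=5, seed=0):
    """Run `train(rng)` with several different random initialisations
    and keep the run that attains the lowest cost function value.
    `train` must accept a random.Random instance and return (cost, model)."""
    rng = random.Random(seed)
    best_cost, best_model = float("inf"), None
    for _ in range(n_restarts):
        # Each restart gets its own deterministic sub-seed.
        cost, model = train(random.Random(rng.random()))
        if cost < best_cost:
            best_cost, best_model = cost, model
    return best_cost, best_model

# Toy usage: a dummy "simulation" whose cost depends on the initialisation.
toy = lambda r: (100 + 10 * r.random(), None)
cost, _ = best_of_restarts(toy)
```

With a fixed seed the procedure is reproducible, which makes the reported best-of-few results repeatable even though individual runs depend on their random initialisation.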


Antti Honkela 2001-05-30