

Learning with known state sequence

Sometimes we want to use the switching NSSM to model data about which we already know something. With speech data, for instance, the ``correct'' sequence of phonemes in the utterance may be known in advance. This does not make learning the HMM part unnecessary: the correct segmentation still requires determining the times of the transitions between the states. Only the states the model must pass through, and their order, are given.

Such problems can be solved by estimating the HMM state probabilities for a modified model, namely one that only allows transitions in the known order. These probabilities can then be transformed back to probabilities of the true states for the adaptation of the other model parameters. The forward-backward procedure must also be modified slightly, as the first and the last state of the sequence are now known for certain.
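As a rough illustration of this idea, the sketch below builds a left-to-right chain whose positions follow the known state order and runs a scaled forward-backward pass with the first and last positions pinned, before summing the position posteriors back onto the original states. It is only a minimal sketch, not the implementation used in this work: the function name constrained_state_posteriors, the uniform self/next transition probability p_next, and the assumption that the observation likelihoods obs_lik are supplied by the output model are all illustrative choices.

import numpy as np

def constrained_state_posteriors(obs_lik, seq, p_next=0.5):
    """Forward-backward over a left-to-right chain built from the known
    state order, with the first and last chain positions fixed.

    obs_lik : (T, K) observation likelihoods p(x_t | state k) for the K
              original HMM states (assumed to come from the output model).
    seq     : known order of the states to be visited; position i of the
              chain emits like original state seq[i].
    p_next  : assumed probability of advancing to the next position
              (1 - p_next is the self-transition probability).

    Returns a (T, K) array of posterior probabilities of the original
    states, obtained by summing the chain-position posteriors.
    """
    T, K = obs_lik.shape
    N = len(seq)
    b = obs_lik[:, seq]                      # (T, N) per-position likelihoods

    # Forward pass: the sequence must start in the first position.
    alpha = np.zeros((T, N))
    alpha[0, 0] = b[0, 0]
    for t in range(1, T):
        stay = alpha[t - 1] * (1.0 - p_next)
        move = np.zeros(N)
        move[1:] = alpha[t - 1, :-1] * p_next
        alpha[t] = (stay + move) * b[t]
        alpha[t] /= alpha[t].sum()           # scale to avoid underflow

    # Backward pass: the sequence must end in the last position.
    beta = np.zeros((T, N))
    beta[-1, -1] = 1.0
    for t in range(T - 2, -1, -1):
        stay = (1.0 - p_next) * b[t + 1] * beta[t + 1]
        move = np.zeros(N)
        move[:-1] = p_next * b[t + 1, 1:] * beta[t + 1, 1:]
        beta[t] = stay + move
        beta[t] /= beta[t].sum()

    # Position posteriors, mapped back to the original states.
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    post = np.zeros((T, K))
    for i, k in enumerate(seq):
        post[:, k] += gamma[:, i]
    return post

The mapping in the final loop is what transforms the probabilities of the modified model back to those of the true states, so that the rest of the learning procedure can use them unchanged.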

When the correct state sequences are known, the different orderings of the HMM states are no longer equivalent. Therefore the HMM output distribution parameters can, and actually should, all be initialised to zeros. A random initialisation could make the output model of a certain state very different from the true output, thus making learning much more difficult.
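A small sketch of this initialisation choice, with hypothetical sizes n_states and data_dim, might look as follows; the variable names are illustrative only.

import numpy as np

n_states, data_dim = 5, 3   # hypothetical sizes

# With a known state sequence the states are not interchangeable, so the
# output distribution parameters can simply start from zero.
output_means = np.zeros((n_states, data_dim))

# A random start, e.g.
#   output_means = 0.1 * np.random.randn(n_states, data_dim)
# could place the output model of some state far from the true output
# and slow the learning down considerably.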

