RELATED WORK

HMMs have been extended in a number of ways, e.g. hierarchical HMMs [3], factorial HMMs [6], and extensions based on tree automata [4]. Relational Markov Models (RMMs) [1] also use logical representations, but they support neither variable binding nor unification, and they have no hidden states. LOHMMs were introduced in [10], which concentrates on their application to protein secondary structure.



Tapani Raiko 2003-07-09