BAYESIAN LEARNING OF LOGICAL HIDDEN MARKOV MODELS
T. Raiko (1,2), K. Kersting (1), J. Karhunen (2), L. De Raedt (1)
(1) Institute for Computer Science,
Machine Learning Lab,
Albert-Ludwigs University of Freiburg,
Georges-Koehler-Allee, Building 079,
79112 Freiburg, Germany
(2) Helsinki University of Technology,
Laboratory of Computer and Information Science,
P.O. Box 5400,
02015 HUT, Finland
Abstract:
    Logical hidden Markov models (LOHMMs) generalize hidden Markov
    models to the analysis of sequences of logical atoms.  Transitions
    are factorized into two steps: selecting an abstract transition
    (an atom) and instantiating its variables.  Unification is used to
    share information among states, and between states and
    observations.  In this paper, we show how LOHMMs can be learned
    using Bayesian methods.  Several estimators are compared, and
    parameter estimation is evaluated on synthetic data.
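The two-step transition mentioned in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the atom representation, the `emacs`/`latex` example transitions, and the uniform instantiation of unbound variables are not taken from the paper, which defines LOHMMs formally.

```python
import random

# A logical atom is a (functor, args) tuple; strings starting with an
# uppercase letter denote logic variables (Prolog convention).
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def unify(a, b, subst):
    """Unify two atoms under substitution `subst`;
    return the extended substitution, or None on failure."""
    if a[0] != b[0] or len(a[1]) != len(b[1]):
        return None
    subst = dict(subst)
    for x, y in zip(a[1], b[1]):
        x = subst.get(x, x) if is_var(x) else x
        y = subst.get(y, y) if is_var(y) else y
        if is_var(x):
            subst[x] = y
        elif is_var(y):
            subst[y] = x
        elif x != y:
            return None
    return subst

# Hypothetical abstract transitions: (probability, source atom, target atom).
transitions = [
    (0.7, ('emacs', ['File']), ('latex', ['File'])),   # keep working on File
    (0.3, ('emacs', ['File']), ('emacs', ['File2'])),  # switch to another file
]

def step(state, domain):
    # Step 1: select an abstract transition whose source unifies with
    # the current state, weighted by its probability.
    applicable = [t for t in transitions if unify(t[1], state, {}) is not None]
    p, src, dst = random.choices(applicable, [t[0] for t in applicable])[0]
    subst = unify(src, state, {})
    # Step 2: instantiate the target atom -- variables bound by
    # unification keep their values, the rest are sampled (here:
    # uniformly over a finite domain, an illustrative assumption).
    args = [subst.get(a, a) if is_var(a) else a for a in dst[1]]
    args = [random.choice(domain) if is_var(a) else a for a in args]
    return (dst[0], args)
```

Unification is what lets the two transitions above share one source pattern `emacs(File)` across all concrete files, and lets the target `latex(File)` reuse the binding made when matching the state.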
Tapani Raiko
2003-07-09