Discussion and Conclusion
Three different control schemes were studied in the framework of
nonlinear state-space models. Direct control is fast to use but
requires learning a policy mapping, which is difficult to do well.
Optimistic inference control is a novel method based on Bayesian
inference, answering the question: ``Assuming success in the end, what
will happen in the near future?'' It relies on a single probabilistic
inference, but unfortunately neither of the two tested inference
algorithms works well with it. The third control scheme is a
probabilistic version of standard nonlinear model-predictive control,
in which the control signals are optimised with respect to a cost
function. The latter two schemes are both indirect control methods, and
they performed comparably well in the experiments. A minimal sketch of
the receding-horizon idea behind the third scheme is given below.
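The following sketch is not the implementation used in this work; it only illustrates the nonlinear model-predictive control principle described above. The dynamics function f, the quadratic cost, and the horizon length are hypothetical placeholders standing in for the learned state-space model and the task-specific cost.

import numpy as np
from scipy.optimize import minimize

def f(state, u):
    """Placeholder for the learned mean dynamics of the state-space model."""
    return np.tanh(state + u)

def predicted_cost(u_seq, state, goal, horizon):
    """Accumulate quadratic state and control costs over the prediction horizon."""
    u_seq = u_seq.reshape(horizon, -1)
    cost = 0.0
    for u in u_seq:
        state = f(state, u)
        cost += np.sum((state - goal) ** 2) + 0.01 * np.sum(u ** 2)
    return cost

def nmpc_step(state, goal, horizon=10, dim_u=1):
    """Optimise the control sequence against the cost and return only its
    first element (receding-horizon principle)."""
    u0 = np.zeros(horizon * dim_u)
    res = minimize(predicted_cost, u0, args=(state, goal, horizon), method="BFGS")
    return res.x.reshape(horizon, dim_u)[0]

# Usage: apply the first optimised control, observe the new state, re-optimise.
state, goal = np.array([0.0]), np.array([0.8])
for t in range(20):
    u = nmpc_step(state, goal)
    state = f(state, u)  # in practice, the real system would respond here

At each time step only the first control of the optimised sequence is applied before the optimisation is repeated, which is what distinguishes model-predictive control from open-loop planning.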