
Discussion and conclusion

Two different control schemes were studied in the framework of variational Bayesian learning of nonlinear state-space models. The first, stochastic nonlinear model-predictive control (NMPC), optimises the control signals with respect to a cost function. The second, optimistic inference control (OIC), fixes the desired observations at some point in the future and infers the states and control signals between the current state and that future.
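The receding-horizon idea behind NMPC can be sketched in a few lines. The sketch below is illustrative only: the `dynamics` and `cost` functions are hypothetical placeholders standing in for the learned nonlinear state-space model and the paper's actual cost function, and a generic gradient-based optimiser replaces whatever optimisation the authors used.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder for the learned dynamics MLP of the state-space model.
def dynamics(x, u):
    return np.tanh(0.9 * x + u)

# Placeholder quadratic cost on state and control effort.
def cost(x, u):
    return x**2 + 0.1 * u**2

def nmpc_step(x0, horizon=10):
    """Optimise a control sequence over the horizon; apply only the first action."""
    def total_cost(u_seq):
        x, c = x0, 0.0
        for u in u_seq:
            c += cost(x, u)
            x = dynamics(x, u)
        return c
    res = minimize(total_cost, np.zeros(horizon))
    return res.x[0]  # receding horizon: the rest of the sequence is re-optimised next step
```

In contrast, OIC would treat the desired future observations as evidence and run the same inference machinery used for learning, rather than an explicit optimisation loop like this one.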

A controller might be able to carry out active information gathering or probing BarShalom81: in an unknown state, the controller should first act to decrease the uncertainty and only then take action based on what has been revealed. Probing requires the controller to be able to plan its reactions to future observations. Optimistic inference control does this automatically in theory, but in practice it would require an even more sophisticated model of the posterior distribution than Equation 6. On a larger scale, to reduce the uncertainty of the model parameters, the controller should balance exploration and exploitation. A good starting point for taking exploration into account is Thrun92exploration.
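One common way to trade off exploitation against uncertainty reduction is to score candidate actions by expected cost minus an information bonus. The sketch below is a hypothetical illustration, not the paper's method: a small ensemble of perturbed models stands in for the posterior over model parameters, and the variance of the ensemble's predictions proxies how much an action would reveal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble of dynamics models standing in for the parameter
# posterior; disagreement between members is used as an uncertainty proxy.
weights = rng.normal(1.0, 0.3, size=5)

def ensemble_predict(x, u):
    return np.array([np.tanh(w * x + u) for w in weights])

def probing_score(x, u, lam=1.0):
    preds = ensemble_predict(x, u)
    expected_cost = np.mean(preds**2)   # expected state cost after one step
    uncertainty = np.var(preds)         # probing bonus: prefer informative actions
    return expected_cost - lam * uncertainty

# Pick the action with the best cost/information trade-off.
candidates = np.linspace(-1.0, 1.0, 21)
best_u = min(candidates, key=lambda u: probing_score(0.5, u))
```

Setting the (hypothetical) weight `lam` to zero recovers pure exploitation; increasing it biases the controller toward actions that reduce uncertainty first.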

Both control schemes presented here are computationally demanding. One way to speed up the NMPC algorithm would be to parallelise it. The MLP networks used in this work are not particularly well suited for parallel computation, but many parts of the computation can still be divided into independent pieces. The novel control scheme, OIC, provides a link between Bayesian inference and model-predictive control, but it cannot currently compete in efficiency.
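One readily parallelisable piece is the evaluation of independent candidate control sequences: each rollout is sequential in time, but separate rollouts do not depend on each other. The sketch below assumes hypothetical placeholder dynamics and cost in place of the trained MLPs; a thread pool illustrates the dispatch pattern, though real speedups for numeric Python would need process- or accelerator-level parallelism.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def rollout_cost(u_seq, x0=1.0):
    """Score one candidate control sequence; placeholder model and cost."""
    x, c = x0, 0.0
    for u in u_seq:
        c += x**2 + 0.1 * u**2
        x = np.tanh(0.9 * x + u)  # stands in for the learned dynamics MLP
    return c

def evaluate_parallel(candidates):
    # Each rollout is independent, so the candidates can be scored concurrently.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(rollout_cost, candidates))
```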

Learning nonlinear state-space models seems promising for complex control tasks in which the observations of the system state are incomplete or the dynamics of the system are not well known. The experiments with a simple control task demonstrated the benefits of the proposed approach. Work remains in reducing the high computational complexity and in providing performance guarantees, especially in unexpected situations or near boundaries.


Tapani Raiko 2006-08-24