In stochastic sampling, one generates a set of samples of models whose distribution approximates the posterior probability of the models [33]. Several techniques exist, with slightly different properties; in general they yield good approximations of the posterior probability of the models but are computationally demanding. To some extent, the trade-off between efficiency and accuracy can be controlled by adjusting the number of generated samples.
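To illustrate the idea (and not the specific algorithms of [33] or [92]), the sketch below implements a simple random-walk Metropolis sampler for a toy one-parameter linear model; the function names, step size, and toy data are assumptions chosen only for the example. Posterior expectations are estimated as averages over the generated samples, and the accuracy–cost trade-off is controlled by the number of samples.

```python
import numpy as np

def log_posterior(w, x, y, noise_var=0.1, prior_var=1.0):
    """Unnormalized log posterior for a toy linear model y = w*x + noise."""
    log_lik = -0.5 * np.sum((y - w * x) ** 2) / noise_var
    log_prior = -0.5 * w ** 2 / prior_var
    return log_lik + log_prior

def metropolis_sample(x, y, n_samples=5000, step=0.1, w0=0.0, seed=0):
    """Random-walk Metropolis: draws samples whose distribution
    approximates the posterior over the model parameter w."""
    rng = np.random.default_rng(seed)
    samples = np.empty(n_samples)
    w, logp = w0, log_posterior(w0, x, y)
    for i in range(n_samples):
        w_new = w + step * rng.standard_normal()      # propose a move
        logp_new = log_posterior(w_new, x, y)
        if np.log(rng.uniform()) < logp_new - logp:   # accept or reject
            w, logp = w_new, logp_new
        samples[i] = w                                # keep the current model
    return samples

# Toy data; posterior mean and spread of w are estimated from the samples
# (discarding an initial burn-in). More samples improve the approximation
# at higher computational cost.
x = np.linspace(0, 1, 20)
y = 2.0 * x + 0.3 * np.random.default_rng(1).standard_normal(20)
samples = metropolis_sample(x, y)
print(samples[1000:].mean(), samples[1000:].std())
```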
For simple problems, the stochastic sampling approach is attractive because it places minimal restrictions on the structure of the model and does not require careful design of the learning algorithm. For an accessible presentation of stochastic sampling methods from the point of view of neural networks, see [92].