
Related work and Go81Console

The idea of playing the game stochastically to the end many times from the current position originates with Abramson [2]. He suggested that the expected outcome of random games provides a heuristic evaluator that is not only domain-independent and easy to compute but also accurate, at least in some domains. Results were good in Tic-tac-toe and Othello, and he also experimented with Chess. The idea was applied to Go by Bruegmann [6] and recently developed further by Bouzy et al. [5]. Their approach is quite comparable to Go81: both avoid evaluating unfinished game states. Their ants are even more naïve than Go81's; they only avoid playing inside their own eyes. Otherwise the move evaluation is given by the swarm AI, which assigns a value (a smell trace, if you wish) to each point on the board, regardless of when in the future the point will be played. These values are updated based on the outcomes of the random games, so that each value estimates the expected outcome of the games in which the particular point was played. As one of the perspectives of their work, they mention the possibility of using patterns for the ants, which would bring the ants quite close to Go81.
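To make the point-value idea concrete, the following is a minimal sketch of such an update, not Bouzy et al.'s actual implementation. It assumes a 9x9 board with points numbered 0..80 and a hypothetical helper random_playout(position) that plays one random game to the end and returns the sequence of points played together with the outcome.

def estimate_point_values(position, random_playout, games=1000):
    """For each board point, estimate the expected outcome of the
    random games in which that point was played."""
    BOARD_POINTS = 81              # 9x9 board (assumed)
    total = [0.0] * BOARD_POINTS   # summed outcomes per point
    count = [0] * BOARD_POINTS     # games in which the point was played
    for _ in range(games):
        moves, outcome = random_playout(position)  # one game to the end
        for point in set(moves):   # count each point once per game
            total[point] += outcome
            count[point] += 1
    return [total[p] / count[p] if count[p] else 0.0
            for p in range(BOARD_POINTS)]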

Go81Console combines Go81 with smell traces. It is assumed that the smell trace becomes less accurate as the game progresses, so the ant follows the smell trace closely in the near future and becomes more independent towards the end of the game. Similarly, when adjusting the evaluations, a larger weight is given to moves played in the near future. The program plays the game 100 times to the end from the current situation before selecting the next move, simply taking the move with the largest evaluation in the gathered table. It is thus unreasonably slow for current handheld computers but quite fast for a desktop computer.
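The move-selection loop can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the program's code: play_one_game(position, smell) stands for one stochastic game in which the ant follows the smell table more closely early on, and the exponential decay factor is an assumed way of giving near-future moves a larger weight.

def select_move(position, play_one_game, games=100, decay=0.9):
    """Play stochastic games to the end and pick the point with the
    largest evaluation in the gathered table (legality checks omitted)."""
    BOARD_POINTS = 81              # 9x9 board (assumed)
    total = [0.0] * BOARD_POINTS   # weighted sum of outcomes per point
    weight = [0.0] * BOARD_POINTS  # accumulated weights per point
    smell = [0.0] * BOARD_POINTS   # current evaluation table
    for _ in range(games):
        # The ant consults the smell table while playing; how closely it
        # follows it is decided inside play_one_game.
        moves, outcome = play_one_game(position, smell)
        w = 1.0
        for point in moves:        # near-future moves get larger weights
            total[point] += w * outcome
            weight[point] += w
            w *= decay             # assumed exponential decay per move
        smell = [total[p] / weight[p] if weight[p] else 0.0
                 for p in range(BOARD_POINTS)]
    return max(range(BOARD_POINTS), key=lambda p: smell[p])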

