
Science Meets The Most Strategic Game

The most important goal of the Human vs. Computer Go Competition at IEEE WCCI 2012 is Science Meets the Most Strategic Game, which includes three activities:

- 7x7 small-board games: At FUZZ-IEEE 2011 (Taipei, Taiwan), MoGoTW won twenty games out of twenty under conditions slightly in favor of MoGoTW. We will see whether computers can also outperform humans when the conditions are slightly in favor of the humans.

- 9x9, 13x13, and 19x19 board games: Humans are still stronger than computers, in particular on the large 19x19 board, where strategic elements matter a great deal. We will see how far computers are from humans now.

- Physiological measurements: This is the novel activity of the Human vs. Computer Go Competition @ IEEE WCCI 2012. It is now known that Go is specific, in the sense that the brain areas involved in playing Go are not the same as those involved in chess; in particular, Go requires more spatial reasoning. We will try to see whether EEG or other physiological signals are also affected by various playing conditions, such as playing blindfolded (Blind Go) or playing in settings that are easier, harder, or more unnatural (such as Kill-All Go) than usual games.

Short Description

The technique of Monte-Carlo Tree Search (MCTS) has revolutionized the field of computer game-playing, and is starting to have an impact in other search and optimization domains as well. In past decades, the dominant paradigm in game algorithms was alpha-beta search. This technique, with many refinements and much game-specific engineering, led to breakthrough performances in classic board games such as chess, checkers, and Othello. After Deep Blue's famous victory over Kasparov in 1997, some of the research focus shifted to games where alpha-beta search was not sufficient. Most prominent among these games was the ancient Asian game of Go.

During the last few years, the use of MCTS techniques in computer Go has really taken off, but the groundwork was laid much earlier. In 1990, Abramson proposed to model the expected outcome of a game by averaging the results of many random games [1]. In 1993, Bruegmann proposed Monte-Carlo techniques for Go using almost random games, and developed the refinement he termed all-moves-as-first (AMAF) [2]. Ten years later, a group of French researchers working with Bruno Bouzy took up the idea. Bouzy's Indigo program used Monte-Carlo simulation to decide between the top moves proposed by a classical knowledge-based Go engine. Remi Coulom's Crazy Stone was the first to add the crucial second element: a selective game-tree search controlled by the results of the simulations [5]. The last piece of the puzzle was the Upper-Confidence Tree (UCT) algorithm of Kocsis and Szepesvari [6], which applied ideas from the theory of multi-armed bandits to the problem of how to selectively grow a game tree; a minimal sketch of the resulting search loop is given below. Gelly and Wang developed the first version of MoGo, which among other innovations combined Coulom's ideas, the UCT algorithm, and pattern-directed simulations [7]. AMAF was revived and extended in Gelly and Silver's Rapid Action Value Estimation (RAVE), which computes AMAF statistics in all nodes of the UCT tree. Rapid progress in applying knowledge and parallelizing the search followed.

Today, programs such as MoGo/MoGoTW, Crazy Stone, Fuego, Many Faces of Go, and Zen have achieved a level of play that seemed unthinkable only a decade ago. These programs are now competitive at a professional level for 9x9 Go and at amateur dan strength on 19x19. One measure of success is competitions. In Go, Monte-Carlo programs now completely dominate classical programs on all board sizes (though no one has tried boards larger than 19x19), and they have achieved considerable success in play against humans. An early sign of things to come was a series of games on a 7x7 board between Crazy Stone and the 5th dan professional Guo Juan, in which Crazy Stone demonstrated almost perfect play.

Since 2008, National University of Tainan (NUTN) in Taiwan and other academic organizations have hosted or organized several human vs. computer Go events, including the 2008 Computational Intelligence Forum & World 9x9 Computer Go Championship and the 2009 Invited Games for MoGo vs. Taiwan Professional Go Players (Taiwan Open 2009). In addition, the FUZZ-IEEE 2009 Panel, Invited Sessions, and Human vs. Computer Go Competition was held at the 2009 IEEE International Conference on Fuzzy Systems in August 2009 [10]. This event was the first human vs. computer Go competition hosted by the IEEE Computational Intelligence Society (CIS) at an IEEE CIS flagship conference. In 2010, MoGo and Many Faces of Go achieved wins against strong amateur players on 13x13 with only two handicap stones.
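To make the algorithmic ideas above concrete, the following is a minimal, generic sketch of a UCT search loop in Python. The game-state interface (`clone`, `legal_moves`, `play`, `is_terminal`, `result`) is assumed purely for illustration and does not correspond to the API of any program named above; real engines add pattern-directed simulations, RAVE, and parallel search on top of this skeleton.

```python
import math
import random


class Node:
    """One node of the UCT search tree (minimal illustrative version)."""

    def __init__(self, move=None, parent=None):
        self.move = move            # move that leads from the parent to this node
        self.parent = parent
        self.children = []
        self.untried_moves = None   # legal moves not yet expanded at this node
        self.wins = 0.0             # accumulated simulation reward
        self.visits = 0             # number of simulations that passed through

    def ucb_child(self, c=1.4):
        # UCB1: balance the observed win rate against an exploration bonus
        # for rarely visited children (the multi-armed-bandit idea behind UCT).
        return max(
            self.children,
            key=lambda ch: ch.wins / ch.visits
            + c * math.sqrt(math.log(self.visits) / ch.visits),
        )


def uct_search(root_state, n_simulations):
    """Return the move recommended after n_simulations Monte-Carlo simulations.

    root_state is assumed to provide clone(), legal_moves(), play(move),
    is_terminal() and result(); these names are placeholders for illustration.
    """
    root = Node()
    root.untried_moves = root_state.legal_moves()

    for _ in range(n_simulations):
        node, state = root, root_state.clone()

        # 1. Selection: follow UCB1 down the tree while nodes are fully expanded.
        while not node.untried_moves and node.children:
            node = node.ucb_child()
            state.play(node.move)

        # 2. Expansion: add one child for a move that has not been tried yet.
        if node.untried_moves:
            move = node.untried_moves.pop()
            state.play(move)
            child = Node(move=move, parent=node)
            child.untried_moves = state.legal_moves()
            node.children.append(child)
            node = child

        # 3. Simulation: finish the game with (pseudo-)random moves.
        while not state.is_terminal():
            state.play(random.choice(state.legal_moves()))

        # 4. Backpropagation: update statistics on the path back to the root.
        # (A complete program credits each node from the point of view of the
        # player who moved into it; that bookkeeping is omitted here.)
        reward = state.result()
        while node is not None:
            node.visits += 1
            node.wins += reward
            node = node.parent

    # Recommend the most-visited move at the root.
    return max(root.children, key=lambda ch: ch.visits).move
```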
On the full 19x19 board, programs have racked up a number of wins (but still many more losses) with 6 and 7 handicap stones against top professional Go players; Zen also recently won with handicap 4 against Masaki Takemiya (9P). Computer Go programs have also won both as White and as Black against top players in 9x9 games. In April 2011, MoGoTW set a new world record by winning the first 13x13 game against a 5th dan professional Go player with handicap 3 and a reversed komi of 3.5. It also won 3 out of 4 games of Blind Go on 9x9.

In June 2011, four programs, MoGoTW, Many Faces of Go, Fuego, and Zen, were invited to the three-day competition held at FUZZ-IEEE 2011, and more than ten invited professional Go players accepted the challenge, including Chun-Hsun Chou (9P), Ping-Chiang Chou (5P), Joanne Missingham (5P), and Kai-Hsin Chang (4P). The computer Go program Zen from Japan won every game it played, even a 19x19 game against Chun-Hsun Chou (9P) with handicap 6, showing that the level of computer Go programs on the 19x19 board is estimated at 4D. Many Faces of Go and Zen also won against a 5P Go player on 13x13 with handicap 2 and komi 3.5, improving on MoGoTW's April result by one stone. In addition, MoGoTW won all twenty of its 7x7 games under specific komi settings, komi 9.5 when playing White and 8.5 when playing Black; since White wins with komi 9.5 and Black wins with komi 8.5, this suggests that perfect 7x7 play is a draw with komi 9.

MoGoTW, equipped with an adaptive learning ability, first played against amateur Go players in Taiwan ranging from kyu to dan level on May 6 and May 27, 2012. Estimating the level of an opponent is useful for choosing an opponent of the right strength and for attributing relevant ranks to players. We estimate the relation between the strength of a player and the number of simulations an MCTS program needs in order to play at the same strength; a sketch of one possible calibration scheme is given below. We also play against many players simultaneously and try to estimate their strength. In June 2012, the Human vs. Computer Go Competition @ IEEE WCCI 2012 will also include physiological measurements for studying the cognitive science of the game of Go.
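The strength-calibration idea can be illustrated with a simple sketch. The scheme below, a plain bisection on the simulation budget, is only an assumed illustration and is not the actual procedure used by MoGoTW; `play_game` is a hypothetical callback.

```python
def calibrate_simulations(play_game, low=1_000, high=1_000_000, games_per_level=10):
    """Search for a simulation budget at which the engine and the opponent are even.

    play_game(n_simulations) is a hypothetical callback that plays one game
    against the opponent with an MCTS engine using n_simulations playouts per
    move and returns True if the engine wins.  Because playing strength grows
    roughly with the logarithm of the simulation count, the search bisects on
    a logarithmic scale until the engine wins about half of its games.
    """
    while high / low > 2:
        mid = int(round((low * high) ** 0.5))   # geometric midpoint of the bracket
        wins = sum(play_game(mid) for _ in range(games_per_level))
        if wins > games_per_level / 2:
            high = mid   # engine too strong: try fewer simulations
        else:
            low = mid    # engine too weak (or even): try more simulations
    return int(round((low * high) ** 0.5))
```

The returned simulation count can then be mapped to a rank, since the engine's strength at each budget is known from its games against rated players.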

Human

The following Taiwanese Go players will join this competition:

- Chun-Hsun Chou (9P)

- Ping-Chiang Chou (5P)

- Joanne Missingham (6P)

- Shang-Rong Tsai (6D)

- Sheng Shu Chang (6D)

- Shi-Jim Yen (6D)

 

Computer Go Program

- MoGo/MoGoTW (France / Taiwan)

- Fuego (Canada)

- Many Faces of Go (USA)

- Zen (Japan)

- Pachi (Czech Republic and France)

- Erica (Taiwan)

- Coldmilk (Taiwan)

References

[1] B. Abramson, “Expected-outcome: a general model of static evaluation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 2, pp. 182-193, 1990.

[2] B. Bruegmann, “Monte Carlo Go,” 1993. Online at http://www.ideanest.com/vegos/MonteCarloGo.pdf.

[3] B. Bouzy and T. Cazenave, “Computer Go: an AI-oriented survey,” Artificial Intelligence Journal, vol. 132, no. 1, pp. 39-103, 2001.

[4] C. S. Lee, M. Mueller, and O. Teytaud, “Special Issue on Monte Carlo Techniques and Computer Go,” IEEE Transactions on Computational Intelligence and AI in Games, vol. 2, no. 4, pp. 225-228, Dec. 2010.

[5] R. Coulom, “Efficient selectivity and backup operators in Monte-Carlo tree search,” in Proceedings of the 5th International Conference on Computers and Games, Turin, Italy, 2006, pp. 72-83.

[6] L. Kocsis and C. Szepesvari, “Bandit based Monte-Carlo planning,” in Machine Learning: ECML 2006, Lecture Notes in Computer Science, vol. 4212, Springer, 2006, pp. 282-293.

[7] Y. Wang and S. Gelly, “Modifications of UCT and sequence-like simulations for Monte-Carlo Go,” in Proceedings of the 2007 IEEE Symposium on Computational Intelligence and Games (CIG07), Hawaii, USA, 2007, pp. 175-182.

[8] C. S. Lee, M. H. Wang, G. Chaslot, J. B. Hoock, A. Rimmel, O. Teytaud, S. R. Tsai, S. C. Hsu, and T. P. Hong, “The computational intelligence of MoGo revealed in Taiwan's computer Go tournaments,” IEEE Transactions on Computational Intelligence and AI in Games, vol. 1, no. 1, pp. 73-89, Mar. 2009.

[9] C. S. Lee, M. H. Wang, T. P. Hong, G. Chaslot, J. B. Hoock, A. Rimmel, O. Teytaud, and Y. H. Kuo, “A novel ontology for computer Go knowledge management,” in Proceedings of the 2009 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2009), Jeju Island, Korea, Aug. 2009, pp. 1056-1061.

[10] S. J. Yen, C. S. Lee, and O. Teytaud, “Human vs. computer Go competition in FUZZ-IEEE 2009,” ICGA Journal, vol. 32, no. 3, pp. 178-180, Sept. 2009.

[11] J. B. Hoock, C. S. Lee, A. Rimmel, F. Teytaud, M. H. Wang, and O. Teytaud, “Intelligent agents for the game of Go,” IEEE Computational Intelligence Magazine, vol. 5, no. 4, pp. 28-42, Nov. 2010.

[12] C. S. Lee, M. H. Wang, O. Teytaud, and Y. L. Wang, “The game of Go @ IEEE WCCI 2010,” IEEE Computational Intelligence Magazine, vol. 5, no. 4, pp. 6-7, Nov. 2010.

[13] M. H. Wang, C. S. Lee, Y. L. Wang, M. C. Cheng, O. Teytaud, and S. J. Yen, “The 2010 contest: MoGoTW vs. human Go players,” International        Computer Games Association, vol. 33, no. 1, pp. 47-50, Mar. 2010.