Since reading about Arthur Samuel's checkers-playing program in a book called Emergence, I've wanted to try out the idea, and Go seemed like an ideal game to apply it to. Go has held out against AI attempts for a long time now, although it is beginning to fall, as a technique called Monte Carlo tree search has been used to beat Go professionals. However, this method is basically a brute-force, random approach (although the randomness is directed in a clever way), so I don't think it tells us anything interesting about either Go or intelligence.

Like my heterocyst simulation, this is a project I started a long time ago, before I had a blog. I'll therefore try to write about my progress using old emails.


My Go program at the start of a game

If I'd started this blog about a year earlier, I would have written lots about my Go program. Thankfully, I suspect that a lot of what I would have written, I did write, only in email form.

For now I'll just put up an image of the program I wrote (using Python and Tkinter), as it currently is. It was actually the project I made whilst learning Python, and it uses images that I made for my Java program. As you can see, it currently uses a 9x9 board. The program can read SGF files to replay games, or you can play along by clicking on the board.

I also wrote a couple of very simple AIs to see how easy it was to plug an AI opponent in. I made one AI that tried to maximise the number of liberties it had (which I called PassiveAI), another that tried to minimise the number of liberties its opponent's stones had (AgressiveAI), and one that tried to ensure all its chains had more than a threshold of liberties and then tried to minimise the number of liberties its opponent's stones had (PassiveAgressiveAI).
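To give a flavour of what a liberty-counting heuristic like PassiveAI involves, here is a minimal sketch in Python. The board representation, names, and sizes are my own illustration (and it ignores captures and the suicide rule), not the actual code from my program:

```python
# A toy liberty-counting heuristic in the spirit of PassiveAI.
# All names and the board layout here are illustrative, not the real program's.

EMPTY, BLACK, WHITE = 0, 1, 2
SIZE = 5  # small board for the example; the real program uses 9x9


def neighbours(x, y):
    """Yield the on-board orthogonal neighbours of (x, y)."""
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < SIZE and 0 <= ny < SIZE:
            yield nx, ny


def chain_and_liberties(board, x, y):
    """Flood-fill the chain containing (x, y); return its stones and liberties."""
    colour = board[y][x]
    stones, liberties, frontier = {(x, y)}, set(), [(x, y)]
    while frontier:
        cx, cy = frontier.pop()
        for nx, ny in neighbours(cx, cy):
            if board[ny][nx] == EMPTY:
                liberties.add((nx, ny))
            elif board[ny][nx] == colour and (nx, ny) not in stones:
                stones.add((nx, ny))
                frontier.append((nx, ny))
    return stones, liberties


def total_liberties(board, colour):
    """Sum the liberty counts of all of colour's chains."""
    seen, total = set(), 0
    for y in range(SIZE):
        for x in range(SIZE):
            if board[y][x] == colour and (x, y) not in seen:
                stones, libs = chain_and_liberties(board, x, y)
                seen |= stones
                total += len(libs)
    return total


def passive_move(board, colour):
    """Greedy PassiveAI-style choice: the empty point maximising our liberties."""
    best, best_score = None, -1
    for y in range(SIZE):
        for x in range(SIZE):
            if board[y][x] != EMPTY:
                continue
            board[y][x] = colour          # try the move
            score = total_liberties(board, colour)
            board[y][x] = EMPTY           # undo it
            if score > best_score:
                best, best_score = (x, y), score
    return best
```

An aggressive variant would be the mirror image: try each move and pick the one that minimises `total_liberties(board, opponent)` instead.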

I had some fun playing them against one another. Perhaps unsurprisingly, PassiveAgressiveAI was the best and AgressiveAI was the worst, because it often played into atari, so its opponent's best move was then to immediately take that stone.