Watching Google's AI Play Go

“It is not a human move,” Fan Hui said. We are teaching computers how to think.

At first, Fan Hui thought the move was rather odd. But then he saw its beauty.

“It’s not a human move. I’ve never seen a human play this move,” he says. “So beautiful.” It’s a word he keeps repeating. Beautiful. Beautiful. Beautiful.

The move in question was the 37th in the second game of the historic Go match between Lee Sedol, one of the world’s top players, and AlphaGo, an artificially intelligent computing system built by researchers at Google. Inside the towering Four Seasons hotel in downtown Seoul, the game was approaching the end of its first hour when AlphaGo instructed its human assistant to place a black stone in a largely open area on the right-hand side of the 19-by-19 grid that defines this ancient game. And just about everyone was shocked.

It’s at once exciting and terrifying to see how we are able to teach machines how to think.

The most remarkable feat, though, is that even as we use algorithms to emulate the way we learn, machines are developing their own way of thinking. Fan Hui’s remark that it was not a human move seems like an obvious statement, but it outlines an amazing reality: the current state of the machine’s mind emerged from a development path no human directly scripted. AlphaGo was bootstrapped on records of human expert games, then refined itself through millions of games of self-play.
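To make the self-play idea concrete: AlphaGo’s real pipeline pairs deep neural networks with Monte Carlo tree search, but the core loop of improving purely by playing against yourself can be sketched on something far smaller. Here is a toy of my own (not anything from DeepMind’s code) that teaches itself tic-tac-toe with a tabular value update from final game outcomes:

```python
# A toy illustration of self-play learning, not AlphaGo's actual method.
# The program improves using nothing but the games it plays against itself.
import random
from collections import defaultdict

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

Q = defaultdict(float)          # (board, move) -> estimated value
EPSILON, ALPHA = 0.1, 0.5       # exploration rate, learning rate

def choose(board, moves):
    if random.random() < EPSILON:                    # explore
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(board, m)])   # exploit

def self_play_game():
    board, history, player = " " * 9, [], "X"
    while True:
        moves = [i for i, c in enumerate(board) if c == " "]
        move = choose(board, moves)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        w = winner(board)
        if w or " " not in board:
            # Credit every move with the final result: +1 for the
            # winner's moves, -1 for the loser's, 0 for a draw.
            for state, m, p in history:
                reward = 0.0 if w is None else (1.0 if p == w else -1.0)
                Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
            return
        player = "O" if player == "X" else "X"

for _ in range(50_000):         # learn from self-play alone
    self_play_game()
```

Nothing in that loop ever consults a human game, which is exactly why the strategies it discovers owe nothing to how we would play.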

It inevitably reminded me of PlaNet, the deep-learning system (also developed at Google) that worked out the location of almost any photo using only the pixels it contained. It plainly beat humans at guessing photo locations, but it didn’t rely on the cues we are used to. Instead, as its creators put it, “we think PlaNet has an advantage over humans because it has seen many more places than any human can ever visit and has learned subtle cues of different scenes that are even hard for a well-traveled human to distinguish.”
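PlaNet’s key design choice was to turn geolocation into a classification problem: divide the earth’s surface into cells and have the network predict which cell a photo was taken in. Here is a rough sketch of that idea, assuming PyTorch and a uniform lat/lng grid in place of PlaNet’s adaptive S2 cells; the tiny network and the fake batch are placeholders, not the real model:

```python
# Sketch of geolocation-as-classification: predict a geographic cell,
# not raw coordinates. The real PlaNet used adaptively sized S2 cells
# and a far larger Inception network trained on millions of photos.
import torch
import torch.nn as nn

N_LAT, N_LNG = 18, 36                 # 10-degree cells -> 648 classes
N_CELLS = N_LAT * N_LNG

def latlng_to_cell(lat, lng):
    """Map a coordinate to the index of the grid cell containing it."""
    row = min(int((lat + 90) // 10), N_LAT - 1)
    col = min(int((lng + 180) // 10), N_LNG - 1)
    return row * N_LNG + col

def cell_to_latlng(cell):
    """Return the center of a grid cell as a (lat, lng) guess."""
    row, col = divmod(cell, N_LNG)
    return row * 10 - 90 + 5, col * 10 - 180 + 5

model = nn.Sequential(                # toy CNN, purely illustrative
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, N_CELLS),  # logits over geographic cells
)

# One training step on a fake batch of 64x64 RGB photos with known cells.
images = torch.randn(8, 3, 64, 64)
cells = torch.tensor([latlng_to_cell(48.9, 2.3)] * 8)  # e.g. Paris
loss = nn.functional.cross_entropy(model(images), cells)
loss.backward()

# At inference time, the most probable cell becomes the location guess.
guess = cell_to_latlng(model(images[:1]).argmax().item())
```

Framed this way, the network is free to latch onto whatever pixel-level cues separate one cell from another, which is how it ends up reading signals no human traveler would think to use.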

First published on March 12, 2016