Artificial intelligence experts said it wouldn’t happen in 2016 — even 2030 would be a stretch. But it did.
In March, AlphaGo, a program from Google’s AI research company, DeepMind, defeated 18-time world champion Go player Lee Sedol, 4-1, in a historic showdown in South Korea.
Go is an ancient Chinese board game that’s elegantly simple, yet wickedly difficult to master because of the astronomical number of possible move sequences on its 19-by-19 grid. AI researchers have pushed for years to build an algorithm that could handle this complexity and topple human Go champions.
AlphaGo pairs a traditional Monte Carlo tree search (identifying an optimal move by playing the remainder of the game over and over in its “imagination”) with two kinds of artificial neural networks: one that predicts the next move and another that evaluates the winner of each board position.
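To make that pairing concrete, here is a minimal sketch of a Monte Carlo tree search guided by a policy and a value function. It is not DeepMind’s code: the board is a toy 5-by-5 grid, and `policy_fn` and `value_fn` are stand-in functions (uniform priors and random evaluations) that merely occupy the roles AlphaGo’s trained neural networks would play.

```python
import math
import random

SIZE = 5  # toy board size; AlphaGo works on the full 19x19 grid

def legal_moves(state):
    """All empty points on the toy board (state is a tuple of played points)."""
    occupied = set(state)
    return [(r, c) for r in range(SIZE) for c in range(SIZE) if (r, c) not in occupied]

def policy_fn(state, moves):
    """Stand-in policy network: uniform prior over legal moves."""
    p = 1.0 / len(moves)
    return {m: p for m in moves}

def value_fn(state):
    """Stand-in value network: random estimate of winning chances in [-1, 1]."""
    return random.uniform(-1.0, 1.0)

class Node:
    def __init__(self, state, prior):
        self.state = state
        self.prior = prior      # prior probability from the policy function
        self.children = {}      # move -> Node
        self.visits = 0
        self.value_sum = 0.0

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.0):
    """Pick the child balancing its average value against an exploration bonus."""
    def score(move, child):
        u = c_puct * child.prior * math.sqrt(node.visits) / (1 + child.visits)
        return child.value() + u
    return max(node.children.items(), key=lambda mc: score(*mc))

def search(root, n_simulations=200):
    """Run the search: select a leaf, expand it with the policy,
    evaluate it with the value function, and back the result up the tree."""
    for _ in range(n_simulations):
        node, path = root, [root]
        # 1. Selection: walk down to a leaf.
        while node.children:
            move, node = select_child(node)
            path.append(node)
        # 2. Expansion: add children weighted by the policy's priors.
        moves = legal_moves(node.state)
        if moves:
            for m, p in policy_fn(node.state, moves).items():
                node.children[m] = Node(node.state + (m,), prior=p)
        # 3. Evaluation: score the leaf position with the value function.
        leaf_value = value_fn(node.state)
        # 4. Backup: propagate the score up the path, flipping sign each ply.
        for i, n in enumerate(reversed(path)):
            n.visits += 1
            n.value_sum += leaf_value if i % 2 == 0 else -leaf_value
    # The most-visited move at the root is the one the program plays.
    return max(root.children.items(), key=lambda mc: mc[1].visits)[0]

if __name__ == "__main__":
    print("chosen move:", search(Node(state=(), prior=1.0)))
```

In the real system, replacing the stand-ins with trained networks is what lets the search spend its simulations on promising branches instead of exploring the board uniformly.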