Artificial intelligence experts said it wouldn’t happen in 2016 — even 2030 would be a stretch. But it did.
In March, AlphaGo, a program from Google’s AI research company, DeepMind, defeated 18-time world champion Go player Lee Sedol, 4-1, in a historic showdown in South Korea.
Go is an ancient Chinese board game that’s elegantly simple in its rules, yet wickedly difficult to master: its 19-by-19 grid admits more possible board positions than there are atoms in the observable universe. AI researchers had pushed for years to build an algorithm that could handle this complexity and topple human Go champions.
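For a rough sense of scale, here is a back-of-the-envelope estimate using figures commonly cited in the game-AI literature (roughly 250 legal moves per turn over a 150-move game for Go, versus about 35 moves over 80 turns for chess); the numbers are loose averages, not exact counts:

```python
import math

# Approximate game-tree size: branching_factor ** game_length.
# Work in log10 to avoid astronomically large integers.
go_tree = 150 * math.log10(250)    # Go: about 10^360 possible games
chess_tree = 80 * math.log10(35)   # chess: about 10^124 possible games

print(f"Go:    ~10^{go_tree:.0f}")
print(f"Chess: ~10^{chess_tree:.0f}")
```

No search algorithm can enumerate numbers like these, which is why the brute-force techniques that conquered chess never cracked Go.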
AlphaGo pairs a traditional Monte Carlo tree search (which evaluates a candidate move by playing out the rest of the game over and over in its “imagination”) with two kinds of artificial neural networks: a policy network that predicts the most promising next move and a value network that estimates which player will win from a given board position.
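As a concrete illustration of how those pieces fit together, here is a minimal Python sketch of a policy- and value-guided tree search in the spirit of AlphaGo’s. It is a simplified sketch under loose assumptions, not DeepMind’s implementation; `policy_net`, `value_net`, `legal_moves`, and `play` are hypothetical stand-ins the caller would supply:

```python
import math

class Node:
    """One board position in the search tree."""
    def __init__(self, prior):
        self.prior = prior      # policy network's probability for the move leading here
        self.visits = 0
        self.value_sum = 0.0    # running total of value-network estimates
        self.children = {}      # maps a move to its child Node

    def mean_value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.0):
    """Pick the child that best balances a high value estimate (exploitation)
    against a high prior and a low visit count (exploration)."""
    def score(item):
        _, child = item
        explore = c_puct * child.prior * math.sqrt(node.visits) / (1 + child.visits)
        return child.mean_value() + explore
    return max(node.children.items(), key=score)

def search(root_state, policy_net, value_net, legal_moves, play, n_simulations=1600):
    """Simulate n_simulations games in 'imagination', then return the
    most-visited move at the root."""
    root = Node(prior=1.0)
    for _ in range(n_simulations):
        node, state, path = root, root_state, [root]
        # Selection: descend toward a promising, little-explored position.
        while node.children:
            move, node = select_child(node)
            state = play(state, move)
            path.append(node)
        # Expansion: the policy network proposes moves from the leaf.
        for move, p in policy_net(state, legal_moves(state)):
            node.children[move] = Node(prior=p)
        # Evaluation: the value network estimates who wins from here.
        v = value_net(state)
        # Backup: credit every node on the path with the estimate.
        # (A full two-player version flips the sign of v at alternating depths.)
        for n in path:
            n.visits += 1
            n.value_sum += v
    best_move, _ = max(root.children.items(), key=lambda item: item[1].visits)
    return best_move
```

AlphaGo’s real search is more elaborate (its exploration rule is carefully tuned, and it blends the value network’s estimate with the results of fast rollouts), but this select-expand-evaluate-backup loop is the core of the method.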
Before facing a human opponent, AlphaGo trained its neural networks on 30 million moves from games played by human experts, and then discovered new strategies by playing thousands of games against itself.
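Sketched in code, that two-stage recipe might look like the following (a loose PyTorch-style illustration, not DeepMind’s code; `policy_net`, the data batches, and the game outcome are hypothetical stand-ins):

```python
import torch
import torch.nn.functional as F

def supervised_step(policy_net, optimizer, boards, expert_moves):
    """Stage 1: imitation. Train the policy network to predict the move a
    human expert actually played from each board position."""
    logits = policy_net(boards)                   # (batch, 361) move scores
    loss = F.cross_entropy(logits, expert_moves)  # penalize wrong predictions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def self_play_step(policy_net, optimizer, move_log_probs, outcome):
    """Stage 2: reinforcement. After a game of self-play, nudge the network
    toward its own moves if it won (outcome=+1), away if it lost (outcome=-1)."""
    loss = -outcome * torch.stack(move_log_probs).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A separate value network was then trained on positions from these self-play games to predict the eventual winner, and it is that network the tree search consults when it evaluates a board.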
During the showdown with Sedol, AlphaGo made a move no professional player would have considered. Move 37 of the second game was so unorthodox that Sedol left the room to regain his composure. Was it a lucky mistake? Or had AlphaGo advanced beyond human understanding of the game?