One of the more famous subsidiaries of Google is its artificial intelligence unit, DeepMind. This London-based company took the world of AI research by storm in 2013 when it built a machine that learnt to play classic video games, such as Pong and Breakout, and quickly achieved superhuman performance.
That heralded a series of impressive advances. Chief among these was the AlphaGo machine that reached superhuman levels at the ancient Chinese game of Go. More recently, its AlphaFold machine outperformed all other approaches in tackling the long-standing problem of protein folding.
So an interesting question is what problem the company is turning to next.
Now we have an answer. DeepMind has created an intelligent agent that has learnt how to play soccer: not just high-level skills such as tackling, passing and playing in a team, but how to control a fully articulated human body so that it performs these actions like a human. The result is an impressive simulation of soccer that is reminiscent of human players, albeit naïve and ungainly ones.
The approach is described by Siqi Liu and colleagues at DeepMind. The first task is to give the intelligent agent full control over a humanoid figure with the joints and articulation of a real human: 56 degrees of freedom in all.
The agent learns to control this humanoid in a simulated environment with ordinary gravity and other laws of physics built in. It does this by learning to copy the movement of real footballers captured via standard motion capture techniques. These movements include running, changing direction, kicking and so on. The AI humanoids then practise mid-level skills such as dribbling, following the ball and shooting. Finally, the humanoids play in two-versus-two games in which the winning team is the one that scores first.
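This staged approach amounts to a training curriculum, from motion imitation through drills to full matches. The toy sketch below illustrates the shape of such a pipeline; all names and the skill lists are illustrative assumptions for this article, not the paper's actual code or API.

```python
# Illustrative sketch (not DeepMind's code) of a three-stage training
# curriculum for a simulated humanoid footballer. The class and skill
# names are assumptions made for illustration only.

from dataclasses import dataclass, field

@dataclass
class HumanoidAgent:
    """Toy stand-in for a policy controlling a 56-degree-of-freedom humanoid."""
    skills: list = field(default_factory=list)

    def learn(self, skill: str) -> None:
        # A real agent would run reinforcement learning here; the sketch
        # just records which skill was trained at each stage.
        self.skills.append(skill)

def train_curriculum(agent: HumanoidAgent) -> HumanoidAgent:
    # Stage 1: imitate motion-capture clips of real footballers.
    for clip in ["run", "change_direction", "kick"]:
        agent.learn(f"mocap:{clip}")
    # Stage 2: practise mid-level ball skills.
    for drill in ["dribble", "follow_ball", "shoot"]:
        agent.learn(f"drill:{drill}")
    # Stage 3: two-versus-two self-play; the first team to score wins.
    agent.learn("2v2_self_play")
    return agent

agent = train_curriculum(HumanoidAgent())
print(agent.skills)
```

The key design idea the sketch captures is ordering: low-level motor control is learnt first, so that later stages can build tactics on top of naturalistic movement rather than learning everything at once.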
One of the impressive outcomes from this process is that the humanoids learn tactics of various kinds. “They develop awareness of others and learn to play as a team, successfully bridging the gap between low-level motor control at a time scale of milliseconds, and coordinated goal-directed behaviour as a team at the timescale of tens of seconds,” say Liu and colleagues. Footage of these games, along with the way the players learn, is available online.
What makes this work stand out is that DeepMind takes on these challenges together, whereas in the past they have usually been tackled separately. That’s important because the emergent behaviour of the players depends crucially on their agility and naturalistic movement, which shows the advantage of combining these approaches. “The results demonstrate that artificial agents can indeed learn to coordinate complex movements in order to interact with objects and achieve long-horizon goals in cooperation with other agents,” say the team.
Interestingly, the players learn to pass but don’t seem to learn how to run into space. Perhaps that’s because this often requires players to run away from the ball. Without that ability, the patterns of play are reminiscent of those of young children, who tend to chase the ball in a herd.
Older children develop a sense of space, and adult players spend large portions of the game running into space or closing down space that opposition players could run into, all without the ball.
But DeepMind’s approach is in its infancy and has the potential to advance significantly. The obvious next step is to play games with larger teams and to see what behaviour emerges. “Larger teams might also lead to the emergence of more sophisticated tactics,” say the researchers.
Robot Strategies
DeepMind has also significantly simplified the rules of football: no throw-ins, no penalties, no dedicated goalkeepers. Adding these elements will require further training for the AI humanoids, but it may also lead to the development of different playing styles.
Why would DeepMind be interested in such a seemingly frivolous pursuit? The answer is probably to better understand how to use AI to solve real-world problems involving complex movement strategies. “We believe that simulation-based studies can help us understand aspects of the computational principles that may eventually enable us to generate similar behaviours in the real world,” say Liu and co.
And there may be some prizes to be had along the way. First is the RoboCup project, in which teams of humanoid robots play soccer against each other. The games are slow, stilted and comical. So it’s not hard to imagine DeepMind’s simulation becoming a powerful force in robotic football.
Then there is the potential for gaming. It may be possible to give humans some control over the behaviour of the players, rather like the current FIFA soccer video games. It may even be possible to incorporate humans into these simulated games using motion capture technology.
Finally, there is the possibility that 11-a-side simulations might become more advanced than human games. AlphaGo discovered entirely new playing strategies in Go, a game that has been played for centuries. Is it so hard to imagine DeepMind discovering new tactics and game plans for football? Given its record in other areas, it would be foolish to rule it out.
Ref: From Motor Control to Team Play in Simulated Humanoid Football: arxiv.org/abs/2105.12196