The researchers trained physically simulated AI agents to play two-vs-two football games, in an experiment that sought to advance coordination between AI systems and offer new pathways toward building artificial general intelligence (AGI) with human-level capabilities.
“Our agents acquired skills including agile locomotion, passing, and division of labour as demonstrated by a range of statistics,” DeepMind researchers wrote in a blog post.
“The players exhibit both agile high-frequency motor control and long-term decision-making that involved anticipation of teammates’ behaviours, leading to coordinated team play.”
The players learned to jostle for the ball, play through balls to their teammates, and chip the ball over and tackle their opponents.
Separate simulations saw the humanoids learn how to perform complex tasks with their arms, such as throwing and catching a ball.
With their findings from the digital realm, the DeepMind researchers were able to instruct humanoid and dog robots to walk and dribble a football in a “natural-looking and robust way”.
The research was detailed in a study, titled ‘From motor control to team play in simulated humanoid football’, published in the journal Science Robotics on Wednesday.
It describes how the AI agents were trained to imitate football-specific skills and were rewarded when their movements led to improved performance, such as scoring a goal. The task required motor control, long-horizon decision-making, and the ability to coordinate with other agents.
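The reward-driven training loop described above can be illustrated with a toy sketch. This is not DeepMind's code: their agents used motion-capture priors and full physics simulation, whereas here the "skills" (dribble, pass, shoot), their payoffs, and the learning rates are all hypothetical, chosen only to show how a policy-gradient update shifts an agent toward actions that earn reward.

```python
# Toy sketch (assumptions: three discrete skills with made-up payoffs;
# not DeepMind's actual training setup).
import math
import random

random.seed(0)

SKILLS = ["dribble", "pass", "shoot"]
# Hypothetical expected reward for each skill; shooting pays off most.
EXPECTED_REWARD = [0.3, 0.1, 0.9]

def softmax(prefs):
    """Turn preference scores into action probabilities."""
    exps = [math.exp(p) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

prefs = [0.0, 0.0, 0.0]   # one preference score per skill
baseline = 0.0            # running average reward, used to reduce variance
alpha = 0.1

for step in range(2000):
    probs = softmax(prefs)
    a = random.choices(range(len(SKILLS)), weights=probs)[0]
    # Noisy reward around the skill's (assumed) expected payoff.
    r = EXPECTED_REWARD[a] + random.gauss(0, 0.1)
    baseline += 0.01 * (r - baseline)
    # REINFORCE-style update: grad of log pi(a) w.r.t. pref_i under softmax.
    for i in range(len(SKILLS)):
        indicator = 1.0 if i == a else 0.0
        prefs[i] += alpha * (r - baseline) * (indicator - probs[i])

best = SKILLS[max(range(len(SKILLS)), key=lambda i: prefs[i])]
print(best)
```

After training, the agent's preferences concentrate on the most rewarding skill; the paper's setup works on the same principle, but over continuous motor commands and team-level rewards rather than three discrete choices.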
“We optimised teams of agents to play simulated football via reinforcement learning, constraining the solution space to that of plausible movements learned using human motion capture data,” the study states.
“The result is a team of coordinated humanoid football players that exhibit complex behavior at different scales, quantified by a range of analysis and statistics.”