Google has taken another leap in robotics, unveiling a table tennis-playing robot trained on immense amounts of data. The robot can play human opponents and gets better at the game over time.
“Achieving human-level speed and performance on real-world tasks is a north star for the robotics research community.” These are the opening remarks of a paper written by a team of Google scientists who helped create, train, and test the table tennis bot.
Google acknowledges that the field of robotics is advancing at an unprecedented pace; humanoids are already working in settings ranging from kitchens to BMW's factories. But one quality that robots have so far lacked, and that Google wants to add, is speed.
The new robot has just that: speed. It is still not at pro level, but it won 13 of the 29 table tennis matches it played, a 45% success rate, and a better record than many of us could manage.
“Even a few months back, we projected that realistically the robot may not be able to win against people it had not played before,” Pannag Sanketi told MIT Technology Review. “The system certainly exceeded our expectations. The way the robot outmaneuvered even strong opponents was mind-blowing.” Sanketi, who led the project, is a senior staff software engineer at Google DeepMind, the company's AI branch, so this research was ultimately as much about data sets and decision-making as about the actual performance of the paddle-wielding robot.
The system was trained on massive amounts of data about ball states in table tennis, including spin, speed, and position. It then used a set of cameras to respond to its opponent in real time, improvising with the help of its stored data and even improving its play on the fly.
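To make that pipeline concrete, here is a toy sketch of the general idea: match an observed ball state against stored examples and pick a response, then fold new observations back into the data. This is purely illustrative; the state features, stroke names, and nearest-neighbour lookup are assumptions for the sketch, not DeepMind's actual method.

```python
import math

# Illustrative "training data" (all values hypothetical):
# (speed m/s, spin rev/s, height m) -> stroke that worked for that state
TRAINING_DATA = [
    ((4.0, 10.0, 0.30), "topspin drive"),
    ((2.0, -8.0, 0.15), "backspin push"),
    ((6.0,  2.0, 0.40), "flat smash"),
]

def distance(a, b):
    """Euclidean distance between two ball-state vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def choose_stroke(ball_state, data=TRAINING_DATA):
    """Pick the stroke whose stored state is closest to the observed one."""
    return min(data, key=lambda pair: distance(pair[0], ball_state))[1]

def update(data, ball_state, stroke):
    """'Improve on the fly': record a newly observed state/stroke pair."""
    data.append((ball_state, stroke))

# A fast, high-spin ball is matched to the nearest stored example.
print(choose_stroke((4.2, 9.0, 0.28)))  # → topspin drive
```

The real system works with far richer state (full trajectories, opponent behavior) and learned policies rather than lookup tables, but the loop is the same: observe, match against experience, act, and add the result back to the experience pool.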
“I’m a big fan of seeing robot systems actually working with and around real humans, and this is a fantastic example of this,” Sanketi told MIT. “It may not be a strong player, but the raw ingredients are there to keep improving and eventually get there.”
The following video shows even more details of the bot in training and the various skills it was able to employ:
The research has been published in an arXiv paper.
Sources: MIT Technology Review, Google