Quadrupedal robots, which mimic the locomotion of four-legged animals, have gained popularity because they handle rough terrain and tight spaces well. While humanoid robots draw much of the attention, quadrupeds such as Boston Dynamics' Spot have proven impressive across a range of scenarios. Yet without hands, these robots struggle with delicate manipulation tasks.
To address this limitation, a team from Carnegie Mellon University, the University of Washington, and Google DeepMind developed the LocoMan add-on, which retrofits a quadrupedal robot's legs so the machine can both walk and manipulate objects with precision.
“Quadrupedal robots are versatile agents capable of performing locomotion and manipulation in complex environments,” Ding Zhao, an associate professor at Carnegie Mellon University and co-author of the research paper, told Tech Xplore. “Traditional designs typically incorporate top-mounted arms for manipulation tasks. However, these configurations may limit the robot’s payload, stability, and efficiency. We do not see a dog with an arm on the back in nature.”
Unlike previous attempts to equip quadrupedal robots with robotic arms, which compromised agility and mobility due to increased weight, the LocoMan system is lightweight, low-cost, and compatible with existing quadrupedal robots. By attaching manipulators to the calves of the robot, the system provides three degrees of freedom, allowing tasks such as picking up and placing objects without sacrificing the robot’s ability to traverse challenging terrain.
"We achieved 6D pose manipulation," highlighted Changyi Lin, a first-year Ph.D. student in Zhao's lab. Lin explained that LocoMan, operating under a Whole-Body Control (WBC) framework, transitions seamlessly across five operational modes.
Through a series of real-world experiments, the research team demonstrated LocoMan's capabilities, including opening doors, plugging electronics into sockets, and manipulating objects in confined spaces. The robot switches smoothly between operational modes, such as using its grippers individually or together for bimanual manipulation, walking as a quadruped, and manipulating objects while on the move.
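To make the mode-switching idea concrete, here is a minimal sketch of how a supervisor for such a system might be organized. The mode names, the rule that every transition passes through a neutral stance, and the `ModeController` class are all illustrative assumptions for this sketch, not details taken from the LocoMan paper, whose actual whole-body controller is far more involved.

```python
from enum import Enum, auto


class Mode(Enum):
    """Hypothetical operational modes; the paper's exact names may differ."""
    STANCE = auto()             # standing still, no active task
    LOCOMOTION = auto()         # quadrupedal walking
    LOCO_MANIPULATION = auto()  # manipulating an object while walking
    SINGLE_GRIPPER = auto()     # one leg-mounted gripper active
    DUAL_GRIPPER = auto()       # both grippers used bimanually


# Illustrative transition rule (an assumption, not from the paper):
# every mode change passes through STANCE, giving the whole-body
# controller a chance to redistribute contact forces first.
ALLOWED = {
    Mode.STANCE: set(Mode) - {Mode.STANCE},
}


class ModeController:
    def __init__(self) -> None:
        self.mode = Mode.STANCE

    def switch(self, target: Mode) -> None:
        """Change mode, rejecting transitions that skip the stance phase."""
        if target == self.mode:
            return
        allowed = ALLOWED.get(self.mode, {Mode.STANCE})
        if target not in allowed:
            raise ValueError(
                f"cannot go {self.mode.name} -> {target.name}; "
                "return to STANCE first"
            )
        self.mode = target
```

A guard like this keeps task-level code honest: a planner that tries to jump straight from walking to bimanual manipulation fails fast instead of commanding an infeasible whole-body posture.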
The potential uses of LocoMan are extensive, spanning industrial settings to rescue operations in disaster areas. Incorporating computer vision and machine learning could further expand the robot's abilities, allowing it to interpret what it sees and follow spoken instructions from people, making interactions more natural.
The work frames intelligent robots as complements to human capabilities rather than replicas of them.
“Our research offers a different perspective on intelligent robots. Rather than replicating humans with a similar morphology, we would like to provide a complementary robot that can do what humans may not want to do,” remarked Zhao. “LocoMan makes it possible for quadrupedal robots to perform complex manipulation tasks in narrow spaces.”
“The integration of vision-language models is anticipated to revolutionize how LocoMan generates actions,” Zhao added. “This could be achieved by interpreting visual perception of environments and processing verbal instructions from humans, enabling a more intuitive and seamless interaction.”
The team's findings were published on the preprint server arXiv, and the researchers plan to continue refining LocoMan's capabilities and testing it in a wider range of settings to address real-world challenges.