US robotics firm Figure AI has unveiled a breakthrough in humanoid robot locomotion. A recently released video showcases Figure 02 robots moving with a fluid, human-like gait, a clear departure from the rigid, mechanical walking patterns commonly seen in humanoid robots.
Figure AI attributes this advancement to a reinforcement learning (RL) training process that allows the robots to learn proprioceptive locomotion strategies entirely in simulation. The control policy is trained in a high-fidelity physics simulator, which compresses years of walking experience into a few hours of training. The process leverages domain randomization to enable zero-shot transfer from the virtual environment to real-world hardware.
“This enables the fleet of Figure robots to quickly learn robust, proprioceptive locomotion strategies and allows for rapid engineering iteration cycles,” the company stated.
The RL training method involves running thousands of virtual humanoids in parallel, each with varied physical parameters, across a range of simulated terrains and real-world challenges. These include changes in surface textures, actuator dynamics, and disturbances such as slips and unexpected impacts.
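The idea behind this kind of randomization can be illustrated with a short sketch. The snippet below is a hypothetical illustration, not Figure AI's actual training code: every parallel simulated humanoid receives its own randomly perturbed physics parameters, so a single shared policy must learn to walk under all of them. The parameter names and ranges are assumptions chosen for clarity.

```python
# Illustrative sketch of domain randomization (hypothetical parameters and ranges).
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_randomized_params():
    """Draw one set of perturbed simulation parameters for a single virtual humanoid."""
    return {
        "ground_friction": rng.uniform(0.4, 1.2),       # slippery to grippy surfaces
        "payload_mass_kg": rng.uniform(0.0, 5.0),       # unmodeled mass on the torso
        "motor_strength_scale": rng.uniform(0.8, 1.2),  # actuator strength variation
        "joint_damping_scale": rng.uniform(0.7, 1.3),   # actuator dynamics variation
        "push_force_n": rng.uniform(0.0, 60.0),         # random external impacts
        "terrain": rng.choice(["flat", "slope", "rough", "low_obstacles"]),
    }

# Thousands of humanoids are simulated in parallel, each with its own draw.
NUM_ENVS = 4096
env_params = [sample_randomized_params() for _ in range(NUM_ENVS)]
print(env_params[0])
```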
By training a single neural network policy to handle all of these conditions, Figure AI avoids any additional fine-tuning once the model is deployed on physical robots. This approach, known as zero-shot transfer, means the learned walking behavior carries over directly to the real hardware, improving adaptability and robustness.
The latest iteration of Figure 02 exhibits natural human-like movements, including heel strikes, toe-offs, and synchronized arm swings. The RL controller optimizes for multiple factors, such as velocity tracking, energy efficiency, and resilience against disturbances.
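To make that multi-objective reward structure concrete, here is a minimal sketch of how such terms are commonly combined in RL walking controllers. Figure AI has not published its exact reward, so the term definitions and weights below are illustrative assumptions.

```python
# Hedged sketch of a locomotion reward combining the factors described above.
# Weights and term definitions are assumptions, not Figure AI's published values.
import numpy as np

def locomotion_reward(commanded_vel, actual_vel, joint_torques, joint_vels, base_upright):
    """Combine velocity tracking, energy efficiency, and a stability term."""
    # Velocity tracking: reward closeness to the commanded walking velocity.
    vel_error = np.linalg.norm(commanded_vel - actual_vel)
    r_tracking = np.exp(-4.0 * vel_error**2)

    # Energy efficiency: penalize mechanical power (|torque * joint velocity|).
    r_energy = -1e-3 * float(np.sum(np.abs(joint_torques * joint_vels)))

    # Disturbance resilience proxy: keep the torso upright (1.0 upright, 0.0 fallen).
    r_upright = 0.5 * float(base_upright)

    return r_tracking + r_energy + r_upright
```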
In the demonstration video, ten Figure 02 robots walk using the same RL policy network, showing scalability and consistency across multiple units. This uniform approach eliminates per-robot manual tuning, facilitating large-scale deployment.
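The shared-controller claim can be sketched abstractly as follows. The class, observation size, and action mapping here are hypothetical placeholders, not Figure AI's interfaces; the point is only that identical weights run on every unit with no per-robot calibration step.

```python
# Illustrative sketch of uniform deployment: one frozen policy, many robots.
import numpy as np

class FrozenPolicy:
    """Stand-in for the trained RL network; identical weights on every unit."""
    def __init__(self, weights):
        self.weights = weights

    def act(self, observation):
        # A real controller would run a neural network forward pass here.
        return np.tanh(self.weights @ observation)

shared_weights = np.random.default_rng(0).standard_normal((12, 48))
policy = FrozenPolicy(shared_weights)

# Ten robots run the very same policy; no per-unit fine-tuning is performed.
for robot_id in range(10):
    proprioceptive_obs = np.zeros(48)  # joint positions/velocities, IMU, etc. (placeholder)
    joint_targets = policy.act(proprioceptive_obs)
```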

As Figure AI gears up for production and wider deployment in 2025, it positions itself as a strong contender in the humanoid robotics sector, competing with Tesla’s Optimus, Agility Robotics’ Digit, and Chinese firms like UBTech Robotics and Unitree Robotics.
Beyond locomotion, Figure AI has also introduced Helix, a Vision-Language-Action (VLA) model designed to enable humanoid robots to understand and execute complex commands using natural language. By integrating Helix with its RL-powered locomotion technology, the company is expanding the potential of humanoid robots in industries ranging from logistics to home assistance.