Agibot has unveiled its next-generation industrial humanoid robot, the Agibot G2, designed to redefine automation in commercial and industrial settings. The new model combines advanced motion mechanics, multimodal AI interaction, and full autonomy to handle a diverse range of operations, from logistics and factory work to customer engagement and guided tours.
In a company statement, Dr. Yao Maoqing, Partner and Senior Vice President at Agibot, emphasized the purpose behind this creation: “We envision Agibot G2 relieving humans from repetitive, labor-intensive, and safety-risk-prone work, enabling people to focus on more creative tasks.”
The G2 represents a major step forward from the company’s earlier prototype introduced in late 2023. It incorporates a new generation of hardware, making it one of the most capable industrial-grade embodied robots available today. The machine uses high-performance joint actuators and an array of sensors for precise and safe navigation, featuring full-scene, omnidirectional obstacle avoidance. One of its key design upgrades is a 3-degree-of-freedom waist, allowing it to bend, twist, and sway with lifelike motion. This flexibility is enhanced by what Agibot describes as the world’s first cross-shaped wrist force-controlled arm, equipped with high-precision torque sensors that detect and adjust to external forces in real time. The result is smoother, more natural movement, delicate enough to handle fragile objects such as a raw egg without breaking it.
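Agibot has not published details of its wrist controller, but the behavior described, sensing external force and adjusting in real time so that contact stays gentle, is characteristic of admittance control. The sketch below is a toy illustration of that general technique; the function names, gains, and the spring model of the gripped object are all hypothetical, not Agibot's actual implementation.

```python
# Toy admittance-control loop: nudge the commanded position so the
# measured contact force tracks a gentle target force.
# All names and numbers are illustrative, not Agibot's API.

def admittance_step(force_reading, target_force, position, gain=0.0002):
    """Move the commanded position in proportion to the force error."""
    error = target_force - force_reading   # N
    return position + gain * error         # compliant position correction

position = 0.0       # m, commanded fingertip position
contact_at = 0.01    # m, surface of the (simulated) fragile object
stiffness = 2000.0   # N/m, object modeled as a simple spring
target = 1.0         # N, gentle grip force

for _ in range(200):
    penetration = max(0.0, position - contact_at)
    measured = stiffness * penetration     # simulated force sensor
    position = admittance_step(measured, target, position)

# The loop settles where stiffness * penetration ≈ target, i.e. the
# grip force converges to about 1 N instead of crushing the object.
```

Because the position command is driven by force error rather than a fixed trajectory, the same loop that presses firmly on a rigid workpiece presses only lightly on a compliant or fragile one.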
Agibot has designed the G2 for uninterrupted operation with a dual-battery, hot-swappable system and autonomous charging capabilities, ensuring continuous productivity around the clock. Its rapid deployment toolchain also makes setup simple, allowing non-specialists to configure and deploy the robot quickly in industrial settings.
Beyond its hardware, the G2 stands out for its intelligence. It is powered by Agibot’s proprietary AI architecture built on the GO-1 and GE-1 models. The GO-1 functions as a three-layer “brain,” combining a Vision-Language Model for perception, a Latent Planner for task sequencing, and an Action Expert for execution. This enables the robot to understand a single instruction and autonomously complete complex, multi-step tasks. Complementing this, the GE-1 model adds predictive reasoning, allowing the G2 to virtually “rehearse” its actions before performing them in real life.
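The three-layer structure attributed to GO-1, a perception layer grounding language in the scene, a planner expanding one instruction into subtasks, and an execution layer emitting low-level commands, can be illustrated as a simple pipeline. Everything below is a stand-in sketch; the classes and stubbed behaviors are hypothetical and bear no relation to Agibot's actual models.

```python
# Illustrative three-layer perceive/plan/act pipeline in the spirit of
# the described GO-1 architecture. All layers are toy stubs.

from dataclasses import dataclass

@dataclass
class Percept:
    objects: list     # what the perception layer identified
    instruction: str  # the natural-language command

def vision_language_model(image, instruction):
    """Layer 1 (stub): ground the instruction in the observed scene."""
    return Percept(objects=["parcel", "shelf"], instruction=instruction)

def latent_planner(percept):
    """Layer 2 (stub): expand one instruction into ordered subtasks."""
    if "move" in percept.instruction:
        return [("grasp", percept.objects[0]),
                ("navigate", percept.objects[1]),
                ("place", percept.objects[0])]
    return []

def action_expert(plan):
    """Layer 3 (stub): turn each subtask into an executable command."""
    return [f"{verb}:{target}" for verb, target in plan]

commands = action_expert(latent_planner(
    vision_language_model(image=None,
                          instruction="move the parcel to the shelf")))
# commands == ["grasp:parcel", "navigate:shelf", "place:parcel"]
```

The point of the layering is that a single high-level instruction enters at the top and a sequence of concrete motor commands exits at the bottom, with no human decomposing the task in between.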
All this computing power runs on the NVIDIA Jetson Thor T5000 platform, capable of delivering up to 2070 TFLOPS (FP4) of processing performance. This setup gives the G2 near-instant decision-making, with latency under ten milliseconds, allowing it to process visual, spatial, and motion data in real time without depending on cloud connectivity. Developers can also test AI models in virtual environments before deploying them, drastically reducing training and iteration time.
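GE-1's described ability to "rehearse" actions before performing them is, in general terms, model-based prediction: roll candidate actions through a learned forward model and execute the one with the best predicted outcome. The sketch below shows only that generic idea with a toy one-dimensional world model; none of it reflects GE-1's actual design.

```python
# Toy "rehearsal" loop: simulate each candidate action in a forward
# model, then pick the one whose predicted outcome is closest to the
# goal. The dynamics and cost here are deliberately trivial stand-ins.

def forward_model(state, action):
    """Toy dynamics: each action shifts the state by a fixed amount."""
    return state + action

def cost(state, goal):
    """Distance between a predicted state and the goal."""
    return abs(goal - state)

def rehearse_and_pick(state, goal, candidate_actions):
    """Mentally try every candidate, return the cheapest one."""
    return min(candidate_actions,
               key=lambda a: cost(forward_model(state, a), goal))

best = rehearse_and_pick(state=0.0, goal=2.0,
                         candidate_actions=[-1.0, 0.5, 1.5, 3.0])
# best == 1.5, the action whose predicted outcome lands nearest the goal
```

Rehearsing in a model costs only computation, which is why a robot with this capability can discard a bad action before it ever moves a joint.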
The G2 has already undergone extensive testing, passing more than 130 component and environmental trials, including heat, cold, and static resistance tests. It has been proven in multiple live industrial applications, from collaborating on automotive parts assembly to performing delicate electronics production tasks like RAM installation within an hour of AI-assisted training. In logistics settings, the robot’s dexterous OmniHand enables it to handle parcels of different shapes and materials while navigating factory floors autonomously.
Agibot’s engineers also focused on human-like interaction. The G2 uses expressive gestures and 360-degree perception for safe engagement with people. With a built-in SDK for customization, the robot can be tailored for a variety of needs, from manufacturing and warehousing to security, research, and education. Following successful pilot programs, the G2 is already being integrated into automotive and consumer electronics production lines.

