Pune, India | January 06, 2026
NVIDIA has introduced a new series of physical AI models aimed at redefining robotics development worldwide. The announcement, made at CES 2026 in Las Vegas, highlights the company’s vision for intelligent autonomous machines: models that let developers build robots able to perceive, reason, and perform complex tasks reliably.
At the heart of the release are open models and frameworks that accelerate robot learning while reducing development time. The physical AI models, including Cosmos Transfer 2.5, Cosmos Predict 2.5, Cosmos Reason 2, and Isaac GR00T N1.6, are available through platforms such as Hugging Face, enabling global collaboration and letting engineers experiment, train, and deploy robots faster than traditional methods allowed.
These models let robots generate synthetic data, plan full-body actions, and interpret complex environments autonomously, so teams can focus on innovation rather than repetitive coding and iterate faster.
Leading robotics companies have already integrated NVIDIA’s technology. Boston Dynamics, Caterpillar, Franka Robotics, Humanoid, LG Electronics, and NEURA Robotics demonstrated solutions built on the physical AI models, with robots performing tasks from industrial assembly to household assistance with greater intelligence and reliability.
NVIDIA CEO Jensen Huang called the launch a “ChatGPT moment for robotics,” explaining that the models not only perceive their surroundings but also reason and act. The combination of Jetson processors, CUDA software, Omniverse simulation tools, and the open physical AI models provides a full-stack solution for building advanced robots efficiently.
The Jetson T4000 module, powered by the Blackwell architecture, delivers a fourfold gain in AI compute efficiency and will serve as the central processor in robots for industrial, medical, and home-assistance applications.
Alongside the models, NVIDIA released frameworks such as Isaac Lab Arena and OSMO to unify training across cloud and edge environments. Isaac Lab Arena provides simulation benchmarking so developers can evaluate machine behavior at scale before deployment, while OSMO orchestrates synthetic data generation, model training, and testing.
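The article does not detail OSMO’s actual interface, but the workflow it describes, chaining data generation, training, and evaluation stages, can be sketched generically in Python. Every name below is illustrative, not part of NVIDIA’s APIs:

```python
# Illustrative sketch of a data-gen -> train -> test pipeline, in the
# spirit of what an orchestrator like OSMO automates. None of these
# names come from NVIDIA's actual OSMO API.
from typing import Callable

Stage = Callable[[dict], dict]

def run_pipeline(stages: list[Stage], state: dict) -> dict:
    """Run each stage in order, threading a shared state dict through."""
    for stage in stages:
        state = stage(state)
    return state

def generate_synthetic_data(state: dict) -> dict:
    # Stand-in for simulation-based data generation (e.g. Cosmos models).
    state["dataset"] = [{"obs": i, "action": i % 2} for i in range(100)]
    return state

def train_model(state: dict) -> dict:
    # Stand-in for a training job; here we just "learn" a majority action.
    actions = [ex["action"] for ex in state["dataset"]]
    state["model"] = max(set(actions), key=actions.count)
    return state

def evaluate(state: dict) -> dict:
    # Stand-in for simulation benchmarking before deployment.
    hits = sum(ex["action"] == state["model"] for ex in state["dataset"])
    state["accuracy"] = hits / len(state["dataset"])
    return state

result = run_pipeline([generate_synthetic_data, train_model, evaluate], {})
```

The point of such an orchestrator is that each stage only reads and writes shared state, so the same pipeline definition can run on a laptop, a cloud cluster, or an edge device.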
Opening physical AI tools to developers has sparked global interest. NVIDIA’s partnership with Hugging Face integrates the models into LeRobot, the open-source robotics framework, putting powerful tools in the hands of millions of AI and robotics builders and accelerating innovation in autonomous systems.
The physical AI models also address a major robotics challenge: data scarcity. The Cosmos models can simulate diverse real-world scenarios, greatly expanding training datasets without the cost or risk of physical testing.
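The underlying idea, randomizing simulated conditions to multiply a small set of seed scenarios into a large varied dataset, can be sketched with plain Python. This is a conceptual illustration only, not the Cosmos API:

```python
# Conceptual sketch of synthetic scenario expansion via randomization.
# This is NOT the Cosmos API; it only illustrates turning a few seed
# scenarios into a much larger, varied training set.
import random

def expand_scenarios(seeds, variants_per_seed, rng):
    """Produce randomized variants of each seed scenario."""
    synthetic = []
    for seed in seeds:
        for _ in range(variants_per_seed):
            synthetic.append({
                "task": seed["task"],
                # Randomize conditions a real robot would encounter.
                "lighting": rng.uniform(0.2, 1.0),    # relative brightness
                "friction": rng.uniform(0.3, 0.9),    # surface friction
                "object_offset_cm": rng.uniform(-5.0, 5.0),
            })
    return synthetic

rng = random.Random(42)  # fixed seed for reproducibility
seeds = [{"task": "pick_and_place"}, {"task": "open_drawer"}]
dataset = expand_scenarios(seeds, variants_per_seed=50, rng=rng)
# Two seed scenarios become 100 varied training examples.
```

In real pipelines the randomized parameters feed a physics simulator rather than a dict, but the payoff is the same: broad coverage of conditions without collecting data on physical hardware.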
Partners are applying these models across industries. Salesforce combines physical AI with video analysis to reduce operational incidents, while LEM Surgical trains autonomous surgical arms. Meanwhile, applications in logistics, healthcare, and manufacturing demonstrate the technology’s versatility.
This announcement reflects a broader trend: AI is moving into real-world, embodied applications, and robots are becoming smarter, more adaptable, and more capable of assisting humans across industries.
NVIDIA’s open technology strategy aims to empower developers, accelerate AI-driven robotics adoption, and enable scalable, intelligent machines, while the physical AI model ecosystem encourages global collaboration and rapid innovation.
With robust computing platforms, open-source accessibility, and collaborative frameworks, NVIDIA and its partners are transforming robotics: physical AI models are evolving from research prototypes into practical, deployable systems, setting the stage for the next generation of intelligent robots.