Figure AI finally revealed on Thursday the "major breakthrough" that led the buzzy robotics startup to cut ties with one of its investors, OpenAI: a novel dual-system AI architecture that allows robots to interpret natural language commands and manipulate objects they’ve never seen before—without needing specific pre-training or programming for each one.
Unlike conventional robots that require extensive programming or demonstrations for each new task, Helix combines a high-level reasoning system with real-time motor control. Its two systems effectively bridge the gap between semantic understanding (knowing what objects are) and action or motor control (knowing how to manipulate those objects).
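That two-speed split can be sketched in code. The following is a minimal, hypothetical illustration of the general pattern the article describes—a slow semantic planner feeding a fast control loop—not Figure's actual Helix implementation; every function and name here is invented for the example.

```python
# Hypothetical sketch of a dual-system robot control loop.
# "System 2" (slow_planner) reasons about language and vision at a low rate;
# "System 1" (fast_controller) issues motor commands every tick, reusing the
# most recent goal between planner updates. All names are illustrative.

def slow_planner(observation, instruction):
    """Slow semantic reasoning: map an instruction + observation to a goal."""
    # Placeholder: treat the last word of the command as the target object.
    return {"target": instruction.split()[-1], "seen_at": observation}

def fast_controller(goal, joint_state):
    """Fast reactive control: step the joints toward the current goal."""
    # Placeholder dynamics: nudge every joint by a fixed increment.
    return [j + 0.1 for j in joint_state]

def run(instruction, steps=20, planner_period=10):
    """Run the control loop; the planner fires once per planner_period ticks."""
    joint_state = [0.0, 0.0, 0.0]
    goal = None
    log = []
    for t in range(steps):
        if t % planner_period == 0:
            goal = slow_planner(observation=t, instruction=instruction)
        joint_state = fast_controller(goal, joint_state)
        log.append((t, goal["target"]))
    return joint_state, log
```

The point of the pattern is the rate mismatch: semantic understanding is expensive and runs rarely, while motor control is cheap and runs continuously against the latest cached goal.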
Figure says this approach will let robots become more capable over time without system updates or training on new data. To demonstrate how it works, the company released a video showing two Figure robots working together to put away groceries, with one robot handing items to another that places them in drawers and refrigerators.
Figure claimed that neither robot had prior knowledge of the items it was handling, yet both were able to identify which ones belonged in the refrigerator and which should be stored dry.
"Helix can generalize to any household item," Adcock tweeted. "Like a human, Helix understands speech, reasons through problems...