
In late 2025, Figure AI placed its third-generation humanoid robot into real homes for alpha testing. The Figure 03 has hands with 16 degrees of freedom, tactile sensors that detect forces as small as three grams, and foam-padded limbs designed for safe operation around people. It charges wirelessly and responds to natural language commands. It can learn in real time, adapting to its environment.
A few months earlier, Unitree started shipping the R1, a home-capable robot priced at $4,900. TIME included it among the Best Inventions of 2025. You can order one today and have it in weeks: it’s the most commercially accessible humanoid robot on the planet.
These robots have graduated from prototyping. They’re consumer products with price tags, shipping dates, and marketing campaigns. It’s easy to imagine a world in which every family relies on one or several robots to conduct daily life, especially as AI becomes more capable. But what rules govern a learning, physically capable, always-on AI device operating inside someone’s home?
Unfortunately, we’re far from a coherent answer. Existing US regulations were developed with Roombas and robot arms in mind, not autonomous humanoids, resulting in a confusing patchwork of obligations. That doesn’t mean the situation is hopeless, just that regulators must act quickly to establish reasonable standards for a generational technology. That work should start now: not after the first serious home-robot injury, not after a data breach exposes 3D maps of thousands of homes, and not after a liability lawsuit reveals that no one can legally be held responsible.
To understand just how unprepared we are to govern next-generation robots, consider the trajectory that robotics development and adoption is taking.
Robotics is a rapidly expanding market. Goldman Sachs has revised its projection of the total addressable market for humanoid robots to $38 billion by 2035, a sixfold increase over its prior estimate. The companies driving this market are well-capitalized and have real products. Figure AI has raised over $1.75 billion at a $39 billion valuation. Boston Dynamics and Agility Robotics are already piloting commercial deployments at automotive and logistics facilities.

Robotics foundation models are improving rapidly. The technical capability curve matters as much as the business investment. Foundation models such as Vision-Language-Action (VLA) models now enable robots to respond to vague instructions, generalize across tasks, and make autonomous decisions in situations their programmers never explicitly anticipated. Nvidia’s GR00T N models and Figure’s Helix represent a new generation of robot intelligence that improves with frequent updates.
Household integration is on the horizon. By 2030, the realistic picture looks something like this: tens of thousands of humanoid robots operating in factories and warehouses, with the first wave of consumer-grade units entering homes. They’ll fold laundry, load dishwashers, assist elderly family members, and navigate living rooms alongside children and pets. They’ll be connected to the internet, continuously collecting visual, audio, and spatial data. And they’ll be governed by a regulatory patchwork that was never designed for them.
The ceiling of what the technology will ultimately achieve is debatable. What matters is that meaningful numbers of these devices will enter homes within the next few years, and the regulatory framework needed to govern them is further behind than consumers or their policymakers realize.
Consider the challenge facing a compliance officer at a company that’s preparing to bring an AI-enabled home robot to the US market. The robot has arms, hands, cameras, and microphones, and it runs a foundation model. What does the regulatory landscape look like for such a product?
Unfortunately, it looks fragmented, with numerous overlaps and gaps.
Outdated consumer safety frameworks. The compliance officer’s first stop is the Consumer Product Safety Commission (CPSC), the primary federal agency with authority over consumer product safety. Its frameworks are intended for products that remain substantially unchanged after leaving the factory. Yet the CPSC’s own reporting acknowledges that AI-embedded products learn from consumers after purchase, making traditional premarket testing insufficient and post-purchase hazards unpredictable. CPSC professionals have been aware of this problem for years. The agency hosts forums on AI-enabled products and has signaled a pivot toward AI-driven hazard detection, but it has not finalized any mandatory standard for AI-enabled consumer robots.
Existing privacy laws are ill-suited for robots. Now consider the added privacy concerns. An always-on robot with cameras, microphones, and navigation sensors in a private residence creates a data-collection profile unlike anything current privacy laws were designed to address. California’s Consumer Privacy Act (CCPA) and Privacy Rights Act (CPRA) trigger extensive obligations around visual, audio, and biometric data. Illinois’s Biometric Information Privacy Act (BIPA) requires informed written consent before collecting biometric identifiers, with statutory damages of $1,000 to $5,000 per violation. However, BIPA was designed to govern apps and kiosks, not autonomous machines in private homes.
Similarly, the Federal Trade Commission’s (FTC) Children’s Online Privacy Protection Rule requires verifiable parental consent for passive data collection from children under 13. This creates a compliance challenge that borders on the absurd for an always-on device that can’t reliably distinguish between an adult’s face and a child’s face in every moment of operation.

Fragmented state AI regulations. A company selling nationwide must navigate 15 (and counting) different states’ privacy frameworks, none of which were designed for continuous visual and audio monitoring, spatial mapping that creates detailed 3D models of homes, behavioral pattern analysis, or consent challenges when guests enter the home.
For example, Colorado’s AI Act, effective June 30, 2026, requires developers of high-risk AI systems to implement risk-management programs and conduct impact assessments. California’s Transparency in Frontier AI Act became effective January 1, 2026. Texas enacted its Responsible Artificial Intelligence Governance Act (TRAIGA) in June 2025. Each has different requirements, definitions, and enforcement mechanisms. Meanwhile, the federal government sends mixed signals: President Trump’s Executive Order 14365 established a Justice Department task force to challenge state AI laws, but an attempted 10-year moratorium on state AI laws was defeated in the Senate.
The industry suffers from unclear product liability. Perhaps the most consequential gap is in product liability. When a home robot causes an injury, who’s liable? The hardware manufacturer? The foundation model developer? The system integrator? The robot’s owner, who gave a vague command? Existing product-liability doctrines assume clear human oversight and products that behave predictably. AI systems with varying degrees of autonomy break both assumptions. Some legal scholars have proposed tiered approaches, but these depend on mature voluntary safety frameworks, which don’t yet exist for embodied AI.
Incomplete technical standards. The standards landscape reflects the same pattern of partial coverage. ISO 13482, though updated in 2025, addresses personal-care robots and was not written for modern AI capabilities like foundation models. ISO 25785-1, the first international safety standard for bipedal robots, covers only industrial workplace use. IEC 62443 applies to networked industrial robots from a cybersecurity perspective but wasn’t designed for always-on consumer devices in private spaces.
Overlapping jurisdictions complicate matters. No single agency has comprehensive authority. And no single framework addresses the full range of risks, including physical safety, data privacy, AI decision-making, and cybersecurity. The gaps between these overlapping regulatory spaces are precisely where the real risks live. The compliance burden reflects the genuine complexity of a product that is simultaneously a consumer device, a physically capable machine, an autonomous AI system, and an always-on data collector.
Proven regulatory templates do exist. The situation is far from hopeless: the US already has an established playbook for regulating emerging technologies that can inform effective robotics regulation. The National Institute of Standards and Technology’s (NIST) AI Risk Management Framework is the most globally influential risk-management framework for AI. The Food and Drug Administration’s (FDA) Software as a Medical Device guidelines demonstrate that life-cycle-based AI regulation for physical products is feasible, while its Total Product Life Cycle approach recognizes that AI software evolves over time, with predetermined change control plans that allow manufacturers to get preapproval for categories of changes. The Federal Aviation Administration’s (FAA) graduated, risk-based approach to drone regulation provides a successful template for governing autonomous physical systems. The US can apply what it already does well to the specific challenge of embodied AI.
So what would coherent US governance for embodied AI actually look like? What follows is not a finished proposal, but it sketches the foundation that any serious approach would need.
Risk-tiering by physical capability and autonomy level. Not every robot presents the same risk, and each tier should carry proportionate requirements. Reasonable tiers might range from stationary devices with limited force output, to mobile robots operating under direct human supervision, to fully autonomous mobile manipulators working around children, pets, and other vulnerable household members.
Purpose-built data governance. Privacy laws like California’s CCPA/CPRA and Illinois’s BIPA were designed for websites and apps, not physical devices operating in shared spaces. Their core principles (like user consent, data minimization, and the right to deletion) are sound, but need to be adapted for robots that move through homes and workplaces, collect data from multiple people simultaneously, and operate continuously.
Tiered liability. Compliance with established safety frameworks should be rewarded with a negligence standard for liability, while noncompliance should trigger strict liability. That creates the right incentive structure, but it works only if we define what “compliance” means for each risk tier.
Incident reporting. The US has proven templates for effective incident reporting. The Occupational Safety and Health Administration (OSHA) requires work-related fatalities to be reported within 8 hours; the Cyber Incident Reporting for Critical Infrastructure Act, administered by the Cybersecurity and Infrastructure Security Agency (CISA), requires that significant cyber incidents be reported within 72 hours; the FDA’s MedWatch system captures device malfunctions. An embodied AI reporting framework would need to cover physical injuries, near misses, privacy violations, unexpected autonomous behaviors, and cybersecurity incidents. But under current law, it’s unclear which agency should receive, analyze, and act on such reports.
Premarket assessment. The FDA’s Total Product Life Cycle approach, with its predetermined change control plans for learning software, offers a directly transferable model. Robots with preapproved change control plans for AI model updates would address the fundamental challenge that AI-enabled products evolve after sale.
Coordination. CPSC, FTC, NIST, OSHA, state attorneys general, and state privacy regulators all have partial jurisdiction; none has comprehensive authority. Whether through designating a lead agency or creating something like a “National Robotics Safety Board,” there needs to be a single point of accountability.
Proactive standards development. The FAA’s experience with drones is instructive. FAA Part 107 created workable rules that enabled the commercial drone industry to grow while maintaining safety. Proactive engagement now could compress the typical five-to-seven-year regulatory stabilization timeline and avoid the reactive, incident-driven pattern that has characterized previous rounds of safety regulation and left consumers exposed in the meantime.
None of this is easy. A risk-tiering approach requires drawing lines that will feel arbitrary at the margins. Unified data-governance standards could slow time to market. Premarket assessments add costs that could disadvantage US companies against international competitors facing less regulatory pressure. The current administration’s deregulatory posture makes new federal frameworks politically unlikely in the near term.
But the products are coming, regardless of whether the governance is ready. The question isn’t whether embodied AI needs governance; it’s whether we build it proactively or reactively, after the incidents that make national headlines.
Many building blocks for meaningful regulation already exist. What’s missing is their integration into a coherent framework purpose-built for products that are simultaneously consumer devices, physically capable machines, autonomous AI decision-makers, and always-on data collectors in private spaces. That work should begin now, while there is still time to get the regulatory architecture right.
