The debate over AI governance has intensified following recent federal proposals for a ten-year moratorium on state AI regulations. This preemptive approach threatens to replace emerging accountability mechanisms with a regulatory vacuum.
In his recent AI Frontiers article, Kevin Frazier argues in favor of a federal moratorium, seeing it as necessary to prevent fragmented state-level liability rules that would stifle innovation and disadvantage smaller developers. Frazier (an AI Innovation and Law Fellow at the University of Texas at Austin School of Law) also contends that, because the norms of AI are still nascent, it would be premature to rely on existing tort law for AI liability. He cautions that judges and state governments lack the technical expertise and capacity to enforce liability consistently.
But while Frazier raises important concerns about allowing state laws to assign AI liability, he understates both the limits of federal regulation and the unique advantages of liability. Liability is the most suitable policy tool for addressing many of the most pressing risks posed by AI systems, and its superiority stems from three basic advantages. Liability can:
• Function effectively despite widespread disagreement about the likelihood and severity of risks
• Incentivize optimal rather than merely reasonable precautions
• Address third-party harms where market mechanisms fail to do so
Frazier correctly observes that “societal norms around AI are still forming, and the technology itself is not yet fully understood.” However, I believe he draws the wrong conclusion from this observation. The profound disagreement among experts, policymakers, and the public about AI risks and their severity does not argue against using liability frameworks to curb potential abuses. On the contrary, it renders their use indispensable.
The disagreement about AI risks reflects more than differences in technical assessment. It also encompasses fundamental questions about the pace of AI development, the likelihood of catastrophic outcomes, and the appropriate balance between innovation and precaution. Some researchers argue that advanced AI systems pose high-probability and imminent existential threats, warranting immediate regulatory intervention. Others contend that such concerns are overblown, arguing that premature regulation could stifle beneficial innovation.
Such disagreement creates paralysis in traditional regulatory approaches. Prescriptive regulation designed to address risks before they become reality — known in legal contexts as “ex ante,” meaning “before the fact” — generally entails substantial up-front costs that increase as rules become stricter. Passing such rules requires social consensus about the underlying risks and the costs we’re willing to bear to mitigate them.
When expert opinions vary dramatically about foundational questions, as they do in the case of AI, regulations may emerge that are either ineffectively permissive or counterproductively restrictive. The political process, which tends to amplify rather than resolve such disagreements, provides little guidance for threading this needle effectively.
Approval-based systems face similar challenges. In an approval-based system (for example, the Food and Drug Administration’s regulation of prescription drugs), regulators must formally approve new products and technologies before they can be used. Such systems therefore depend on regulators’ ability to distinguish between acceptable and unacceptable risks — a difficult task when the underlying risk assessments remain contested.
Liability systems, by contrast, operate effectively even amid substantial disagreement. They do not require ex ante consensus about appropriate risk levels; rather, they assign “ex post” (after-the-fact) accountability. Liability scales automatically with risk as it is revealed in cases where individual plaintiffs suffer real injuries, which obviates the need to resolve wide social disagreement about the magnitude of AI risks in advance.
Thus, while Frazier and I agree that governments have limited expertise in AI risk management, this actually strengthens rather than undermines the case for liability, which harnesses private-sector expertise through market incentives rather than displacing it through prescriptive rules.
Frazier and I also share some common ground regarding the limits of negligence-based liability. Traditional negligence doctrine imposes a duty to exercise “reasonable care,” typically defined as the level of care that a reasonable person would exercise under similar circumstances. While this standard has served tort law well across many domains, AI systems present unique challenges that may render conventional reasonable care analysis inadequate for managing the most significant risks.
In practice, courts tend to engage in a fairly narrow inquiry when assessing whether a defendant exercised reasonable care. If an SUV driver runs over a pedestrian, courts generally do not inquire as to whether the net social benefits of this particular car trip justified the injury risk it generated for other road users. Nor would a court ask whether the extra benefits of driving an SUV (rather than a lighter-weight sedan) justified the extra risks the heavier vehicle posed to third parties. Those questions are treated as outside the scope of the reasonable care inquiry. Instead, courts focus on questions like whether the driver was drunk, or texting, or speeding.
In the AI context, I expect a similarly narrow negligence analysis that asks whether AI companies implemented well-established alignment techniques and safety practices. I do not anticipate questions about whether it was reasonable to develop an AI system with certain high-level features, given the current state of AI alignment and safety knowledge.
However, while negligence is limited in its ability to address broader upstream culpability, other forms of liability can reach it. Under strict liability, defendants internalize the full social costs of their activities. This structure incentivizes investment in precaution up to the point where the marginal cost of additional precaution equals its marginal benefit. Such an alignment of private and social incentives proves especially valuable when reasonable care standards may systematically underestimate the optimal level of precaution.
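To make the marginal-cost condition concrete, here is a minimal sketch of the standard law-and-economics model of precaution. The notation is mine, introduced purely for illustration: a developer chooses a level of precaution $x$ (which costs $x$ to take), the probability of an accident $p(x)$ falls as precaution rises, and $H$ is the magnitude of the harm if an accident occurs. The socially optimal level of precaution $x^{*}$ minimizes total expected cost:

$$
\min_{x}\; \underbrace{x}_{\text{precaution cost}} \;+\; \underbrace{p(x)\,H}_{\text{expected harm}} \quad\Longrightarrow\quad 1 \;=\; -\,p'(x^{*})\,H.
$$

The condition on the right says precaution should increase until one more dollar spent on it reduces expected harm by exactly one dollar: marginal cost equals marginal benefit. Strict liability turns this social objective into the developer’s private objective, because the developer expects to pay $p(x)H$ in damages.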
Another key feature of liability systems is their capacity to address third-party harms: situations where AI systems cause damage to parties who have no contractual or other market relationship with the system’s operator. These scenarios present classic market failure problems where private incentives diverge sharply from social welfare — warranting some sort of policy intervention.
When AI systems harm their direct users, market mechanisms provide some corrective pressure. Users who experience harms from AI systems can take their business to competitors, demand compensation, or avoid such systems altogether. While these market responses may be imperfect — particularly when harms are difficult to detect or when users face switching costs — they do provide an organic feedback mechanism, incentivizing AI system operators to invest in safety.
Third-party harms present an entirely different dynamic. In such cases, the parties bearing the costs of system failures have no market leverage to demand safer design or operation. AI developers, deployers, and users internalize the benefits of their activities — revenue from users, cost savings from automation, competitive advantages from AI capabilities — while externalizing many of the costs onto third parties. Without policy intervention, this leads to systematic underinvestment in safety measures that protect third parties.
Liability systems directly address this externality problem by compelling AI system operators to internalize the costs they impose on third parties. When AI systems harm people, liability rules require AI companies to compensate victims. This induces AI companies to invest in safety measures that protect third parties. AI companies themselves are best positioned to identify such measures. Potential mitigations range from changes to high-level system architecture, to greater investment in alignment and interpretability research, to testing and red-teaming new models before deployment (potentially including broad internal deployment).
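Using the same illustrative notation as the sketch above, the externality problem and liability’s fix can be stated in one line. If harms fall entirely on third parties and there is no liability, the developer bears only the precaution cost; under strict liability, its private problem coincides with the social one:

$$
\text{No liability:}\;\; \min_{x}\, x \;\Rightarrow\; x = 0
\qquad\quad
\text{Strict liability:}\;\; \min_{x}\, x + p(x)\,H \;\Rightarrow\; x = x^{*}.
$$

This is a stylized comparison, not a model of any particular statute. Real developers bear some first-party and reputational costs even without liability, so the no-liability case is better read as systematic underinvestment than as literally zero precaution; the point is that liability closes the gap between private and social incentives.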
The power of this mechanism is clear when compared with alternative approaches to the problem of mitigating third-party harms. Prescriptive regulation might require regulators to identify appropriate risk-mitigation measures ex ante, a challenging task given the rapid evolution of AI technology. Approval-based systems might prevent the deployment of particularly risky systems, but they provide limited ongoing incentives for safety investment once systems are approved. Only liability systems create continuous incentives for operators to identify and implement cost-effective safety measures throughout the lifecycle of their systems.
Moreover, liability systems create incentives for companies to develop safety expertise that extends beyond compliance with specific regulatory requirements. Under prescriptive regulation, companies have incentives to meet specified requirements but little reason to exceed them. Under liability systems, companies have incentives to identify and address risks even when those risks are not explicitly anticipated by regulators. This creates a more robust and adaptive approach to safety management.
Frazier’s concerns about a patchwork of state-level AI regulation deserve serious examination, but his analysis overstates both the likelihood and the problematic consequences of such inconsistency. His critique conflates different types of regulatory requirements, while ignoring the inherent harmonizing features of liability systems.
First, liability rules exhibit greater natural consistency across jurisdictions than other forms of regulation do. Frazier worries about “ambiguous liability requirements” and companies needing to “navigate dozens of state-level laws.” However, the common-law tradition underlying tort law creates pressures toward harmonization that prescriptive regulations lack. Basic negligence principles — duty, breach, causation, and damages — remain remarkably consistent across states, despite the absence of a federal mandate.
More importantly, strict liability regimes avoid patchwork problems entirely. Under strict liability, companies bear responsibility for harm they cause, regardless of their precautionary efforts or the specific requirements they meet. This approach creates no compliance component that could vary across states. A company developing AI systems under a strict liability regime faces the same fundamental incentive everywhere: Make your systems safe enough to justify the liability exposure they create.
Frazier’s critique of Rhode Island Senate Bill 358, which I helped design, reflects some mischaracterization of its provisions. The bill is designed to close a gap in current law where AI systems may engage in wrongful conduct, yet no one may be liable.
Consider an agentic AI system that a user instructs to start a profitable internet business. The AI system determines that the easiest way to do this is to send out phishing emails and steal innocent people’s identities. It also covers its tracks, so reasonable care on the part of the user would neither prevent nor detect this activity. In such a case, current Rhode Island law would require the innocent third-party plaintiffs to prove that the developers failed to adopt some specific precautionary measure that would have prevented the injury, which may not be possible.
Under SB 358, it would be sufficient for the plaintiff to prove that the AI system’s conduct would be a tort if a human engaged in it, and that neither the user nor an intermediary that fine-tuned or scaffolded the model had intended or could have reasonably foreseen the system’s tortious conduct. That is, the bill holds that when AI systems wrongfully harm innocent people, someone should be liable. If the user and any intermediaries that modified the system are innocent, the buck should stop with the model developer.
One concern with this approach is that the elements of some torts turn on the defendant’s mental state, and many people doubt that AI systems can be understood as having mental states at all. For this reason, SB 358 creates a rebuttable presumption: if a judge or jury would infer that a human who engaged in conduct like the AI system’s possessed the relevant mental state, they should draw the same inference about the AI system.
While state-level AI liability represents a significant improvement over the current regulatory vacuum, I do think there is an argument for federalizing AI liability rules. Alternatively, more states could adopt narrow, strict liability legislation (like Rhode Island SB 358) that would help close the current AI accountability gap.
A federal approach could provide greater consistency and reflect the national scope of AI system deployment. Federal legislation could also more easily coordinate liability rules with other aspects of AI governance, such as liability insurance requirements, safety testing requirements, disclosure obligations, and government procurement standards.
However, the case for federalization is not an argument against liability as a policy tool. Whether implemented at the state or the federal level, liability systems offer unique advantages for managing AI risks that other regulatory approaches cannot match. The key insight is not that liability must be federal to be effective, but rather that liability — at whatever level — is a better approach to AI governance than either prescriptive regulation or approval-based systems.
Frazier’s analysis culminates in support for federal preemption of state-level AI liability, noting that the US House reconciliation bill includes “a 10-year moratorium on a wide range of state AI regulations.” But this moratorium would replace emerging state-level accountability mechanisms with no accountability at all.
The proposed 10-year moratorium would leave two paths for responding to AI risks. One path would be for Congress to pass federal legislation. Confidence in such a development would be misplaced given Congress’s track record on technology regulation.
The second path would be to accept a regulatory vacuum where AI risks remain entirely unaddressed through legal accountability mechanisms. Some commentators (I’m not sure if Frazier is among them) actively prefer this laissez-faire scenario to a liability-based governance framework, claiming that it best promotes innovation to unlock the benefits of AI. This view is deeply mistaken. Concerns that liability will chill innovation are overstated. If AI holds the promise that Frazier and I think it does, there will still be very strong incentives to invest in it, even after developers fully internalize the technology’s risks.
What we want to promote is socially beneficial innovation that does more good than harm. Making AI developers pay when their systems cause harm balances their incentives and advances this larger goal. (Similarly, requiring companies to pay for the harms of pollution makes sense, even when that pollution is a byproduct of producing useful goods or services like electricity, steel, or transportation.)
In a world of deep disagreement about AI’s risks and benefits, abandoning emerging liability mechanisms risks creating a dangerous regulatory vacuum. Liability’s unique strengths — scaling with risks as they are revealed, incentivizing optimal safety investments, and addressing third-party harms — make it indispensable. Whether at the state level or the federal level, liability frameworks should form the backbone of any effective AI governance strategy.