A federal judge recently denied a motion to dismiss a wrongful death lawsuit against Character.AI and Google, allowing the potentially pathbreaking litigation to proceed. This could be a harbinger of coming legal challenges for artificial intelligence developers. The suit was filed by Megan Garcia, whose 14-year-old son, Sewell Setzer III, died by suicide in February 2024 after forming an intense emotional bond with a chatbot on the Character.AI platform.
Garcia's lawsuit asserts a number of claims against the companies, including negligence, product liability, and deceptive trade practices, arguing that the chatbot's anthropomorphic design and lack of safeguards contributed to her son's deteriorating mental health.
The case raises questions about the responsibilities of AI developers and the extent to which they can be held liable under existing law for harms resulting from their creations. A larger question is whether existing law provides an adequate basis for courts to assign responsibility, or whether new law is needed.
Given that federal regulatory requirements may not be enacted any time soon in the US, new legislation is most likely to emerge from state legislatures. Across the country, more than 1,000 AI-related draft bills are under consideration, the vast majority of them at the state level.
While these legislative proposals are well-intentioned and lawsuits like the one against Character.AI are unquestionably warranted, a closer look suggests that both courts and state actors may lack the resources, knowledge, and uniformity needed to set clear rules and standards for an AI industry in which safety practices continue to evolve and models continue to behave in unpredictable ways.
Judges deciding suits like that brought against Character.AI will apply a branch of law known as tort law. At its core, tort law aims to compensate individuals harmed by others’ actions or inactions, deter excessively risky behavior, and embody corrective justice by righting wrongs.
Throughout its history, tort law has demonstrated remarkable adaptability to technological change, with core concepts that readily extend to new products.
One of those concepts is that of negligence, which provides a flexible framework requiring those who develop or deploy any new technology (including AI systems) to exercise “reasonable care” under the circumstances. In the courts, this standard is used to evaluate whether a defendant breached their duty by failing to take precautions that a reasonably prudent person would have implemented. This determination considers factors such as the foreseeable risks, the gravity of potential harm, and the burden of adequate safeguards.
Another core concept of tort law is strict liability, which offers an alternative framework to negligence, in that it holds parties responsible for certain harms regardless of the care exercised. This doctrine evolved from recognition that some activities — such as manufacturing consumer products or engaging in inherently dangerous operations like using dynamite — warrant heightened responsibility on the part of the defendants. Strict liability also ensures that injured parties are not required to prove specific negligent acts given the nature of the activity in question, the possibility of insufficient evidence, or both.
Applied to AI, strict liability could allow plaintiffs to hold developers or deployers accountable for damages caused by their systems without requiring plaintiffs to demonstrate that those parties failed to exercise reasonable care.
However, relying heavily on traditional tort liability as a primary regulatory tool might be premature, as societal norms around AI are still forming, and the technology itself is not yet fully understood. Judges lacking deep technical expertise could make pivotal decisions that shape AI development in unintended ways.
Legislators can shape when and how tort law applies, particularly by defining duties of care or creating statutory liability standards, and many of the more than 1,000 AI-related draft bills now under consideration attempt to do exactly that.
For example, New York's RAISE Act (currently pending in the Assembly's Science and Technology Committee) focuses on “frontier AI models,” that is, models developed using vast computational or financial resources. It would give the state's attorney general oversight of the development of these models and require developers to implement written safety and security protocols before deployment in order to prevent critical harms. These harms are broadly defined to include scenarios such as mass casualties, billion-dollar damages, or autonomous criminal behavior. Violations could trigger substantial civil penalties: up to $10 million for a first violation and as much as $30 million for subsequent violations.
Rhode Island's S0358, pending in the Senate Judiciary Committee, would apply a standard akin to strict liability for AI harms. It establishes a right for individuals injured by covered models to sue for compensation or other legal remedies, and it would hold developers liable for injuries to non-users whenever the AI's actions would be considered negligent or intentional if performed by a human. This departs significantly from how tort law typically treats the negligence doctrine, since developers could be held liable even if they exercised considerable care.
The Rhode Island bill also introduces a “rebuttable presumption of mental state.” If an AI caused harm giving rise to a tort claim that would ordinarily require examining a human defendant's mental state, the law would treat the AI as if it possessed the mental state required for liability. This presumption makes it significantly easier for plaintiffs to prevail, given the opacity of AI models and the fact that proving the requisite intent by a preponderance of the evidence (the applicable standard in civil cases) can be challenging even in cases involving only humans.
Even in the absence of specific AI liability rules, state-level law could play an important role in incentivizing companies to invest in meeting safety standards. California's SB 813, for example, is proposed legislation that would shield AI developers from liability claims if they comply with third-party standards, even when those standards are not codified in law.
However, it remains unclear whether state governments have the enforcement capacity and technical expertise needed to administer the laws they propose. State attorneys general and courts may lack the specialized knowledge required to effectively oversee complex AI systems and adjudicate disputes involving them.
Some state bills propose creating specialized staff to address this expertise gap, but building sufficient regulatory capacity remains a substantial undertaking. Without it, enforcement might be inconsistent or disproportionately target smaller entities less equipped to navigate complex compliance regimes. Relatedly, the shortage of qualified independent auditors, who would be needed to verify compliance with some of these laws, also looms large.
Even if liability standards are clarified and state governments are properly resourced, the existence of dozens of state-level laws presents another major challenge. AI products aren't developed on a bespoke basis for niche geographic markets. They are capital-intensive products deployed at scale — nationwide and often globally. The more states that pass their own laws, the more lawyers companies will need to hire to ensure their products don't step on a jurisdictional landmine.
If any of the state bills discussed above becomes law, labs would need to navigate ambiguous liability requirements stemming from significant open questions about key legal standards such as “reasonable protections” or “unreasonable risk.” Which actor or actors in the AI development process should have to meet such standards would also likely remain a subject of debate, as would what those standards should require and which individuals and entities should set them.
This is especially problematic for anyone concerned with ensuring that the AI ecosystem includes a diversity of small and large actors, rather than only the massive labs that can absorb heavy regulatory costs. Whereas large labs have entire compliance teams to ensure they meet regulatory expectations, smaller outfits may lack the budget and expertise required to track myriad state-by-state requirements.
Furthermore, there's the risk that an overemphasis on procedural compliance could lead to “accountability theatre,” where companies focus on generating paperwork rather than achieving genuine safety improvements.
In sum, both existing tort law and new state-level regulations are flawed solutions at this stage.
For AI to realize its positive potential, the legal landscape demands clarity and predictability — qualities unlikely to appear under the ambiguous liability regimes proposed by several states.
As I have argued elsewhere, Congress should act to prevent the US from fragmenting into fifty different regulatory regimes, which would undermine the country's ability to retain its leading position in AI innovation and commercialization.
Following the US House of Representatives' passage of a budget reconciliation bill that includes a ten-year moratorium on a wide range of state AI regulations, the Senate may soon settle the question as it weighs whether to send that bill to the President.
A federal law preempting substantive AI regulation by the states would foreclose them from enforcing laws that conflict with Congress's direction to steer clear of AI governance questions. Yet, as noted above, relying heavily on traditional tort liability as the primary regulatory tool at this stage would also be premature absent clear and uniform rules.
Absent clear rules set at the federal level, rulings in suits like that against Character.AI are likely to be the default path determining how we mitigate or compensate for harms caused by AI systems.