Articles in this section explore whether, when, and how to implement regulation that harnesses AI's benefits while limiting its risks.
The Code gives frontier developers a powerful incentive to adopt measurably safer practices.
Six years of export restrictions have given the U.S. a commanding lead in key dimensions of the AI competition, but it’s uncertain whether the impact of these controls will persist.
Placing AI in a nuclear framework inflates expectations and distracts from practical, sector-specific governance.
Congress is weighing a measure that would nullify thousands of state AI rules and bar new ones — upending federalism and halting the experiments that drive smarter policy.
Designed to protect human creativity, copyright law is under pressure from generative AI. Some experts question whether it has a future.
Abandoning liability mechanisms risks creating a dangerous regulatory vacuum.
In the absence of federal legislation, the burden of managing AI risks has fallen to judges and state legislators — actors lacking the tools needed to ensure consistency, enforceability, or fairness.
U.S. lawmakers propose a new system to track where advanced AI chips end up.
Dynamism vs. stasis is a clearer lens for criticizing controversial AI safety prescriptions.
Corporate capture of AI research, echoing the days of Big Tobacco, thwarts sensible policymaking.
Continued sales of advanced AI chips allow China to deploy AI at massive scale.
Realizing AI’s full potential requires designing for opportunity, not just guarding against risk.
AI risk may have unique elements, but there is still much to learn from cybersecurity, enterprise, financial, and environmental risk management.
Autonomous systems are being rapidly deployed, but governance efforts are still in their infancy.