Invoking speculative risks to keep our most capable models behind paywalls could create a new form of digital feudalism.
open-source AI, frontier models, precautionary policy, digital feudalism, OpenAI, Meta, Llama, GPT-OSS, regulation, open development, AI risk, legislation, policy debate, Berkman Klein Center
Many cutting-edge AI systems are confined to private labs. This hidden frontier represents America’s greatest technological advantage — and a serious, overlooked vulnerability.
hidden frontier AI, internal AI models, AI security, model theft, sabotage, government oversight, transparency, self-improving AI, AI R&D automation, policy recommendations, national security, RAND security levels, frontier models, AI governance, competitive advantage
The moment AGI is widely released — whether by design or by breach — any guardrails would be as good as gone.
AGI, artificial general intelligence, open-source AI, guardrails, uncontrolled release, existential risk, humanity replacement, security threat, proliferation, autonomous systems, alignment, self-improving intelligence, policy, global race, tech companies
Mutual Assured AI Malfunction (MAIM) hinges on nations observing one another's progress toward superintelligence — but reliable observation is harder than MAIM's authors acknowledge.
MAIM, superintelligence deterrence, Mutual AI Malfunction, observability problem, US-China AI arms race, compute, chips, data centers, strategic sabotage, false positives, false negatives, AI monitoring, nuclear MAD analogue, superintelligence strategy, distributed R&D, espionage escalation, peace and security
With model performance converging, user data is the new advantage — and Big Tech is sealing it off.
open protocols, AI monopolies, Anthropic MCP, context data lock-in, big tech, APIs, interoperability, data portability, AI market competition, user context, model commoditization, policy regulation, open banking analogy, enshittification
The global AI order is still in flux. But when the US and China figure out their path, they may leave little room for others to define their own.
AI race, US-China competition, middle powers, export controls, AI strategy, militarization, economic dominance, compute supply, frontier models, securitization, AI policy, grand strategy, geopolitics, technology diffusion, national security
Across disciplines, erroneous AI predictions have a surprising tendency to make human experts perform worse.
AI, human performance, safety-critical settings, Joint Activity Testing, human-AI collaboration, AI predictions, aviation safety, healthcare alarms, nuclear power plant control, algorithmic risk, AI oversight, cognitive systems engineering, safety frameworks, nurses study, resilient performance
The Code gives frontier developers a powerful incentive to adopt measurably safer practices.
EU Code of Practice, AI Act, AI safety, frontier AI models, risk management, systemic risks, 10^25 FLOPs threshold, external evaluation, transparency requirements, regulatory compliance, general-purpose models, European Union AI regulation, safety frameworks, risk modeling, policy enforcement
Six years of export restrictions have given the US a commanding lead in key dimensions of the AI competition, but it remains uncertain whether their impact will persist.
chips, China, chip export controls, China semiconductors, hardware, AI hardware policy, US technology restrictions, SMIC, Huawei Ascend, Nvidia H20, AI infrastructure, high-end lithography tools, EUV ban, domestic chipmaking, AI model development, technology trade, computing hardware, US-China relations
Placing AI in a nuclear framework inflates expectations and distracts from practical, sector-specific governance.
Congress is weighing a measure that would nullify thousands of state AI rules and bar new ones — upending federalism and halting the experiments that drive smarter policy.
ai regulation, state laws, federalism, Congress, policy innovation, legislative measures, state vs federal, ai governance, legal framework, regulation moratorium, technology policy, experimental policy, state experimentation, federal oversight, ai policy development
Designed to protect human creativity, copyright law is under pressure from generative AI. Some experts question whether it has a future.
copyright, generative ai, ai, creativity, intellectual property, law, legal challenges, technology, digital rights, innovation, future of copyright, authorship, content creation, legal reform, copyright law, ai-generated content
A global race to build powerful AI is not inevitable. Here’s how technical solutions can help foster cooperation.
ai arms race, assurance technologies, ai cooperation, global ai development, technical solutions, ai safety, international collaboration, ethical ai, ai policy, ai governance, technology diplomacy, nuclear
ai, artificial intelligence, large language models, scientific discovery, digital intelligence, expert consensus, technology, innovation, society impact, machine learning, research, future of science, debate, ai capabilities, advancements in ai
Abandoning liability mechanisms risks creating a dangerous regulatory vacuum.
ai liability, regulatory vacuum, liability mechanisms, ai regulation, legal frameworks, technology accountability, risk management, artificial intelligence, governance, policy, ethical ai, tech industry, innovation, legal responsibility
Autonomous AI-enabled organizations are increasingly plausible. They would fundamentally break the way we regulate the economy.
autonomous organizations, ai-enabled organizations, self-managing organizations, economic regulation, artificial intelligence, future of work, organizational structure, automation, technology in business, decentralized management, ai in economics, innovation, business transformation, nuclear
For safety-critical domains like energy grids, "probably safe" isn't good enough. Fulfilling AI's potential in these areas will require robust mathematical guarantees of safety.
ai, energy grids, blackout prevention, safety-critical domains, mathematical guarantees, robust ai, infrastructure safety, power systems, risk management, smart grids, technology in energy, ai safety, nuclear
In the absence of federal legislation, the burden of managing AI risks has fallen to judges and state legislators, actors who lack the tools to ensure consistency, enforceability, or fairness.
ai liability, federal legislation, ai risks, judges, state legislators, legal challenges, consistency, enforceability, fairness, regulation, technology policy, artificial intelligence, legal framework, risk management, governance, state laws, judicial responsibility
AI is increasingly being used for emotional support — but research from OpenAI and MIT raises concerns that it may leave some users feeling even worse.
ai companions, emotional support, OpenAI, MIT, mental health, technology, future of ai, ethical concerns, user experience, psychological impact, artificial intelligence, digital companionship, ai ethics, emotional well-being, human-ai interaction
An unchecked autonomous arms race is eroding rules that distinguish civilians from combatants.
ai, autonomous weapons, arms race, warfare norms, civilian protection, military ethics, combatants, war technology, international law, defense policy, unmanned systems, ethical concerns, artificial intelligence, conflict dynamics, security challenges, nuclear
Classic arguments about AI risk imagined AIs pursuing arbitrary and hard-to-comprehend goals. Large Language Models aren't like that, but they pose risks of their own.
ai risk, paperclip maximizer, large language models, ai goals, ai safety, ai ethics, ai threats, ai behavior, ai development, technology risks, artificial intelligence, machine learning, ai impacts, existential risk, ai governance
US lawmakers propose a new system to verify where AI chips end up.
ai chip smuggling, location verification, US lawmakers, chip tracking, technology regulation, semiconductor industry, export control, national security, supply chain monitoring, tech policy, chip distribution, international trade, compliance technology
Despite years of effort, mechanistic interpretability has failed to provide insight into AI behavior — the result of a flawed foundational assumption.
mechanistic interpretability, ai behavior, ai transparency, ai ethics, machine learning, flawed assumptions, ai research, ai analysis, ai limitations, ai insights
Dynamism vs. stasis is a clearer lens for criticizing controversial AI safety prescriptions.
ai safety, ai ethics, dynamism, stasis, artificial intelligence, technology criticism, safety prescriptions, ai development, risk assessment, innovation vs regulation, tech debate, ai policy, future of ai
Securing AI weights from foreign adversaries would require a level of security never seen before.
artificial general intelligence, AGI, ai security, cybersecurity, national security, US defense, intellectual property, technology theft, foreign adversaries, ai research, ai ethics, ai governance, data protection, tech policy, ai innovation, nuclear