Across disciplines, bad AI predictions have a surprising tendency to make human experts perform worse.
The Code provides a powerful incentive to push frontier developers toward measurably safer practices.
Six years of export restrictions have given the U.S. a commanding lead in key dimensions of the AI competition — but it’s uncertain whether their impact will persist.
Placing AI in a nuclear framework inflates expectations and distracts from practical, sector-specific governance.
Congress is weighing a measure that would nullify thousands of state AI rules and bar new ones — upending federalism and halting the experiments that drive smarter policy.
Designed to protect human creativity, copyright law is under pressure from generative AI. Some experts question whether it has a future.
A global race to build powerful AI is not inevitable. Here’s how technical solutions can help foster cooperation.
Abandoning liability mechanisms risks creating a dangerous regulatory vacuum.
Autonomous AI-enabled organizations are increasingly plausible. They would fundamentally break the way we regulate the economy.
For safety-critical domains like energy grids, "probably safe" isn't good enough. To fulfill the potential of AI in these areas, we need to develop more robust, mathematical guarantees of safety.
In the absence of federal legislation, the burden of managing AI risks has fallen to judges and state legislators — actors lacking the tools needed to ensure consistency, enforceability, or fairness.
AI is increasingly being used for emotional support — but research from OpenAI and MIT raises concerns that it may leave some users feeling even worse.
An unchecked autonomous arms race is eroding rules that distinguish civilians from combatants.
Classic arguments about AI risk imagined AIs pursuing arbitrary and hard-to-comprehend goals. Large language models aren't like that, but they pose risks of their own.
US lawmakers propose a new system to verify where AI chips end up.
Despite years of effort, mechanistic interpretability has failed to provide insight into AI behavior — the result of a flawed foundational assumption.
Dynamism vs. stasis offers a clearer lens for evaluating controversial AI safety prescriptions.
Securing AI weights from foreign adversaries would require a level of security never seen before.
AI Frontiers spoke with leading researchers and a CEO building AI agents to explore how AI will reshape work—and whether the jobs of the future are ones we’ll actually want.
President Trump vowed to be a peacemaker. Striking an “AI deal” with China could define global security and his legacy.
New research shows frontier models outperform human scientists at troubleshooting virology procedures — lowering barriers to the development of biological weapons.
Corporate capture of AI research—echoing the days of Big Tobacco—thwarts sensible policymaking.
AI is poised to leave many of us unemployed. We need to rethink social welfare.
Continued sales of advanced AI chips allow China to deploy AI at massive scale.