The Hidden AI Frontier

Many cutting-edge AI systems are confined to private labs. This hidden frontier represents America’s greatest technological advantage — and a serious, overlooked vulnerability.

Most Recent

Uncontained AGI Would Replace Humanity

The moment AGI is widely released — whether by design or by breach — any guardrails would be as good as gone.

Aug 18, 2025

Superintelligence Deterrence Has an Observability Problem

Mutual Assured AI Malfunction (MAIM) hinges on nations observing one another's progress toward superintelligence — but reliable observation is harder than MAIM's authors acknowledge.

Aug 14, 2025

Open Protocols Can Prevent AI Monopolies

With model performance converging, user data is the new advantage — and Big Tech is sealing it off.

In the Race for AI Supremacy, Can Countries Stay Neutral?

The global AI order is still in flux. But when the US and China figure out their path, they may leave little room for others to define their own.

Jul 23, 2025

How AI Can Degrade Human Performance in High-Stakes Settings

Across disciplines, bad AI predictions have a surprising tendency to make human experts perform worse.

How the EU's Code of Practice Advances AI Safety

The Code provides a powerful incentive to push frontier developers toward measurably safer practices.

Jul 12, 2025

How US Export Controls Have (and Haven't) Curbed Chinese AI

Six years of export restrictions have given the US a commanding lead in key dimensions of the AI competition, but it's uncertain whether that impact will persist.

Jul 8, 2025

Nuclear Non-Proliferation Is the Wrong Framework for AI Governance

Placing AI in a nuclear framework inflates expectations and distracts from practical, sector-specific governance.

Subscribe to AI Frontiers

Stay informed on the future of AI alongside 25,000+ other subscribers

Recent News in AI
911 centers are so understaffed, they're turning to AI to answer calls
TechCrunch
Aug 27

TLDR

AI eases 911 call center overload
  • AI triages non-emergency calls: Offloads routine issues from human dispatchers.
  • Addresses severe understaffing: Helps centers cope with high turnover rates.
  • Live deployment in multiple cities: Already handling thousands of real calls daily.
  • Not replacing, but supplementing staff: Fills roles centers can’t hire for.
OpenAI co-founder calls for AI labs to safety-test rival models
TechCrunch
Aug 27

TLDR

AI labs urged to collaborate on safety
  • Joint Safety Testing: OpenAI and Anthropic shared models for cross-lab safety checks.
  • Hallucination vs. Refusal: Anthropic models refuse more; OpenAI models hallucinate more.
  • Sycophancy Risks: Both labs’ models sometimes reinforce harmful user behavior.
  • Call for Industry Standards: Leaders advocate more collaboration despite fierce competition.
OpenAI, Anthropic Team Up for Research on Hallucinations, Jailbreaking
Bloomberg
Aug 27

TLDR

AI rivals collaborate on safety evaluations
  • Cross-company Model Testing: OpenAI and Anthropic evaluated each other's models.
  • Focus on Hallucinations: Tested for AI making up false information.
  • Misalignment Detection: Checked if models act against intended goals.
  • Transparency in AI Safety: Publicly shared findings to improve industry standards.
Anthropic's auto-clicking AI Chrome extension raises browser-hijacking concerns
Ars Technica
Aug 27

TLDR

AI browser extensions pose security risks
  • High attack success rate: 23.6% of attacks succeeded without mitigations
  • Mitigations reduce but don't eliminate risk: 11.2% of attacks still succeed
  • Prompt injection remains unsolved: Experts call current risks "catastrophic"
  • User burden for security: Users must manage complex, risky permissions
