China and the US Are Running Different AI Races

Shaped by a different economic environment, China’s AI startups are optimizing for different customers than their US counterparts — and seeing faster industrial adoption.

Feb 12, 2026
Most Recent

High-Bandwidth Memory: The Critical Gaps in US Export Controls

Modern memory architecture is vital for advanced AI systems. While the US leads in both production and innovation, significant gaps in export policy are helping China catch up.

Making Extreme AI Risk Tradeable

Traditional insurance can’t handle the extreme risks of frontier AI. Catastrophe bonds can cover the gap and compel labs to adopt tougher safety standards.

Exporting Advanced Chips Is Good for Nvidia, Not the US

The White House is betting that hardware sales will buy software loyalty — a strategy borrowed from 5G that misunderstands how AI actually works.

Dec 15, 2025

AI Could Undermine Emerging Economies

AI automation threatens to erode the “development ladder,” a foundational economic pathway that has lifted hundreds of millions out of poverty.

Dec 11, 2025

The Evidence for AI Consciousness, Today

A growing body of evidence means it’s no longer tenable to dismiss the possibility that frontier AIs are conscious.

Dec 8, 2025

AI Alignment Cannot Be Top-Down

Community Notes offers a better model — where citizens, not corporations, decide what “aligned” means.

Nov 3, 2025

AGI’s Last Bottlenecks

A new framework suggests we’re already halfway to AGI. The rest of the way will mostly require business-as-usual research and engineering.

AI Will Be Your Personal Political Proxy

By learning our views and engaging on our behalf, AI could make government more representative and responsive — but not if we allow it to erode our democratic instincts.

Subscribe to AI Frontiers

Stay informed on the future of AI alongside 30,000+ other subscribers.

Recent News in AI
Anthropic Dials Back AI Safety Commitments
The Wall Street Journal
Feb 25

TLDR

Anthropic relaxes AI safety standards
  • Policy Shift: Anthropic eases safety rules to stay competitive
  • Competitive Pressure: Rival AI labs push rapid model releases
  • Regulatory Gaps: Lack of federal AI regulation cited
  • Ongoing Commitment: Pledges regular safety reports, third-party audits
Pentagon Gives A.I. Company an Ultimatum
The New York Times
Feb 24

TLDR

Pentagon pressures Anthropic over AI use
  • Ultimatum Issued: Pentagon demands Anthropic comply or face legal action
  • AI Model Concerns: Anthropic seeks limits on surveillance, autonomous weapons
  • Military Reliance: Claude model seen as superior to competitors
  • Broader Implications: Raises questions on AI ethics, government leverage
US threatens Anthropic with deadline in dispute on AI safeguards
BBC
Feb 24

TLDR

Pentagon pressures Anthropic on AI use
  • Pentagon Deadline: Anthropic must allow broader military AI use.
  • Anthropic Red Lines: Refuses autonomous weapons, mass surveillance applications.
  • Legal Threats: Defense Production Act may force compliance.
  • AI Safety Stance: Anthropic emphasizes responsible, transparent AI deployment.
Who Can Break the AI Safety Deadlock?
Bloomberg Business
Feb 24

TLDR

AI safety governance stuck in deadlock
  • Empty Declarations: Global AI summits yield non-binding, ineffective statements.
  • Superpower Stalemate: US and China unlikely to lead on safety.
  • Middle Power Role: Coalition could demand real safety commitments.
  • Urgent Harms: AI risks already impacting society and individuals.
Algorithmic opacity in opioid risk scoring and the need for transparent AI regulation - npj Digital Medicine
Nature
Feb 24

TLDR

Opaque AI risks in opioid scoring
  • Algorithmic Opacity: Proprietary system lacks transparency and explainability
  • Poor Reproducibility: Independent tests show much lower accuracy than claimed
  • Socioeconomic Bias: Adding covariates did not improve model performance
  • Regulatory Need: Highlights urgent need for transparent AI oversight
