Why Racing to Artificial Superintelligence Would Undermine America’s National Security

Rather than rushing toward catastrophe, the US and China should recognize their shared interest in avoiding an ASI race.

Guest Commentary

In the four years between 1945 and 1949, the United States was the only country to possess the atomic bomb. The world watched in awe and horror as the US demonstrated, by bombing Hiroshima and Nagasaki, that it had gained the power to erase cities.

The atomic bomb granted the US unparalleled international influence. However, the nuclear arms race that followed brought the US closer to destruction than at any point in its history. During the Cuban Missile Crisis, for example, President John F. Kennedy estimated the odds that the standoff would escalate to nuclear war at somewhere between 1 in 3 and even.

Today, the US is poised to pursue the development of another destabilizing technology: artificial superintelligence (ASI) — AI systems that vastly exceed human performance across nearly all cognitive tasks. In its 2024 annual report to Congress, the US-China Economic and Security Review Commission's top recommendation was to "establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability." 

The US seems to be moving in that direction. Late last year, the US Bureau of Industry and Security (a federal agency that regulates the export of technologies important to national security) released new AI chip and model export controls that aim to starve other would-be AI superpowers of critical hardware and software. Just before leaving office this January, President Joe Biden signed an executive order directing federal agencies to accelerate US AI infrastructure development. And on the day after President Trump took office, he announced his support for the Stargate Project, a privately funded venture to build new data centers and other AI infrastructure. Stargate’s investors have pledged between $100 billion and $500 billion; for comparison, the Manhattan Project cost around $30 billion in inflation-adjusted dollars.

The race to ASI is motivated not by technological superiority for its own sake, but by ASI’s potential to upend the balance of military power between countries. In the words of Anthropic CEO Dario Amodei, achieving ASI would bring about an "eternal 1991," in which the US and its allies would permanently lock in a geopolitical advantage akin to the one they enjoyed in the years after the fall of the USSR, when the US was the sole global power.

However, this vision is a mirage: like the nuclear arms race, a US race to develop ASI would ultimately undermine its national security. We call this threat the Manhattan Trap.

The Manhattan Trap presents three dangers: ASI threatens to provoke an escalation of conflict between the US and other great powers; a race to develop ASI heightens the risk of losing control of the resulting systems; and ASI may undermine the very liberal democracy it was developed to defend.

Great Power Conflict

Nuclear weapons have made it possible for states to strike infrastructure anywhere in the world. If ASI provides its wielder with a decisive military advantage—as advocates for racing argue—then China would rationally view a US ASI project as an existential threat to its security. Just as nuclear weapons threaten to overwhelm conventional forces, ASI might be able to neutralize nuclear deterrents—for example, through superhuman cyberattacks—undermining the strategic stability based on mutual vulnerability that has prevented great power conflict for decades.

If China's military planners believed the US was on track to develop ASI, they would face a stark choice: accept permanent subordination to US power, or take military action to prevent US ASI development. Such action wouldn't necessarily mean full-scale nuclear war. China might instead attempt limited strikes or debilitating cyberattacks on US AI infrastructure—data centers, chip fabrication plants, and research facilities. 

Chinese strategists emphasize the concept of "asymmetric strategic stability," which relies on maintaining mutual vulnerability between powers. ASI threatens to eliminate this vulnerability entirely.

Given both sides' desire to avoid nuclear conflict, Chinese leaders might calculate that a limited conventional attack or small, tactical nuclear strike could successfully disrupt US ASI development, while staying below the threshold for full-scale strategic nuclear retaliation.

US attempts to protect its ASI project through secrecy may be futile. An ASI project would require massive amounts of compute, which would tip off adversaries who are monitoring energy usage. Likewise, another superpower would be tracking the movements of notable US AI researchers. Even the Manhattan Project, despite wartime secrecy, leaked critical information to foreign powers through human intelligence. The challenge of securing ASI development against both cyber and human intelligence would be just as difficult, if not more so.

In a world where intercontinental ballistic missiles make preemptive strikes possible, ASI’s strategic value — its potential for decisive military advantage — also makes racing to develop it catastrophically dangerous.

Loss of Control

The argument for an ASI race assumes it would grant the wielder a decisive military advantage over rival superpowers. But unlike nuclear weapons, which require human operators, ASI would act autonomously. This creates an unprecedented risk: loss of control over a system more powerful than national militaries.

The challenge of controlling ASI isn't only about preventing malicious use. Rather, it's about ensuring that an autonomous system with unprecedented capabilities reliably pursues its intended goals. Even seemingly benign goals could lead to catastrophic outcomes if an ASI forms an incorrect understanding of what its creators want. As the AI system's capabilities grow, small misalignments between its objectives and ours could compound into major deviations. Recent research suggests that unintended goals can emerge as AI systems become more capable.

The accelerated pace of development implied by an ASI race makes the problem of control even more severe. For ASI to provide decisive military advantage, it would need to not only surpass current military capabilities but also outpace other frontier AI systems being concurrently developed. This suggests an extremely rapid pace of capability improvement—perhaps through automated AI research that allows AI systems to recursively improve themselves. Such a pace would make developing reliable control methods nearly impossible, as existing safety techniques might begin to fail as models quickly become more capable, and the window to develop new methods would be dangerously short.

The competitive pressures of an ASI race would only exacerbate these challenges. Governments would face pressure to cut corners on safety in favor of developing more advanced capabilities. The secrecy requirements of a military program would prevent the kind of open research and broad collaboration needed to solve hard technical problems. We don't need to be certain that controlling ASI would be difficult to see why racing is dangerous. The same assumption that motivates a race—that ASI would grant decisive military advantage—implies that losing control would be catastrophic.

Power Concentration 


Even if the United States somehow develops ASI while avoiding both great power conflict and loss of control, success might still mean failure. The most common argument for racing to develop ASI is that it would allow US liberal democracy to triumph over Chinese authoritarianism. But there's a critical irony: a successful US ASI project would likely destroy liberal democracy from within.

Consider what it means for ASI to provide decisive military advantage over other states. Such a system would, by definition, be more powerful than the combined militaries of global superpowers. The small group controlling this system—whether a government agency, a private company, or some hybrid—would therefore wield unprecedented power not just internationally, but domestically. The ASI's controllers would have a decisive advantage over every other institution in American society, including the military, intelligence agencies, and law enforcement.

This extreme concentration of power is incompatible with democratic checks and balances. The Founders designed the United States’ system to prevent any single institution or individual from gaining such dominance. As James Madison argued in Federalist 51, "ambition must be made to counteract ambition." But how could Congress meaningfully oversee an agency controlling ASI? How could courts enforce their rulings? The traditional constraints on executive power would become merely advisory.

An ASI race would make this problem even worse. The competitive pressure and security requirements of a crash program would demand that ASI be developed quickly and in secret, without public input or democratic oversight. By the time the public learned the full implications, it would be too late—the new power structure would already be in place. Even if the ASI’s initial controllers were benevolent, they would have created a system ripe for abuse. A single corrupt official, a successful coup, or even a sophisticated hack could transform the United States from a democracy into a techno-autocracy more absolute than any in history.


How to Prevent an ASI Catastrophe

The dangers of an ASI race reveal a striking paradox: the same assumptions that favor a race also make it self-defeating. 

If ASI could provide a state with a decisive military advantage, and if states are rational actors who understand this potential, then an ASI race creates three successive barriers that make "winning" nearly impossible. First, a state would need to develop ASI without provoking preemptive military strikes from nuclear-armed adversaries. Then, it would need to solve the control problem under the extreme time pressure of a race. Finally, it would need to prevent the resulting concentration of power from destroying its own political system.

This analysis transforms our understanding of the strategic situation. Rather than being trapped in an inevitable race, the US and China face what game theorists call a "trust dilemma." Unlike a prisoner's dilemma, in which each side is better off defecting no matter what the other does, in a trust dilemma both sides prefer mutual restraint. They will race only if they believe the other is racing.
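The structural difference between the two games can be made concrete with a small sketch. The payoff numbers below are illustrative assumptions chosen only to exhibit each game's incentive structure, not claims about actual state preferences:

```python
from itertools import product

# Actions: 0 = restrain, 1 = race. Payoffs are (row player, column player).
# Prisoner's dilemma: racing is the better reply no matter what the other side does.
prisoners_dilemma = {
    (0, 0): (3, 3), (0, 1): (0, 4),
    (1, 0): (4, 0), (1, 1): (1, 1),
}
# Trust dilemma (assurance game): mutual restraint is the best outcome for both,
# but racing is the safer reply if you expect the other side to race.
trust_dilemma = {
    (0, 0): (4, 4), (0, 1): (0, 2),
    (1, 0): (2, 0), (1, 1): (1, 1),
}

def pure_nash_equilibria(game):
    """Profiles where neither player gains by unilaterally switching actions."""
    eqs = []
    for a, b in product((0, 1), repeat=2):
        row_ok = game[(a, b)][0] >= max(game[(x, b)][0] for x in (0, 1))
        col_ok = game[(a, b)][1] >= max(game[(a, y)][1] for y in (0, 1))
        if row_ok and col_ok:
            eqs.append((a, b))
    return eqs

print(pure_nash_equilibria(prisoners_dilemma))  # [(1, 1)]: both race, the only stable outcome
print(pure_nash_equilibria(trust_dilemma))      # [(0, 0), (1, 1)]: mutual restraint is also stable
```

In the prisoner's dilemma, mutual defection is the unique equilibrium. In the trust dilemma, mutual restraint is itself a stable equilibrium, and the one both sides prefer; the role of verification is simply to sustain the belief that the other side is cooperating.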

Fortunately, ASI development has characteristics that make mutual restraint promising. Unlike general-purpose AI systems, whose military and civilian applications are difficult to distinguish from one another, an ASI project would be relatively easy for intelligence agencies to monitor. First, ASI development would require massive dedicated infrastructure that would be highly distinguishable from normal AI development. The compute clusters, power requirements, and researcher movements involved would be difficult to hide from modern intelligence capabilities. Moreover, since ASI has not yet been integrated into either country's economy, restrictions wouldn't disrupt existing military or civilian systems.

The US and China could combine multiple complementary approaches to verify that neither country is developing ASI. National intelligence agencies could monitor energy usage and infrastructure development. Third-party auditors could verify compute usage and model architectures. On-chip governance mechanisms could help ensure compliance. Taken together, these methods would make secret ASI development extremely difficult while minimizing the revelation of other sensitive information.

Critics might argue that verification would need to be perfect to be worthwhile. But this misunderstands the strategic situation. If both sides strongly prefer avoiding a race, they only need enough verification to maintain basic confidence that the other is cooperating. The extreme dangers of racing actually make cooperation more robust—the worse the consequences of racing, the more tolerance states will have for imperfect verification.

The US and China should begin by establishing a bilateral dialogue on ASI development. This dialogue should include government officials and technical experts who can speak to ASI's implications. The two countries should work to establish shared understandings of ASI's dangers and develop specific verification mechanisms. This bilateral cooperation could form the foundation for broader international governance. The US and China could lead in establishing an international framework for ASI development control, potentially including an agency analogous to the International Atomic Energy Agency. 

President Trump’s unconventional foreign policy style may in fact be well suited to pulling this off. Breaking free of the Manhattan Trap requires America to be strong in the face of Chinese competition. The first step for Trump must be to force China to the negotiating table by taking a strong stand against any attempt to develop ASI, at home and abroad.

Rather than rushing toward catastrophe in a misguided belief that racing is inevitable, the US and China should recognize their shared interest in avoiding an ASI race. Through careful diplomacy and verification mechanisms, they can establish the mutual restraint that both sides prefer. The path to security lies not in racing to develop ASI, but in cooperating to prevent it.

This post was adapted from a report originally published by Convergence Analysis.
