The United States and China are in a fierce competition to develop and deploy more capable AI systems, as well as to control the AI supply chain. Each side is driven by logical geopolitical and economic objectives. However, unchecked escalation from either carries serious risks that could undermine global security.
Although China has historically lagged in AI development, it has spent the last decade heavily investing in semiconductor supply chain autonomy and is continuing to invest across the ecosystem. Washington, meanwhile, is not taking its current lead for granted. The Trump administration is pursuing strong measures, such as the AI Action Plan, to continue leading the world in this critical technology.
But the US should not default to a posture of unmitigated confrontation that would imperil not just the US and China, but the world. Leading companies and independent experts have stated that AI systems may soon pose catastrophic risks to humanity. These include aiding cyber and biological attacks, and the possibility that humans may lose control over these systems. OpenAI recently stated, when discussing the capacity of their models to aid in creating biological weapons, that they “expect current trends of rapidly increasing capability to continue, and for models to cross [OpenAI’s threshold for ‘high’ risk] in the near future.”
US policy can take these threats to global security seriously, while still advancing its strategic goal of leading AI development. A dual-track strategy is needed, one that reflects the urgency of prioritizing strategic competition as well as the reality that China will inevitably develop advanced AI — even if their capabilities remain behind those of the US.
Despite the widely held belief that Washington and Beijing are locked in a zero-sum conflict over AI, the Trump administration’s transactional approach and focus on bilateral deal-making may provide unexpected openings for cooperation with Beijing to mitigate shared risks from increasingly dangerous AI systems.
The gap between American and Chinese AI has closed substantially in recent years. Today, the best American AI systems tend to be matched by Chinese ones within months, and the performance gap at any given moment is smaller than it once was.
Washington has sought to widen this gap by controlling exports of chips and other critical AI hardware to China, which has had the unintended effect of pushing Chinese researchers to be more efficient with the chips they can access. Researchers at DeepSeek, for example, squeezed maximum performance out of a limited number of chips when developing the breakthrough R1 model, which outperformed several top US models at a fraction of the compute.
On net, export controls on both chip-building equipment and on chips themselves have slowed the rate at which China is catching up, but they are only postponing the inevitable: a world in which China has very powerful AI capabilities, even if they do not equal American AI capabilities.
The idea of a unipolar artificial general intelligence (AGI) scenario — where the US alone has very advanced AI capabilities, as advocated by Anthropic CEO Dario Amodei — is contrary to the nature of AI technology. The last decade has shown that scale and skill are the key things needed to achieve powerful AI capabilities. That is, scaling up ever-larger neural networks, trained on ever-larger and higher-quality datasets, by very skilled researchers, goes a very long way to achieving AI breakthroughs.
China lacks scale for now, but has more than enough skilled engineers to build very powerful systems. That makes the main remaining question one of when, not if: Washington cannot indefinitely forestall China’s acquisition of more computing power.
AI “scaling laws” give us additional reason to think that a multipolar scenario is inevitable. This area of research shows that as you scale up computing power and data, you continue to get returns — but these returns “scale” in a diminishing way, such that a tenfold increase in computing power gives you much less than a tenfold increase in performance. Put another way, the US can’t count on scaling forever.
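The diminishing returns described above can be made concrete with a toy power-law model. The exponent below is purely illustrative (real empirical estimates vary by task and metric), but any exponent below 1 produces the same qualitative picture: a tenfold increase in compute buys far less than a tenfold increase in performance.

```python
# A minimal sketch of diminishing returns under a power-law "scaling law".
# ALPHA is a hypothetical exponent chosen for illustration, not an
# empirical estimate from any published scaling-law study.
ALPHA = 0.3

def performance(compute: float, alpha: float = ALPHA) -> float:
    """Toy performance metric that grows as compute ** alpha."""
    return compute ** alpha

base = performance(1e24)    # performance at a baseline compute budget
scaled = performance(1e25)  # performance after a tenfold compute increase

# With alpha < 1, 10x compute yields roughly 10 ** alpha ≈ 2x performance,
# far short of a tenfold improvement.
print(f"10x compute -> {scaled / base:.2f}x performance")
```

Under this toy model, each successive order of magnitude of compute buys the same modest multiplier, which is why raw scale alone cannot sustain an indefinite lead.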
Given these considerations — along with the fact that AI systems are likely to pose catastrophic risks to people everywhere — Washington should cooperate with China to mitigate risk. And it can do so without giving up on vigorous competition.
Obviously, the US and China share an interest in the continued survival of the human race, which many experts increasingly believe could be threatened by superintelligent AI systems that escape our control.
It’s in between the extreme upside and downside possibilities of AI — and in particular, questions about relative gains and the distribution of power — that American and Chinese interests come apart. Disentangling competition from cooperation on AI is challenging both because it’s an inherently dual-use and general-purpose technology, and because the same events often have implications both for competitive standing and for wide societal benefits.
Indeed, even while saying that DeepSeek’s R1 launch earlier this year “should be a wake-up call” for the American AI sector, President Trump also noted that Americans will stand to benefit from the greater efficiencies that DeepSeek has achieved.
Fortunately, balancing competition and cooperation for a general-purpose technology is not unprecedented. Consider an analogy to nuclear energy and nuclear weapons: During the Cold War, the US and the Soviet Union vied for nuclear dominance for decades. Commercial interests from both nations competed to sell nuclear power technology to developing countries, and military leaders maneuvered to secure strategic nuclear advantages over their counterparts on the other side of the Iron Curtain.
Meanwhile, the two great powers also signed several mutually beneficial arms control agreements, supported an international regime on nonproliferation, and established “nuclear hotlines” to avoid unintended escalation in crisis situations.
To govern AI at least as well as, and ideally better than, nuclear technologies, the US should attempt to reassure Beijing that the US does not mean to destabilize their authority. Instead, US leaders should signal they are interested in cooperating with China on developing guardrails that keep either side’s systems from causing catastrophic harm.
How the US frames and pursues its AI leadership will affect Chinese perceptions and reactions.
Reassuring Beijing that America does not intend to use AI to interfere in other countries’ internal affairs could give Chinese leaders more breathing room to take appropriate precautions with regard to AI development and deployment. Conversely, sending the impression that US AI leadership is an existential threat to the Chinese Communist Party’s (CCP) domestic power could push Beijing to take extreme competitive actions. These could range from cutting corners on safety to declaring war.
It’s worth briefly considering the possible stakes of escalating conflict over AI. If the US pursues AI primarily to achieve military supremacy and political influence abroad (e.g., stirring up dissent within China), rather than broad-based national and global economic growth, the CCP will be more likely to perceive American AI as an existential threat. Chinese leadership may then pursue disruptive reactions like accelerating plans to invade Taiwan, targeting semiconductor chokepoints, or consolidating their AI sector around a national champion capable of overtaking the US.
If AI were perceived as existential to continued CCP rule (which does not seem to be the case quite yet), Beijing could make a dramatic push towards a 5 or 10 gigawatt datacenter much faster than the US could. While the Biden and Trump administrations have taken steps to remove bureaucratic obstacles to building new energy sources, it’s unlikely the US can beat China at energy infrastructure scaling.
The US and China also need to cooperate to prevent non-military AI disasters. Even as the two superpowers push the state of the art in AI, it becomes easier for others to replicate these systems. This increases the risk that a rogue actor might use advanced machine learning to cause harm. For example, they might leverage an LLM fine-tuned on biochemistry to invent a novel pathogen, synthesize it, and unleash it on the world. Eliminating the risk of such misuse altogether is difficult, but it becomes more feasible if the two countries work together. They might draw inspiration from the Frontier Model Forum, established to advance AI safety among leading tech companies and other organizations.
We don’t think this is a categorical issue, where all capabilities should be competed on and all safety and security should be shared — things are not that simple, and in practice, these areas are deeply intertwined. But the same could be said of nuclear technology. We believe that with concerted effort, a balance can be struck, and we are encouraged to see early efforts identifying potentially promising topic areas for dialogue and collaboration between the US and China.
Political junkies in the US might feel like Trump’s America First style of leadership is antithetical to a collaboration with China on AI safety.
Perhaps surprisingly, given how common zero-sum rhetoric has become, we think the middle path between cooperation and competition is more politically plausible than many think.
Scholar Dean Ball has argued that Republicans might actually take a more focused approach to catastrophic AI risks than Democrats. For example, the Biden administration tended to bundle AI policy with a broad range of social issues rather than focusing on its singular risk profile. Indeed, President Trump and key figures surrounding him have demonstrated serious, common-sense concerns about the safety and security risks from advanced AI. Trump’s January 25 comment about AI becoming "the rabbit that gets away" suggests he has an awareness that losing human control over powerful AI systems is a key risk to guard against. If this awareness is translated into policy, it could form the basis for serious bilateral action.
Despite his current administration’s resistance to multilateralism, Trump’s first term was oriented toward deal-making driven more by shared interests than shared values. A return to this kind of focused, transactional strategy could yield bilateral agreements on AI safety. If the US is no longer prioritizing the primacy of a liberal international order — a longstanding source of friction with China due to differences in everything from human rights to environmental policy — then it may become easier for the two nations to cooperate on singular policies that help protect against broader existential risks. Elon Musk could also play a role in shaping these policies. He has influence within the Trump administration, business ties with China, and is a longstanding advocate for AI risk mitigation.
Cooperation on AI safety would also serve US economic interests. Indeed, clear American leadership on safety and security — and a commitment to sharing lessons learned where appropriate — could improve the appeal of American products in global markets. The Space Race provides an illustrative example of how international cooperation can spur US competitiveness.
After the original “Sputnik moment,” it became politically critical in the US, and for America’s prestige abroad, to compete vigorously in space technology and space exploration. But the US tried hard to bring the world along in its achievements. The US built and led institutions like Intelsat and worked to avoid space collisions and debris, safeguarding this new frontier for all countries. The US can take inspiration from this history and aim to lead on both AI technology and AI governance.
In his inaugural address, President Trump said he wants to be remembered as a peacemaker and a unifier. Whether he can balance those ambitions with his drive to advance American AI — and negotiate a meaningful “AI deal” — may come to define not only his second presidency, but the security of the entire globe.
There are various ways the US can hedge against spiraling international conflict over the militarization of AI. The US-led AI Political Declaration, now endorsed by 58 countries, provides a key framework that sets a shared standard for responsible military AI use. The Pentagon should also explore ways to back these rules with confidence-building measures.
As work on these efforts progresses, the US and China will need to agree on how to make their commitments verifiable by one another — likely by tracking the use of computing power. The two superpowers can look for inspiration in the Cold War era’s Joint Verification Experiment for verifying nuclear treaties.
More generally, the US can signal that it is not racing recklessly towards an AI war with China by taking measured steps to protect its own security. Towards this end, the US federal government can invest materially in security-related efforts that are in the global interest rather than just in the national interest, such as patching vulnerabilities in open source code at scale. By doing so, the US can show that American AI leadership can benefit everyone, and make it more likely that allies support American, rather than Chinese, AI leadership.
It will be difficult to find a middle ground between naively trusting Chinese overtures on safety and viewing AI entirely in zero-sum terms. Negotiations will be difficult during the current trade war, but we have limited time to spare given the rapid pace of progress in AI capabilities.
The current policy window — before Beijing fully mobilizes its industrial base toward AI development — is an opportunity to establish global safety and security. Proactive steps now may prevent more dangerous dynamics from emerging later.
Acknowledgments: Thanks to Jordan Schneider, Michael Horowitz, Larissa Schiavo, Jason Hausenloy, and Jeffrey Ding for helpful feedback on earlier versions of this post. The views expressed here are the authors’ own.