OpenAI — once considered an oxymoron given its closed-source practices — recently released GPT-OSS, the company’s first open language model in half a decade. The model fulfills a recent pledge to again release “strong” open models that developers can freely modify and deploy. OpenAI approved GPT-OSS in part because the model sits behind the closed-source frontier, including its own GPT-5, which it released just two days later.
Meanwhile, Meta — long a champion of frontier open models — has delayed the release of its largest open model, Llama Behemoth, and suggested it may keep its future “superintelligence” models behind paywalls. Meta, which once described open source AI as a way to “control our own destiny,” now cites “novel safety concerns” as a reason to withhold its most capable models.
These decisions mark a dramatic pivot for both companies, and reveal how different AI firms are converging on an unspoken consensus: open source behind the frontier, closed source at the cutting edge. These choices aren’t just commercial. Publicly, both firms are invoking uncertainty about the risks of frontier capabilities to justify a persistent gap between open and closed models.
This emerging consensus deserves scrutiny. As frontier models become deeply integrated across the economy, a precautionary approach to development and regulation could have serious unintended consequences for open innovation. It will leave users and businesses reliant on a handful of firms for paywalled services, making capable AI systems less trustworthy, less helpful, and less secure. If frontier models are to become critical technology, as their developers suggest, we need to manage uncertain risks while promoting openness by default at the frontier.
The revival of interest in open source among US firms and policymakers — prompted by open Chinese models like DeepSeek R1 and Qwen3 — masks deeper ambivalence about releasing open language or reasoning models that push the capability frontier. When integrated with other tools, capable models present both opportunities and risks: they could automate complex tasks and supercharge science, but also generate deceptive content, enable cyberattacks, and lower the barrier to entry for dangerous information. Behind the frontier, these risks are increasingly well understood. But at the frontier, a model’s risk profile may be novel and unfamiliar.
In this environment, open models pose unique challenges for policymakers and developers. All else equal, the open equivalent of a closed model will be more susceptible to unauthorized misuse or modification. Open models can be deployed and misused with limited visibility from the original developer. They can be modified — sometimes with trivial ease — to unwind refusal behaviors, thereby exposing or amplifying unsafe capabilities. Crucially, they may be impossible to withdraw from circulation if those capabilities are identified only after release.
For OpenAI and Anthropic, for example, restrictions on access to model parameters (which help preserve safety-based refusals) and application-layer filters (which help intercept violative inputs and outputs) measurably lower risk across several categories. Keeping models closed makes it possible for these firms to monitor for malicious use, prevent fine-tuning that unlocks dangerous capabilities, and suspend access for unauthorized users, such as the People’s Liberation Army.
But risk management is a shared task. We should be wary of the view that paywalls and firewalls can serve as primary tools of mitigation against catastrophic risk. As with any technology, acceptable mitigation may involve many different actors. For AI, these may include model developers who scrub their datasets and train refusal behaviors; application deployers who filter prompts and monitor activity; and real-world actors who, for example, may block transactions (as in financial services), moderate content (social media platforms), plug vulnerabilities (cybersecurity providers), and control the precursors for dangerous weapons (fertilizer vendors and synthetic biology services).
The fixation on model safeguards as silver bullets has led to a drumbeat of reforms that would chill open model development at the frontier. The open releases of Meta’s Llama 1 and the Chinese DeepSeek R1 were met with alarm by US legislators. Congress has repeatedly introduced bipartisan proposals that would permit only “frontier models with low risk [to] be licensed for open-source deployment.” Such proposals would criminalize the upload — and even the download — of open models. OpenAI and Anthropic have lobbied for developer licensing and export controls for frontier models, while AI researchers call for nonproliferation or enforced gaps — “precautionary friction” — between closed and open releases.
Other jurisdictions, including California, New York, Rhode Island, Massachusetts, and Vermont, have sought to codify or expand the liability of frontier model developers for misuse or modification by third parties. Many of these regulations provide carve-outs for low-cost or low-compute models. Yet these carve-outs establish a regulatory cliff near or behind the frontier. Once open models exceed these arbitrary thresholds — as they have in the EU — developers could face indeterminate liability, with no clear path to compliance. Legislators, agencies, and courts have offered wildly different interpretations of acceptable risk and reasonable care. Their only guidance is a set of inconclusive standards, contested frameworks, opaque risk thresholds, and nascent customs shaped primarily by closed-source practices.
These frameworks put open models at a significant disadvantage, discouraging the open release of capable models with uncertain properties. Closed-source developers, with significant control over the use and custody of their models, can more easily comply with export controls, model licensing, and uncertain liability rules. Open developers, with no custody over their models and more diffuse responsibility for downstream use, must operate in an inherently unfavorable legal environment.
Restrictions of any kind should be a last resort, not a first resort, especially when they limit access to essential information, technology, or services. Instead of implementing untested reforms that unevenly stifle open development, regulators should apply the same approach taken for every other versatile technology of general application. Powerful open models should be permitted by default, and restricted only when they pose an unacceptable risk that cannot be mitigated through less costly means. Determining what is “unacceptable” must weigh proven risks against countervailing benefits.
In practice, this means reforms that chill or restrict open development should be supported by evidence that meets a minimum threshold of confidence. Risk is a product not just of the probability and consequence of harm, but also the confidence of those estimates. Yet, arguments to pause frontier AI, or to restrict it with paywalls, are often based on speculative projections. The 2025 International AI Safety Report noted wide disagreement among experts about the likelihood of losing control over advanced AI systems. It likewise found “no scientific consensus” on the risk of AI being used to manipulate public opinion. For offensive cyber risks, the report determined that “the ultimate impact of AI on the attacker-defender balance remains unclear,” as does “the real-world impact” of AI enabling bad actors to develop and/or deploy pandemic pathogens.
Additionally, there are more fundamental questions about the relationship between model size and capability, as well as the relationship between capability and real-world risk. Because mitigations elsewhere in the supply chain are still effective, and because even the best models currently offer bad actors only negligible advantages, there is little evidence that open models are driving a material increase in catastrophic risk over the baseline today.
We wouldn’t accept restrictions on speech, groceries, electricity, or the web without specific and compelling evidence of harm. We shouldn’t accept such restrictions on a critical technology like AI based on unproven risks.
When the evidence is thin and the opportunity costs are high, precautionary policy can do more harm than good. The practical effect of keeping open models behind the frontier is that every user, developer, and researcher will rely on a few Big Tech firms for capable AI technology. We already depend on a handful of providers for basic digital utilities, from search engines to cloud services and social media. Users and businesses depend on these ubiquitous services while being unable to meaningfully inspect, adapt, or control them.
Similar forms of digital feudalism are playing out in frontier models, where barriers to entry are high and competition is limited. If frontier models are necessary for useful applications, we need equally capable open alternatives to build trustworthy, helpful, and secure tools that solve real-world problems.
Open-source alternatives aren’t inevitable, and the benefits aren’t guaranteed. Frontier models demand immense capital and compute. If continued capability breakthroughs depend on raw scaling rather than on clever optimizations, only a few well-funded firms will be able to develop these models. Only firms with lucrative — and controversial — side hustles may be willing to share them openly without a direct financial return, and even then perhaps only for a time. Because of their significant costs, frontier open models may include restrictions on commercial or widespread use, creating other kinds of financial or technical dependencies for downstream actors.
But, even acknowledging these limitations, promoting open development at the frontier is preferable to the alternative: an economy where workers, small businesses, and public agencies are expected to automate every aspect of their work, but must rent their digital plows from a handful of companies.
Open source does not mean irresponsible, and open development at the frontier does not entail an absolutist commitment to open-sourcing everything. Restricting access to technology is reasonable when that technology is intentionally designed for, or has no plausible application other than, unlawful misuse. Courts have held developers liable for software that facilitates content piracy; authorities have pursued the operators of cryptocurrency mixers for money laundering; and agencies closely regulate the transfer of intrusion software, simulation technology, and weapons control systems under a range of export controls. But these are targeted interventions: gatekeeping access to general-purpose technology is not a sustainable or proportionate response to low-confidence evidence of serious risk.
There are steps we can take to manage uncertainty at the frontier without restricting open development. We can build up the capacity in government to monitor trends of significance, identify emerging risks, and plug gaps in our existing legal framework. Additionally, we can fund research into more robust benchmarking and evaluation standards, which may help to better predict model capabilities and assess how models are likely to be misused and modified in an open environment. These measures are consistent with the Trump administration’s goal of ensuring that the US government remains at the forefront of evaluating national security risks.
If necessary, we could require frontier developers to obtain third-party evaluations prior to release, share their findings with organizations like the US Center for AI Standards and Innovation or the UK AI Security Institute, and document a considered release decision that accounts for known and foreseeable risks. These assurance obligations are relatively straightforward. Big Tech firms cried foul when the Biden administration required model developers to report their red-teaming results, and they continue to resist EU requirements to test models and document their findings. Yet there are few compelling public-interest objections to this minimum level of transparency, which is consistent with the approach taken in sensitive domains such as healthcare, finance, and transportation, as well as the nuclear industry (which is frequently invoked by frontier firms). Greater visibility — an early “heads up” — will help to avoid overbroad or reactive intervention that might stifle open development.
If frontier AI capability becomes indispensable, it should not be monopolized by a handful of firms based on uncertain estimates of catastrophic risk. When Prometheus gave us fire, we didn’t lock it away — we shared it widely while writing fire codes, building fire trucks, and investigating firebugs. We should take the same approach to frontier models, which, if shared openly, can help to build more trustworthy, helpful, and secure AI systems. Policymakers should regulate on the assumption of openness, build the capacity to respond to emerging risks, and avoid restrictions based on low-confidence projections.