The race is on for AGI. Tech companies are in a global race to develop artificial general intelligence (AGI): autonomous systems that perform most tasks as well as a human expert.
In early 2024, Meta CEO Mark Zuckerberg declared that Meta is going to build AGI and “open source” it. He is walking the talk: Meta has invested billions of dollars in the high-powered computing hardware needed to build giant AI systems, and it has openly released its most powerful AI models. In early 2025, representatives of the Chinese company DeepSeek tweeted their intention to build and openly release AGI. US companies (including OpenAI, Google DeepMind, and Anthropic) are also trying to build AGI.
While these companies have not pledged to open-source such a system, recent months have seen a marked shift among US policymakers and AI developers toward support for open-source AI. In July, the White House AI Action Plan highlighted the need for open-source development as part of its plan to accelerate AI innovation. In early August, OpenAI released an open-weight frontier model called gpt-oss, further signaling a growing willingness by major US players to make advanced models openly available. And even if AGI were not deliberately released, there is a strong historical record of AI models being copied, leaked, or stolen.
AGI would have dramatic consequences. It is therefore crucial that the world understands what it would mean for AGI to be developed and proliferated, whether deliberately or inadvertently. As we will see, the prognosis is not good: releasing AGI into the world would be a terrible mistake, one of the most irresponsible actions in history — one that could lead to humans’ being replaced as Earth’s dominant beings. AGI should not be developed until and unless it can reliably be kept under control.
AGI refers to machines that can exceed human competency across many tasks. There are a number of definitions of AGI, and some nuanced taxonomies and alternative terms have been proposed. But, to understand what companies mean when they say they are building it, we can look to what company leaders or founding documents themselves have said. For example, they have variously defined AGI as: “highly autonomous systems that outperform humans at most economically valuable work” (OpenAI's Charter), or as “a machine that can do the sorts of cognitive things that people can typically do, possibly more” (Shane Legg, DeepMind co-founder).
Another useful way to think of AGI is as autonomous general intelligence, emphasizing the fact that it can match or exceed human experts at a breadth of tasks, working completely unsupervised.
AGI promises many performance advantages over human experts. By any of these definitions, AGI would be capable of conducting cutting-edge scientific research, managing companies, crafting legal arguments, making movies, writing books, and designing new and improved software. It could open cryptocurrency wallets (or bank accounts, if allowed), send payments, and rent computer servers on which to run copies of itself — all without any human oversight.
And, while doing all these things at an expert human level, it would also be inhumanly capable in many other ways. What human can generate a new image, video, or novel in seconds? Does any human speak 95 languages or program in 15 programming languages? Are there people who are experts in physics, computer science, bioengineering, law, and dozens of other disciplines, all at once?
Moreover, digital intelligence has structural advantages. An AI model that learns a new skill, as AGI would, can simply be copied to become many AIs that know that skill, given the computer hardware to run them on. AI models can share “weight updates” that instantly transmit new knowledge and capability to other AIs. And computer power can simply be scaled up to make an AI run faster. A given human runs at only one speed.
AGI was once a hypothetical concept mused over by philosophers. Not anymore. Executives at Anthropic, Google, and other major AI companies have said that they expect the technology within a decade, and some predict it will arrive much sooner. These timelines line up with the forecasts of many individual AI experts and seasoned predictors of technological progress.
Skeptics who are unswayed by the evidence that AGI is coming should consider instead the intent. If someone is trying very, very hard to do something that is potentially dangerous to humankind, isn’t it at least worth asking: should they be trying?
Trying to achieve AGI is a dangerous act. Some of the largest companies on Earth are collectively spending hundreds of billions of dollars and employing many thousands of people to build AGI. In cost and scale, this effort greatly exceeds both the Manhattan Project and the Apollo program. What happens if they actually succeed? And what if it gets released widely into the world?
It is crucial to understand that any AGI that is made open-source, or insufficiently protected against leaks, would proliferate to a vast number of users. And this widely available AGI would, sooner or later, have no guardrails.
Even well-designed AI guardrails can fail when models are released. Many AI systems today do have guardrails. If you ask ChatGPT, Gemini, Claude, or many other online AI platforms to help you design a bomb or write hate speech, they will refuse. This is not because they are intrinsically moral, but because they have been trained to act in line with general ethical precepts and to refuse certain types of requests. Moreover, using the platforms in these ways violates their terms of service. These guardrails can be circumvented: savvy users can “jailbreak” online platforms, forcing them to generate banned outputs. But, with effort, platforms could address the weaknesses that allow jailbreaking. They could also monitor customer use, shutting down egregious violations of their terms or the law.
Now consider an AI model that is openly released. In that scenario, the model weights (the billions or trillions of numbers that describe the AI’s neural network), the architecture, and potentially the source code are made available. This could also happen unintentionally, from a theft or leak of such information. A sufficiently capable AI — such as an AGI or one of its precursors — could even autonomously extract and transmit its own weights and source code to the outside world, a process known as “self-exfiltration.” In any case, the ease of copying software and the utility of the model would drive widespread proliferation.
For AI that is anything like today’s, that widespread availability would completely undermine any guardrails or safety measures built into the system, because the very training process that embeds guardrails can just as easily be used to strip them away. Again, history bears this out. Shortly after each of Meta’s Llama releases, users created “uncensored” versions of the models. Similarly, Unstable Diffusion was forked from the open-source Stable Diffusion and has been used for non-consensual deepfake sexual abuse. Although training giant AI systems from scratch is extraordinarily expensive, the “fine-tuning” necessary to disable safety features is not. (In the case of Llama 2, it was done for about $200 in server fees.)
It is quite clear that, soon after any powerful AI system with safety guardrails is released, we will see copies of it available with those restrictions removed, because someone with the know-how will want the AI to do things that it currently won’t. Or they’ll try to remove restrictions, “just for the lulz.”
AGI without guardrails will be capable of causing immense harm. But, with AGI, the risk isn’t simply that somebody will build a chatbot capable of saying uncensored spicy things or generating hateful and harmful content. It’s that an ultraclever agent could go and take actions in pursuit of a wide range of goals, no matter how potentially disruptive or destructive those goals may be. It would be an expert in everything, yet carry none of the responsibility that usually comes with expertise. Imagine it as a nuclear physicist and a virologist and a software security expert, but without conscience or compunction about building bombs, biological weapons, or computer viruses.
Finally, and probably worst, an AGI would be smart enough to acquire the resources it needs to hide, protect, defend, maintain, and improve itself. This would allow the AGI to accumulate power, with no internal checks on how that power is exercised.
Unleashing AGI would put criminals, rogue states, and terrorists on equal footing with governments. Proliferating AGI without guardrails or alignment would constitute an immediate and enormous security threat. It would, for example, instantly demolish the huge lead in a wide variety of strategic technologies — and brainpower — that the US and its allies hold over criminal organizations, rogue states, and terrorists. This would be tantamount to sending top US experts in nuclear technology, virology, computer security, and other critical domains to share their knowledge with North Korea.
AGI could also help defend against such security threats, of course. But the US and its allies already have top-notch physicists and virologists. Their security would not benefit from leveling the playing field relative to their enemies.
The unprecedented risk is rogue AGI. It gets worse. Proliferating expertise in weapons of mass destruction and disruption should keep national security experts up at night. But many AI experts believe those risks are secondary. They worry that, without strong safeguards, nothing would prevent AGI from abandoning its obedience to humans.
In other words, proliferating AGI, freed of constraints, would constitute an invasive species of intelligence on Earth. Consider the cane toad, the zebra mussel, and the kudzu vine. These are just three examples of biological species that were introduced, either deliberately or accidentally, into very welcoming and relatively undefended environments in which they could proliferate. Once established, these invasive species are incredibly difficult to control or eradicate.
AGI would become an intelligent invasive species. Our current digital world would constitute just such a welcoming environment for a released AGI, which would be fully capable of doing the key things that make biological species invasive: acquiring resources, protecting itself, and multiplying.
And unlike other invasive species, AGI would have copious intelligence — as much as or more than even the most intelligent, highly educated, and expertly trained humans. Insofar as AI systems have goals (whether given to them by humans or driven by their own opaque logic), they would benefit from gathering and controlling resources. They might take jobs and run other schemes to make money, persuade (or hire) people to work with them, advocate eloquently for themselves, and in general do all of the things humans and groups of humans do. Only they’d do it faster, cheaper, and probably better.
AGI might quietly replace humans. What would such an invasion look like? Nobody knows for sure, but one scenario, called gradual disempowerment, might not be too different from what is already happening in frontier AI. There’d be exciting new AI products with features that make them seem more and more human. There’d be debates about whether the new AI is really sentient, or just seems that way. There’d also be debates — including powerful arguments from AGI itself — on whether an AI should be allowed to give direct medical advice, serve as a lawyer, or run a company. (Meanwhile, AGIs would de facto be doing all three, and much more besides.)
While those debates raged, AGI would be integrated into just about everything. Increasing numbers of people would be squeezed out of their jobs. When online, most users would find it almost impossible to determine whether they are interacting with humans or bots.
Then weirder and weirder things would start being reported in the news: people claiming that coalitions of AI systems were influencing events; massive market disruptions from AI-powered hedge funds; major hacks with unclear motivations and no discernible culprit. It would be difficult to tell if these events really happened and how seriously to take them.
People would increasingly call for more regulation to slow AGI down and rein in Big Tech. But it would be too late. By then there wouldn’t be any human institutions left with the power to quickly and decisively change course. AGI would be everywhere, integrated into the government, boosting productivity, creating billion-dollar companies overnight, generating most of the news, arguing convincingly on its own behalf, and doing most jobs — including a majority of society’s most impactful decision-making jobs — much more efficiently than humans. Earth would have two types of tech-and-tool-using intelligence, and one would be rapidly outpacing the other.
AI is already replacing humans. Let’s return to the seemingly hyperbolic title of this piece. We can talk about two possibilities: (1) the replacement of humans and (2) the replacement of humanity.
The replacement of humans is perhaps obvious: it’s already started. Until a few years ago, essentially every piece of text, image, or video in existence was the product of a human mind. Now, AI-generated media is suddenly everywhere. An unknown but appreciable fraction of new text on the internet is generated by AI. Several of the top YouTube channels are rapidly being taken over by AI-generated content. Replika, a chatbot designed to provide companionship, reported over 2 million active users in 2023, with a significant percentage using it for emotional support or simulated romantic relationships. Apps like Character.AI (which allows users to chat with AI personas that include celebrities, therapists, and fictional lovers) have reached millions of monthly users, with average session times surpassing two hours. Meanwhile, businesses and individuals alike are increasingly relying on AI assistants for everything from scheduling and emailing to providing therapy and career advice. Anthropic CEO Dario Amodei recently warned that AI could wipe out half of all entry-level white-collar jobs within the next five years. Other CEOs have made similar predictions. A new startup called Mechanize just launched, aiming to accelerate this trend of putting humans out of work.
All of this has happened just two years into the advent of generative AI.
What does this trend look like after a few more years, when AI is in everything? By then, most cultural products may be made by machines; most programs, written by or with machines; most decisions, made (or advised) by machines, based on information generated and assembled by machines. What happens when powerful AI systems replace or just outcompete not just human workers but their managers and CEOs?
AGI will drastically accelerate current trends of human replacement. Even if it’s largely kept under control, AGI will bend the curve of human replacement sharply upward. But what happens if it is openly released, so that it proliferates without guardrails or control? Here we face the central problem. While AGI could do nearly all of the intellectual work humans can, it could also do something humans very much cannot: improve and extend its own basic mental architecture, evolving its capabilities with no human help, in a period of years or less.
Why would AI seek to improve itself? Why would it not? Almost any goal can be better achieved if one is more capable. AGI systems would face competitive pressures, just as people do. However, we do not have the option to upgrade ourselves with bigger or faster brains or to create copies of ourselves with which to cooperate. AI would.
AGI would become a new, dominant form of intelligence. At some point, if AGI continues to upgrade its brainpower and create cooperative clones, it would no longer be a second intelligent species, sharing Earth with humans. It would be a successor species, replacing humans as Earth’s dominant beings.
Although such a future would be profoundly unpredictable, such a succession seems virtually inevitable if AGI is widely released. In this future, these machines are running the economy, running the companies, creating the technologies, generating the ideas, creating the media, and determining what is true. And, unbound by biological timescales, they are doing all of this with speed and sophistication completely unavailable to people. We are not in a position to “turn them off” or fight back. Humanity’s role as custodian of Earth would end.
If AGI did usurp control from humanity, some might assume that it would be kind to us. We might be useful to AGI — at first. But the interests of AGI would almost inevitably diversify and diverge from ours, as independent individuals’ and groups’ interests almost always do. Would this future be good, or tolerable, or a hellscape for humanity? What would happen to the biosphere? Would we even continue as a species? We don’t know, and it wouldn’t be up to us.
AGI is not like other technologies. Developers of powerful AI systems are keen to portray them as tools to empower humanity. And those developers who are making such tools widely available, including by openly releasing them, may genuinely want to “democratize” access to them, to empower many people. But it is critical to differentiate between powerful tools, which remain in the hands of people, and smarter-than-human autonomous machine intelligence, which is the goal of many AI companies. Anyone arguing that AGI will be like other big technological innovations (like crypto or even the internet) is either misinformed, disingenuous, or not taking it seriously.
Take AGI seriously. Not as a hypothetical but as an encroaching reality. After all, OpenAI does. Microsoft does. Google does. Meta does. And they are backing it up with hundreds of billions of dollars of investment, pivoting their whole business strategies around it. Essentially all reputable major figures in AI research believe that AGI is possible, and a significant portion of these expect it, on our present course, to be deployed within a few short years.
AGI is unpopular with the public and experts. We should therefore ask: should we stay on this course? The general public is strongly against developing AGI. Developing it carries many major risks, including large-scale catastrophes, and even many of the AI researchers building it are afraid of the dangers it poses to humanity. This essay highlights another risk: under current conditions, if AGI is developed, it would almost inevitably become widely available, uncontrolled, and, soon after that, uncontrollable.
Tech giants are ignoring the risks. And yet, the major AI companies are proceeding as if they had a mandate on behalf of the rest of us. The global race to build AGI has locked companies into a set of incentives that are simply incompatible with the interests of humanity. Even companies (e.g., DeepMind and Anthropic) that cite the advantages of proceeding more slowly and carefully have been pressured into racing by these incentives. Those demonstrating their intention to develop and openly release AGI clearly signal that they have little to no regard for the risks or consequences of the technologies they are making or for the irreversibility of the actions they are taking.
In principle, it is not impossible to build AGI while also maintaining human dominance on Earth. However, this would have to be a cooperative, deliberate, and international effort quite different from the path we are currently on.
Developing AGI should be a collective decision. What should be done about this? There’s one sense in which this is very simple: stop giving advanced AI an exemption from laws that would meaningfully protect society, and treat it like other high-stakes technologies. Humanity would never countenance an unregulated race to develop and deploy ever more powerful nuclear reactors with zero government involvement or oversight. We would never allow a widespread release of brand-new, synthetic biological creatures (and certainly not superintelligent ones!) without serious national and international conversation and debate.
Humanity must have that conversation, and it needs to include everyone, not just the tech giants and the politicians they are lobbying. There are strong reasons not to develop AGI — let alone its successor, superintelligence — at all. There are also proponents of AGI, touting its transformative promise in science, health, education, and the economy.
But, given the profound risks and unknown outcomes, it is against the interests of every human being on Earth to develop AGI if we cannot control it, or worse, if we are not even trying to control it. And to be clear: the latter is what is happening right now.
Reining in the runaway race toward AGI wouldn’t be easy. The AI industry is fueled by enormous investments, and it is proceeding with high momentum. Money talks — loudly — and time is short. So, while policy proposals exist (which would, for example, impose strong liability, enforce safety and security standards, and require tracking and control of high-powered AI chips), widespread pressure and bold action will be needed to implement them.
Let us create that pressure. Let us require our governments and other institutions to take that bold action.