One morning in the near future, on far-flung servers many miles from Wall Street, a new type of organization begins buying and selling stock. Its mission: maximize return on investment. It uses a network of AI agents integrated into global trading platforms to buy and sell stock in milliseconds — fast, adaptive, and unburdened by human fatigue.
This is far more sophisticated than today’s algorithmic trading. These agents aren’t just executing trades based on predetermined rules and thresholds. They’re operating autonomously: analyzing markets to identify new opportunities, acquiring controlling interests in companies, and making complex strategic decisions typically left to human traders. By noon, the AI organization owns significant stakes in a dozen firms. It begins using its insider knowledge to front-run trades — an illegal practice akin to insider trading. But there’s another twist: thanks to distributed blockchain technology, the organization’s human owners are completely anonymous. Authorities are unable to identify accountable individuals, and by the time they attempt to intervene, the AI organization has already undermined confidence in the regulator’s ability to stabilize the market.
This scenario may sound speculative, but such AI-enabled Autonomous Organizations (AAOs) are closer to reality than many people assume. While humans may initially build and deploy these organizations, they will be run largely or entirely by artificial intelligence — capable of acting in pursuit of goals, adapting to their environment, and coordinating action at scale, with limited or no direct human oversight. Their human initiators will sit behind the scenes, their identities opaque, insulating them from accountability for any misdeeds. Eventually, the creation of an AAO may itself occur without human involvement at all, the result of autonomous action by a sufficiently resourced AI.
Whether embedded in a legal corporation, deployed on decentralized infrastructure, or emerging from swarms of interlinked agents, autonomous organizations raise a fundamental question: How do we govern and regulate institutions that act autonomously, continuously evolve, and operate without regard to jurisdictional boundaries?
AAOs can be envisioned as digital entities that not only execute tasks, but also plan, reason, and adapt over time — without requiring human instruction beyond their initiation. They have the potential to be astoundingly productive and innovative. With their ability to analyze data, make decisions, act, and reason at superhuman speeds and scales, it’s hard to imagine an industry sector that AAOs couldn’t transform.
The idea of machine-led institutions isn’t new; its intellectual foundations date back decades. But autonomous organizations are more viable than ever before, thanks to recent advances in two key technologies: autonomous agents and blockchain platforms.
Autonomous agents, also called AI agents, have only recently begun to show real-world promise. Unlike conventional generative AI, which responds to individual user prompts, AI agents can carry out complex reasoning and perform multistep tasks with little or no supervision. This makes them essential building blocks for AAOs.
AI agents, however, are still constrained to relatively simple assignments. In a recent experiment, Carnegie Mellon University researchers created a simulated software company staffed with AI agents powered by LLMs from Anthropic, Google, Meta, and OpenAI. The best-performing model completed only 24 percent of its assigned tasks, at a relatively high compute cost per task. That level of performance and reliability is nowhere near what a successful AAO would require. But this obstacle may not last much longer. A study from METR found that AI models are steadily improving at completing longer, more complex tasks with fewer failures, with the length of software engineering tasks they can handle doubling roughly every seven months.
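To see how quickly that trend compounds, consider a rough back-of-the-envelope extrapolation. The sketch below assumes a purely hypothetical starting point of one hour of human-equivalent work per task; only the seven-month doubling time comes from the reporting above, and nothing here should be read as a forecast from METR.

```python
# Back-of-the-envelope sketch: compounding a task horizon that doubles every
# seven months. The seven-month figure is the one cited above; the one-hour
# starting horizon is a hypothetical assumption chosen purely for illustration.

DOUBLING_MONTHS = 7
START_HORIZON_HOURS = 1.0  # illustrative assumption, not a measured value

def projected_horizon(months_ahead: float) -> float:
    """Projected task horizon (in hours) if the doubling trend simply continues."""
    return START_HORIZON_HOURS * 2 ** (months_ahead / DOUBLING_MONTHS)

for years in (1, 2, 3, 5):
    print(f"{years} year(s) out: ~{projected_horizon(years * 12):.0f} hours per task")
```

Under those assumptions, a one-hour horizon grows to roughly a full work week of effort in about three years, and to a couple of months of effort within five: the kind of jump that would make sustained, organization-scale operation plausible.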
Blockchain platforms are most famously associated with cryptocurrency, but their essential function is to provide a decentralized, pseudonymous, tamper-resistant record of transactions that no single party controls. Blockchain technology isn’t essential for AOs, but it gives stakeholders another mechanism for decentralized control of organizational entities. What’s more, several recent blockchain developments make it significantly easier to launch an autonomous organization.
In the crypto ecosystem, blockchain-based AI DAOs — Decentralized Autonomous Organizations augmented with AI — represent a visible early form of AOs. These organizations combine smart contracts — agreements encoded in software that execute automatically when specified conditions are met — with AI capabilities to automate governance and decision-making and to perform various back-office crypto operations.
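For readers unfamiliar with the mechanism, the logic of a smart contract can be sketched in a few lines of ordinary code: value is locked up and released only when a pre-agreed condition is satisfied, with no human able to intervene. The Python below is a conceptual illustration using invented names, not code for any real blockchain, where such contracts are typically written in languages like Solidity and executed on-chain.

```python
# Conceptual sketch of smart-contract logic: an escrow that releases funds
# automatically once a pre-agreed condition holds. All names are hypothetical;
# real smart contracts run on-chain (often written in Solidity), not in Python.

from dataclasses import dataclass
from typing import Callable

@dataclass
class EscrowContract:
    payer: str
    payee: str
    amount: float
    condition: Callable[[], bool]  # e.g., an oracle reporting "delivery confirmed"
    released: bool = False

    def settle(self) -> str:
        """Release the funds if and only if the condition holds; neither party can override."""
        if self.released:
            return "already settled"
        if self.condition():
            self.released = True
            return f"released {self.amount} to {self.payee}"
        return "condition not met; funds remain locked"

# Usage: settlement depends on the coded condition, not on either party's discretion.
delivery_confirmed = lambda: True  # stand-in for an external data feed
contract = EscrowContract("buyer", "seller", 100.0, delivery_confirmed)
print(contract.settle())  # -> "released 100.0 to seller"
```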
Tether, a leading stablecoin issuer, recently launched a platform allowing AI DAOs to accumulate digital assets. The blockchain platform Cardano launched a test environment for AI agents to perform high-frequency trades with one another. Meanwhile, some DAO communities are experimenting with using AI to screen proposals and allocate voting power.
AAOs capable of sophisticated, multistage operations — or widespread, untraceable mayhem — aren’t here yet. There are, however, companies across multiple sectors deploying systems that look like early versions of what’s to come. In finance, algorithmic trading firms like XTX Markets, Vesper, and Bridgewater are using reinforcement learning agents to provide clients with market forecasts. Amazon has deployed adaptive supply chain management systems that operate in many ways like AAOs. Waymo uses edge computing to allow its autonomous vehicles to make complex, on-the-fly decisions without human intervention. While these AI operations remain grounded within existing corporate structures, they demonstrate the potential for fully autonomous organizations that operate without human direction or resources.
AOs are a double-edged sword. They hold enormous potential to drive innovation and efficiency across numerous sectors, but they also threaten to upend markets in unpredictable ways. And it’s not only the business models and offerings within those markets that are in jeopardy. AAOs pose a threat to the mechanisms that keep markets from spinning out of control: regulations.
Today, most regulators design their governance regimes around three foundational assumptions: that regulated entities can be identified, that rules can be enforced against them, and that they operate within defined jurisdictional borders.
AAOs challenge all three assumptions. Identifiability is murky. Enforceability is fragile. Jurisdiction is largely irrelevant.
Conventional corporations are incorporated as legal entities, but some AOs may operate entirely pseudonymously, governed by AI agents or token-based decision rules. Others may be launched by known individuals, only to evolve beyond their original boundaries through recursive self-modification.
Even if some AOs can be tied to human actors, the layers of abstraction — via code, smart contracts, or digital agents — could shield those actors from liability. There may be no “person” or “legal entity” on whom to impose penalties or whose license to revoke.
Like the cloud infrastructure and blockchains they’ll operate on, AOs won’t be constrained by geography. Enforcement tools tied to physical or legal borders will struggle to contain these digital nomads. This creates a regulatory vacuum. If something goes wrong — market manipulation, data abuse, or other societal harm — there is no CEO to question, no headquarters to sanction, and no clear legal recourse. This requires a shift in the approach to governance and regulation.
Governing AOs — whether they are blockchain-native DAOs or cloud-native AI collectives — requires a shift in thinking. We must design frameworks that do not assume human actors are always “in the loop.” Instead, governance must be embedded in code, enforced at the protocol layer, and coordinated across borders. If accountability is opaque, it must be assigned and decision rights clarified.
Policymakers will need to develop adaptive legal standards that mandate transparency, traceability, and accountability — regardless of whether humans or AI make the decisions. For example, any AAO that holds assets or executes trades should keep records in a way that is publicly auditable. Further, AAOs should be required to have mechanisms that allow regulators to intervene when certain events occur — for example, a “kill switch” that regulators can flip when an AAO begins making transactions that threaten to destabilize broader financial markets.
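What might those requirements look like in practice? The sketch below is one hypothetical illustration, not a proposed standard: a trading guard that writes every action to a tamper-evident, hash-chained log suitable for public audit, and that halts immediately when a regulator engages a kill switch or a preset exposure limit is breached. All names and thresholds are invented for the example.

```python
# Hypothetical sketch of two safeguards discussed above: an append-only,
# hash-chained audit log and a regulator-controlled kill switch. The names,
# limits, and structure are illustrative assumptions, not a proposed standard.

import hashlib
import json
import time

class AuditedTradingGuard:
    def __init__(self, daily_notional_limit: float):
        self.kill_switch_engaged = False
        self.daily_notional_limit = daily_notional_limit
        self.notional_today = 0.0
        self.audit_log = []  # in practice, published where regulators and the public can verify it

    def engage_kill_switch(self) -> None:
        """Invoked by the regulator to halt all further trading activity."""
        self.kill_switch_engaged = True

    def _record(self, entry: dict) -> None:
        """Append a hash-chained entry so the log is tamper-evident and auditable."""
        prev_hash = self.audit_log[-1]["hash"] if self.audit_log else ""
        entry = {**entry, "ts": time.time(), "prev": prev_hash}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.audit_log.append(entry)

    def execute_trade(self, symbol: str, notional: float) -> bool:
        """Allow a trade only if the kill switch is off and exposure stays within limits."""
        if self.kill_switch_engaged:
            self._record({"event": "blocked", "reason": "kill switch", "symbol": symbol})
            return False
        if self.notional_today + notional > self.daily_notional_limit:
            self._record({"event": "blocked", "reason": "limit exceeded", "symbol": symbol})
            return False
        self.notional_today += notional
        self._record({"event": "trade", "symbol": symbol, "notional": notional})
        return True
```

In a real deployment, such a log would be anchored somewhere regulators and the public can independently verify it, such as a public blockchain or a notarized registry.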
Regulatory sandboxes can be used to test governance mechanisms before deployment. For example, how does an AI agent handle conflicting incentives? Can we embed compliance logic directly into smart contracts or model weights? Providing regulators with visibility into the underlying performance of AOs operating in regulated markets will be key in a world where real-time analytics form an essential element of systemic risk assessment and mitigation. Sandboxes can give regulators further visibility into emerging behaviors and risks before they cause harm at scale.
Researchers must support this evolution. Legal scholars, AI ethicists, governance bodies, standards developers, and technical experts should collaborate on new oversight models — ones that embed auditability, fairness, and accountability into the foundations of autonomous operation. Scholars might also explore how real-time regulatory agents (AI systems tasked with oversight) can monitor AOs and trigger alerts when their behavior diverges from acceptable norms.
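That last idea, regulatory agents watching other agents, can also be sketched simply. The hypothetical monitor below keeps a rolling baseline of one behavioral metric reported by an AO, such as order volume, and escalates to human reviewers when new observations deviate sharply from that baseline; the metric, window size, and threshold are all illustrative assumptions.

```python
# Hypothetical sketch of a real-time regulatory agent: it tracks a rolling
# baseline for one behavioral metric reported by an AO and escalates to human
# reviewers when a new observation deviates sharply. The metric, window size,
# and threshold are illustrative assumptions.

from collections import deque
from statistics import mean, stdev
from typing import Optional

class RegulatoryMonitor:
    def __init__(self, window: int = 500, z_threshold: float = 4.0):
        self.history = deque(maxlen=window)  # recent observations of the metric
        self.z_threshold = z_threshold

    def observe(self, metric_value: float) -> Optional[str]:
        """Return an alert if the observation diverges from the baseline, else record it."""
        if len(self.history) >= 30:  # require a minimal baseline before alerting
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(metric_value - mu) / sigma > self.z_threshold:
                # Anomalies are kept out of the baseline so a drifting AO cannot
                # gradually normalize its own misbehavior.
                return (f"ALERT: observed {metric_value:.2f} vs. baseline "
                        f"{mu:.2f} +/- {sigma:.2f}; escalate for human review")
        self.history.append(metric_value)
        return None
```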
International coordination will be essential — and challenging — for any regulations to have teeth. No single country can effectively govern AI agents that act globally. Just as international law evolved to govern aircraft, nuclear technology, space satellites, and multinational corporations, so too must it evolve to address AOs. International standards, harmonized regulation, mutual recognition of enforcement, and shared protocols will be needed.
In Jewish mysticism, a golem is a powerful humanoid created from clay and animated through sacred words. Golems obey their maker’s commands — but without a soul, they lack judgment. In many stories, they spiral out of control, bringing ruin where they were meant to serve.
AAOs are digital golems: constructs of immense capability, following the guidance encoded at their creation. But unlike in the myths, we cannot rely on ancient words to undo the harm if things go awry. It’s on us.
We must act now to ensure that as AI takes on institutional form, we embed the safeguards necessary to align it with human values, legal norms, and societal well-being. If we succeed, the economic value and productive impact could be astounding. And if we fail…