This is an excerpt from the authors’ new book, “Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship,” available for preorder now.
Imagine a digital proxy that knows your political preferences as well as you do (or better), tracks every issue on the ballot, and casts votes on your behalf in real time. This vision, once the stuff of science fiction, is quickly becoming technically feasible. Allowing AI to serve as our political proxies sounds radical, but the idea builds on a simple truth about our political system: representative democracy exists because we can’t all be in the room for every decision.
Representative democracy requires elected officials to stand in for the collective preferences of their constituents. The most understandable reason for this is logistical: we citizens don’t all have the time or ability to communicate our preferences directly, and we can’t all fit into the legislative chamber to debate the issues at play.
This system doesn’t always work very well. We elect representatives and send them coarse (sometimes downright obtuse) signals about what we want, then rely on them to extrapolate the details. Elections tend to be framed around a few key issues — immigration, taxes, healthcare — with each candidate lined up on one side of an issue or the other. Whoever wins assumes a mandate to figure out the rest. Sometimes we give a little extra signal, such as up or down votes on a few ballot initiatives. Our elected officials do talk with some of us in person, usually a few at a time, and get bulk signals from polls and social media. The result is that not much information is exchanged between people and their representatives, which can make it hard for the public to ensure that representatives’ actions align with constituents’ preferences.
AI will change democracy like no other technology before. AI can change this. In our new book, “Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship,” we discuss how AI’s impact on democracy is likely to surpass that of any past technology, due in part to its unique ability to turn speech into autonomous action. Unlike passive tools, AI agents can execute tasks (from simple commands to complex actions) on behalf of anyone who can issue an instruction. As AI becomes more integrated into connected systems, its potential influence on political processes, including lobbying or even voting, will grow significantly.
AI could ascertain our views, extrapolate our political preferences, and advocate on our behalf. This might happen first in nongovernmental contexts. AI proxies have been proposed to vote on behalf of investment funds that control corporate shareholder votes. For citizens, an AI personal assistant trained on and continuously attuned to your political preferences could advise you on where to make political donations, which candidates to vote for, and which ballot initiatives to support. Already there are websites that do this after asking you a few questions; an AI personal assistant could be a much more sophisticated virtual guide.
Once AIs are commonly in use within government, they could shift the balance of power between lawmakers and their constituents. In particular, they could make ballot initiatives — direct democracy — a more effective and efficient means of enacting policy, wresting policymaking from the hands of legislators.
Today, most ballot measures are strictly binary: A proposal is precisely defined, and citizens can vote yea or nay. With such a system, even popular policy ideas can fail if the wording of the proposal is slightly off. Proponents of a failed initiative might have to wait years for the next ballot cycle to try again. This limitation has made direct democracy far less flexible than legislating conducted by elected representatives, who have the authority to negotiate, trade, compromise, and refine a bill, and to call votes again and again until they arrive at the finish line.
An AI that learns your preferences could act as your political proxy. Individual voters lack these options because there are so many of us, and because we’re busy. We don’t have time to vote every day and we can’t spend our days researching dozens of issues, or dissecting the language of each of the bills considered by our legislatures.
An AI proxy could develop a good understanding of your policy preferences and could help you decide how you will vote. If your position on a new ballot question is clearly indicated by your past behavior, such as your voting history and past interactions with the AI, the AI wouldn’t even need to consult you. If the AI cannot confidently determine your position, it can ask you clarifying questions. If you trust your AI proxy to provide relevant background information, you might not need to conduct independent research. If AI tools like this are built well, and are embraced by both citizens and officials, government could become more responsive to popular input and get more done. That’s an awful lot of “ifs” and “coulds” about trust, but it’s a natural extrapolation of the advocacy tools we discuss in depth in our book.
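The decision logic described above — vote automatically when confident, ask a clarifying question otherwise — can be sketched in a few lines. This is a minimal illustration, not a real system; the names, data structure, and the 0.9 threshold are all hypothetical assumptions.

```python
# Illustrative sketch of a confidence-gated AI proxy decision.
# All names and the threshold value are hypothetical.

from dataclasses import dataclass

@dataclass
class Prediction:
    position: str      # predicted vote, e.g. "yes" or "no"
    confidence: float  # model's confidence in [0, 1]

def decide(prediction: Prediction, consult_threshold: float = 0.9) -> str:
    """Cast the vote only when the proxy is confident enough;
    otherwise defer to the human with a clarifying question."""
    if prediction.confidence >= consult_threshold:
        return f"cast:{prediction.position}"
    return "ask_user"

# A confident prediction is cast; an uncertain one triggers a question.
print(decide(Prediction("yes", 0.97)))  # cast:yes
print(decide(Prediction("no", 0.55)))   # ask_user
```

In this framing, the threshold encodes how much the voter trusts the proxy: a skeptical voter would set it high and be consulted often, a trusting voter would set it low.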
AI proxies could encourage more voter participation. The most obvious way AI can improve this situation is to help us respond to an increased number of ballot initiatives. Today, nonpartisan voting guides that present the case for and against each initiative on a ballot are a valuable tool for helping voters decide on these questions. These guides often feature several dense pages per question, so they don’t scale well for human readers. If AI could make it easier for you to decide how to vote, people could vote on far more ballot initiatives. Future ballots could have hundreds of items, and be voted on more frequently, allowing voters to control policy with a level of specificity previously unheard of.
Far more speculatively, voters could authorize an AI proxy to vote on their behalf. If voters were individually represented by efficient, effective, trustworthy AI proxies, they could be consulted — through their proxies — more frequently and in greater depth. Ballot initiatives would not need to be binary, or to align with election cycles. Through their proxies, voters could be polled continuously and with an elaborate menu of options, thereby capturing a far more nuanced representation of our individual and collective preferences.
Even more radically, AI could facilitate a new kind of direct democracy, where a combination of voters and their AI proxies collectively make policy decisions. We can imagine a political system where a personal AI is trained on your wants, needs, beliefs, ethics, and political preferences, and then becomes your personal representative in a massive legislature, where millions of these proxies collectively debate and then vote on legislation. If issues you care about most are on the table, your AI assistant could ask you to submit your opinions and votes directly.
This is certainly not democracy as we know it today, but researchers are creating AI tools that could someday enable these new forms of governance. Computer scientists are currently developing a concept called generative social choice, whereby AIs learn individuals’ policy preferences and, collectively and iteratively, generate policy proposals to which a large majority of the humans involved would demonstrably agree. Eventually, systems like this could even propose legislative language.
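The aggregation step at the heart of such a loop can be illustrated with a toy sketch: given candidate proposals (here fixed strings rather than AI-generated text) and voters modeled as approval predicates, keep the proposal that clears a supermajority, or signal that new candidates are needed. The function names and the two-thirds threshold are illustrative assumptions, not part of any published system.

```python
# Toy sketch of the selection step in a generative-social-choice-style
# loop. Proposals are plain strings; voters are approval predicates.

def approval_rate(proposal, voters):
    """Fraction of voters whose preference function approves the proposal."""
    return sum(1 for approves in voters if approves(proposal)) / len(voters)

def select(proposals, voters, supermajority=2/3):
    """Return the most-approved proposal if it clears the supermajority
    bar; otherwise None, meaning new candidates should be generated."""
    best = max(proposals, key=lambda p: approval_rate(p, voters))
    return best if approval_rate(best, voters) >= supermajority else None

# Three voters with simple keyword-based preferences.
voters = [lambda p: "park" in p, lambda p: "park" in p, lambda p: "road" in p]
print(select(["fund the park", "widen the road"], voters))  # fund the park
```

In a real system the "generate new candidates" branch is where the AI does its work, drafting fresh proposal language informed by which past candidates fell short and why.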
AI proxies could help represent the interests of nonvoters in the legislative process: children, animals, even future generations. We could instruct AI proxies to appropriately represent the interests of these groups. This doesn’t equate to extending the franchise to particular AI personas, but it could mean that deliberation and policy formulation include AI representatives that favor policies supporting, for example, forest preservation.
This may sound like fantasy to the uninitiated, but the challenge of internalizing the interests of entities that are not franchise holders is a well-studied problem in economics, law, and political science. UN conventions call for representation of children in legal proceedings. Hundreds of jurisdictions recognize “rights of nature,” analogous to human rights. Developing and integrating AI systems that speak directly on behalf of these out-groups may be one way to accomplish such representation. Going further, if the AI systems currently being developed to communicate with animals make meaningful progress, established ideas of nonhuman representation will take on new significance.
Of course, implementation of AI proxies can go badly wrong. First, as we’ve discussed in so many cases, this would require tremendous trust between voters and their AI proxies. This won’t work if you worry that your proxy might misunderstand your intent, or misrepresent a ballot question. It won’t work if proxies are inherently biased towards someone else’s views, such as the views of the corporation that built them. And it won’t work if you have to worry about your AI getting hacked.
AI might misrepresent you at the ballot. AI proxies might not provide an accurate representation of your political views. An AI that extrapolates from one response to another might be wrong. This is a classic problem in both statistics and AI; there are methods to understand when a predictive system fails to generalize from one domain, such as one type of ballot question, to another. An AI proxy can guess at your response to a new policy question — real or simulated — and check with you on the result. If the proxy gets it wrong, you can tell it so and thereby update both the tool’s understanding of how often it needs to consult you and your own understanding of how reliable it is. Everyone will have a different tolerance for this kind of failure, so the AI proxy could be trained to consult with its owner as often as they feel is appropriate.
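The feedback loop just described — the proxy guesses, you correct it, and it recalibrates how often to consult you — can be sketched as a simple threshold update. The update rule and step sizes here are illustrative assumptions; a real system would use a properly calibrated statistical method.

```python
# Hypothetical calibration step for an AI proxy's consult threshold.
# After each guess, feedback nudges how often the proxy defers to you.

def update_threshold(threshold, was_correct, step=0.05):
    """Raise the consult threshold after a wrong guess (so the proxy
    defers more often); lower it slightly after a correct one."""
    if was_correct:
        return max(0.5, threshold - step / 2)   # earned a little trust
    return min(0.99, threshold + step)          # lost trust, ask more

t = 0.9
t = update_threshold(t, was_correct=False)  # the proxy guessed wrong
print(round(t, 3))  # 0.95 -> it will now consult you more often
```

The asymmetric steps reflect the intuition in the text that everyone has a different tolerance for failure: trust here is lost faster than it is regained, and the bounds keep the proxy from ever acting fully unsupervised or fully paralyzed.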
Democracy is slow, redundant, and frequently backpedals. Policy uncertainty also happens in traditional representative democracy, but it is generally resolved rather slowly. Laws are repealed and replaced, amended, challenged in courts, and updated ad nauseam for years. Sometimes policy changes are so chaotic that their implementation is stalled while the legislature, the courts, or the voters sort out what they really mean. This often indicates an unstable equilibrium, a policy formulation that is vulnerable to easy toppling by pressure from any direction. With better representation of our collective preferences, we might find more stable policy formulations. All of society would benefit if that AI could help lead to policies that were truly better for the people they affect and required less political struggle to implement and maintain.
AI proxies could make minority votes matter more. The most optimistic reason to develop systems like these is to make government more equitable. One problem with representative democracy is that it can represent citizens very unequally. We citizens don’t each get our own representative; entire communities vote to select one person who’s supposed to represent all of us. If your democracy does not employ proportional representation and you’re part of a demographic that is consistently in the minority across districts (possibly due to gerrymandering), you might find yourself persistently unrepresented. Just as bad, many representative bodies, such as the US Senate and the Japanese National Diet, do not apportion representation according to population, so some groups have even less power than their numbers might suggest. Even direct democracy can be inequitable in practice, because voter participation is often lower among less affluent or less motivated groups. If everyone has equal access to an AI proxy, those barriers to participation could be lowered if not eliminated, and everyone could have a more equal say.
More generally, instead of our democratic systems of preference aggregation being constrained by humans’ capacity for complexity, they would be constrained by AI’s much greater capacity for complexity.
A natural extension of representative democracy. How alien would the world be if AIs were casting ballots and making decisions for us on a grand scale? We claim not very. Every denizen of a complex society has a lifetime of experience ceding the most important decisions governing their lives to a faceless machine.
Most of us want to believe that we can intervene when it matters. If an institution acts aberrantly, corruptly, or harmfully, we can apply political pressure to force a change in its behavior. We can vote out the elected officials accountable for it. We can use a ballot measure to force a policy change. We can picket outside the institution’s headquarters. This collective individual expression will bring about change. If not, democracy isn’t working.
AI could erode our democratic instinct. The most foundational concern pertaining to the implementation of AI proxies is that it could contribute to the atrophy of our innate capability for democracy. Aristotle envisioned democracy as a form of self-actualization. In this view, individual political preferences don’t magically appear in our heads, just waiting for an AI to discern them and then to advocate on our behalf. They are created through the process of education, discussion, and debate. The act of doing democracy makes democracy work. If we allow our AI proxies to fight it out and then tell us the result, we lose out on that human interaction. Moreover, if we begin to listen to and believe the AI’s version of our preferences, we could lose any sense of personal conviction. This could easily result in the same sort of feedback loops that propel people towards extreme views, or that entrench and polarize political parties. Replacing humans with AIs could result in worse outcomes in the long term, even if it led to better policy decisions immediately.
Of course, the outcome depends on the application. We might be okay with AIs driving our cars, and with our forgetting how to drive — just as few of us know how to drive a horse and buggy. We’re probably not okay with AI passing all laws while we humans forget how to do democracy. Maybe we will allow the AIs to help us deliberate and build consensus, but nothing more.
The actual decisions made by a government, even in the policy areas about which we care viscerally, are as remote to most of us as are the biochemical reactions in our cells. Parents care deeply about their children’s education, but most don’t have the time to read the texts assigned at public schools — much less to exercise influence over which texts those are. Citizens who care deeply about fairness and justice generally don’t have the faintest idea what cases are being tried inside their local county courthouse, much less what rules are being applied or the qualifications of the prosecutors and judges acting in their name. It’s simply impossible for citizens in a modern democracy to be informed of or to formulate opinions on most of the ways government functions. For generations up to the present day, the solution has been to entrust that authority to institutions. We even trust those institutions to choose technologies that help automate their work.
AI-enhanced democracy depends on citizens staying engaged. AI is just another step down that same path: another technological option for us to use in expressing our policy preferences and for institutions to use in executing their functions. If we can trust the AI, it might even be a positive step.
Most of these scenarios are too far into the future to predict clearly. What we want, right now, is to enhance human civic engagement, even as AI becomes essential as an assistive technology. In “Rewiring Democracy,” we explore more fully how society can maximize AI’s benefits for democracy while minimizing its risks. Overall, we are hopeful that this technology will provide a net boon to societies built upon the bedrock of individual liberties. But like everything in democracy, realizing those benefits will require that we all be as informed and engaged as possible.