Rethinking democracy for the age of AI

There’s a lot written about technology’s threats to democracy. Polarization. Artificial intelligence. The concentration of wealth and power. I have a more general story: The political and economic systems of governance that were created in the mid-18th century are poorly suited for the 21st century. They don’t align incentives well. And they are being hacked too effectively.
At the same time, the cost of these hacked systems has never been greater, across all human history. We have become too powerful as a species. And our systems cannot keep up with fast-changing disruptive technologies.
We need to create new systems of governance that align incentives and are resilient against hacking … at every scale. From the individual all the way up to the whole of society.
This text is the transcript of a keynote speech delivered at the RSA Conference in San Francisco on April 25, 2023.
For this, I need you to drop your 20th century either/or thinking. This is not about capitalism versus communism. It’s not about democracy versus autocracy. It’s not even about humans versus AI. It’s something new, something we don’t have a name for yet. And it’s “blue sky” thinking, not even remotely considering what’s feasible today.
Throughout this talk, I want you to think of both democracy and capitalism as information systems. Socio-technical information systems. Protocols for making group decisions. Ones where different players have different incentives. These systems are vulnerable to hacking and need to be secured against those hacks.
We security technologists have a lot of expertise in both secure system design and hacking. That’s why we have something to add to this discussion.
And finally, this is a work in progress. I’m trying to create a framework for viewing governance. So think of this more as a foundation for discussion, rather than a road map to a solution. And I think by writing, and what you’re going to hear is the current draft of my writing — and my thinking. So everything is subject to change without notice.
OK, so let’s go.
We all know about misinformation and how it affects democracy. And how propagandists have used it to advance their agendas. This is an ancient problem, amplified by information technologies. Social media platforms that prioritize engagement. “Filter bubble” segmentation. And technologies for honing persuasive messages.
The problem ultimately stems from the way democracies use information to make policy decisions. Democracy is an information system that leverages collective intelligence to solve political problems. And then to collect feedback as to how well those solutions are working. This is different from autocracies that don’t leverage collective intelligence for political decision making. Or have reliable mechanisms for collecting feedback from their populations.
These systems of democracy work well, but have no guardrails when fringe ideas become weaponized. That’s what misinformation targets. The historical solution for this was supposed to be representation. This is currently failing in the US, partly because of gerrymandering, safe seats, only two parties, money in politics and our primary system. But the problem is more general.
James Madison wrote about this in 1787, where he made two points. One, that representatives serve to filter popular opinions, limiting extremism. And two, that geographical dispersal makes it hard for those with extreme views to participate. It’s hard to organize. To be fair, these limitations are both good and bad. In any case, current technology — social media — breaks them both.
So this is a question: What does representation look like in a world without either filtering or geographical dispersal? Or, how do we avoid polluting 21st century democracy with prejudice, misinformation and bias? Things that impair both the problem solving and feedback mechanisms.
That’s the real issue. It’s not about misinformation, it’s about the incentive structure that makes misinformation a viable strategy.
This is problem No. 1: Our systems have misaligned incentives. What’s best for the small group often doesn’t match what’s best for the whole. And this is true across all sorts of individuals and group sizes.
Now, historically, we have used misalignment to our advantage. Our current systems of governance leverage conflict to make decisions. The basic idea is that coordination is inefficient and expensive. Individual self-interest leads to local optimizations, which results in optimal group decisions.
But this is also inefficient and expensive. The U.S. spent $14.5 billion on the 2020 presidential, senate and congressional elections. I don’t even know how to calculate the cost in attention. That sounds like a lot of money, but step back and think about how the system works. The economic value of winning those elections is so great because that’s how you impose your own incentive structure on the whole.
More generally, the cost of our market economy is enormous. For example, $780 billion is spent worldwide every year on advertising. Many more billions are wasted on ventures that fail. And that’s just a fraction of the total resources lost in a competitive market environment. And there are other collateral damages, which are spread non-uniformly across people.
We have accepted these costs of capitalism — and democracy — because the inefficiency of central planning was considered to be worse. That might not be true anymore. The costs of conflict have increased. And the costs of coordination have decreased. Corporations demonstrate that large centrally planned economic units can compete in today’s society. Think of Walmart or Amazon. If you compare GDP to market cap, Apple would be the eighth largest country in the world. Microsoft would be the tenth.
Another effect of these conflict-based systems is that they foster a scarcity mindset. And we have taken this to an extreme. We now think in terms of zero-sum politics. My party wins, your party loses. And winning next time can be more important than governing this time. We think in terms of zero-sum economics. My product’s success depends on my competitors’ failures. We think zero-sum internationally. Arms races and trade wars.
Finally, conflict as a problem-solving tool might not give us good enough answers anymore. The underlying assumption is that if everybody pursues their own self-interest, the result will approach everybody’s best interest. That only works for simple problems and requires systemic oppression. We have lots of problems — complex, wicked, global problems — that don’t work that way. We have interacting groups of problems that don’t work that way. We have problems that require more efficient ways of finding optimal solutions.
Note that there are multiple effects of these conflict-based systems. We have bad actors deliberately breaking the rules. And we have selfish actors taking advantage of insufficient rules.
The latter is problem No. 2: What I refer to as “hacking” in my latest book, “A Hacker’s Mind.” Democracy is a socio-technical system. And all socio-technical systems can be hacked. By this I mean that the rules are either incomplete or inconsistent or outdated — they have loopholes. And these can be used to subvert the rules. This is Peter Thiel subverting the Roth IRA to avoid paying taxes on $5 billion in income. This is gerrymandering, the filibuster, and must-pass legislation. Or tax loopholes, financial loopholes, regulatory loopholes.
In today’s society, the rich and powerful are just too good at hacking. And it’s becoming increasingly impossible to patch our hacked systems. Because the rich use their power to ensure that the vulnerabilities don’t get patched.
This is bad for society, but it’s basically the optimal strategy in our competitive governance systems. Their zero-sum nature makes hacking an effective, if parasitic, strategy. Hacking isn’t a new problem, but today hacking scales better — and is overwhelming the security systems in place to keep hacking in check. Think about gun regulations, climate change, opioids. And complex systems make this worse. These are all non-linear, tightly coupled, unrepeatable, path-dependent, adaptive, co-evolving systems.
Now, add into this mix the risks that arise from new and dangerous technologies such as the internet or AI or synthetic biology. Or molecular nanotechnology, or nuclear weapons. Here, misaligned incentives and hacking can have catastrophic consequences for society.
This is problem No. 3: Our systems of governance are not suited to our power level. They tend to be rights based, not permissions based. They’re designed to be reactive, because traditionally there was only so much damage a single person could do.
We do have systems for regulating dangerous technologies. Consider automobiles. They are regulated in many ways: drivers licenses + traffic laws + automobile regulations + road design. Compare this to aircraft. Much more onerous licensing requirements, rules about flights, regulations on aircraft design and testing and a government agency overseeing it all day-to-day. Or pharmaceuticals, which have very complex rules surrounding everything around researching, developing, producing and dispensing. We have all these regulations because these things can kill you.
The general term for this kind of thing is the “precautionary principle.” When random new things can be deadly, we prohibit them unless they are specifically allowed.
So what happens when a large proportion of our jobs are as potentially damaging as a pilot’s? Or even more damaging? When one person can affect everyone through synthetic biology. Or where a corporate decision can directly affect climate. Or something in AI or robotics. Things like the precautionary principle are no longer sufficient. Because breaking the rules can have global effects.
And AI will supercharge hacking. We have created a series of non-interoperable systems that actually interact, and AI will be able to figure out how to take advantage of more of those interactions: finding new tax loopholes or finding new ways to evade financial regulations. Creating “micro-legislation” that surreptitiously benefits a particular person or group. And catastrophic risk means this is no longer tenable.
So these are our core problems: misaligned incentives leading to too effective hacking of systems where the costs of getting it wrong can be catastrophic.
Or, to put more words on it: Misaligned incentives encourage local optimization, and that’s not a good proxy for societal optimization. This encourages hacking, which now generates greater harm than at any point in the past because the amount of damage that can result from local optimization is greater than at any point in the past.
OK, let’s get back to the notion of democracy as an information system. It’s not just democracy: Any form of governance is an information system. It’s a process that turns individual beliefs and preferences into group policy decisions. And, it uses feedback mechanisms to determine how well those decisions are working, and then makes corrections accordingly.
Historically, there are many ways to do this. We can have a system where no one’s preference matters except the monarch’s or the nobles’ or the landowners’. Sometimes the stronger army gets to decide — or the people with the money.
Or we could tally up everyone’s preferences and do the thing that at least half of the people want. That’s basically the promise of democracy today, at its best. Parliamentary systems are better, but only in the margins — and it all feels kind of primitive. Lots of people write about how informationally poor elections are at aggregating individual preferences. It also results in all these misaligned incentives.
I realize that democracy serves different functions. Peaceful transition of power, minimizing harm, equality, fair decision making, better outcomes. I am taking for granted that democracy is good for all those things. I’m focusing on how we implement it.
Modern democracy uses elections to determine who represents citizens in the decision-making process. And all sorts of other ways to collect information about what people think and want, and how well policies are working. These are opinion polls, public comments to rule-making, advocating, lobbying, protesting and so on. And, in reality, it’s been hacked so badly that it does a terrible job of executing on the will of the people, creating further incentives to hack these systems.
To be fair, the democratic republic was the best form of government that mid-18th century technology could invent. Because communications and travel were hard, we needed to choose one of us to go all the way over there and pass laws in our name. It was always a rough approximation of what we wanted. And our principles, values, conceptions of fairness; our ideas about legitimacy and authority have evolved a lot since the mid-18th century. Even the notion of optimal group outcomes depended on who was considered in the group and who was out.
But democracy is not a static system, it’s an aspirational direction. One that really requires constant improvement. And our democratic systems have not evolved at the same pace that our technologies have. Blocking progress in democracy is itself a hack of democracy.
Today we have much better technology that we can use in the service of democracy. Surely there are better ways to turn individual preferences into group policies. Now that communications and travel are easy. Maybe we should assign representation by age, or profession or randomly by birthday. Maybe we can invent an AI that calculates optimal policy outcomes based on everyone’s preferences.
Whatever we do, we need systems that better align individual and group incentives, at all scales. Systems designed to be resistant to hacking. And resilient to catastrophic risks. Systems that leverage cooperation more and conflict less. And are not zero-sum.
Why can’t we have a game where everybody wins?
This has never been done before. It’s not capitalism, it’s not communism, it’s not socialism. It’s not current democracies or autocracies. It would be unlike anything we’ve ever seen.
Some of this comes down to how trust and cooperation work. When I wrote “Liars and Outliers” in 2012, I wrote about four systems for enabling trust: our innate morals, concern about our reputations, the laws we live under, and security technologies that constrain our behavior. I wrote about how the first two are more informal than the last two. And how the last two scale better, and allow for larger and more complex societies. They enable cooperation amongst strangers.

What I didn’t appreciate is how different the first and last two are. Morals and reputation are both old biological systems of trust. They’re person to person, based on human connection and cooperation. Laws — and especially security technologies — are newer systems of trust that force us to cooperate. They’re socio-technical systems. They’re more about confidence and control than they are about trust. And that allows them to scale better. Taxi driver used to be one of the country’s most dangerous professions. Uber changed that through pervasive surveillance. My Uber driver and I don’t know or trust each other, but the technology lets us both be confident that neither of us will cheat or attack each other. Both drivers and passengers compete for star rankings, which align local and global incentives.
In today’s tech-mediated world, we are replacing the rituals and behaviors of cooperation with security mechanisms that enforce compliance. And innate trust in people with forced trust in processes and institutions. That scales better, but we lose the human connection. It’s also expensive, and becoming even more so as our power grows. We need more security for these systems. And the results are much easier to hack.
But here’s the thing: Our informal human systems of trust are inherently unscalable. So maybe we have to rethink scale.
Our 18th century systems of democracy were the only things that scaled with the technology of the time. Imagine a group of friends deciding where to have dinner. One is kosher, one is a vegetarian. They would never use a winner-take-all ballot to decide where to eat. But that’s a system that scales to large groups of strangers.
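The dinner example can be made concrete with a toy sketch. All the names and data below are invented for illustration; the point is only that a winner-take-all tally ignores hard constraints that a small group would naturally respect by filtering the options first.

```python
# Toy sketch of the dinner problem (names and data are invented).
# Plurality voting counts first choices only; constraint-first selection
# keeps only the options everyone can actually eat, then picks among those.
from collections import Counter

options = ["steakhouse", "falafel", "sushi"]
# Each friend: (first-choice vote, set of options they can actually eat).
friends = {
    "ann":    ("steakhouse", {"steakhouse", "falafel"}),
    "bea":    ("steakhouse", {"steakhouse", "falafel", "sushi"}),
    "carl":   ("falafel",    {"falafel"}),            # vegetarian
    "dmitri": ("sushi",      {"sushi", "falafel"}),   # keeps kosher
}

# Winner-take-all: tally first choices and take the most common.
plurality_winner = Counter(v for v, _ in friends.values()).most_common(1)[0][0]

# Constraint-first: keep only options everyone can eat.
feasible = [o for o in options if all(o in ok for _, ok in friends.values())]

# plurality_winner is "steakhouse" -- which the vegetarian cannot eat.
# feasible is ["falafel"] -- the only option acceptable to everyone.
```

The plurality winner violates Carl's hard constraint, while filtering by constraints first finds the one option the whole group can live with. That is roughly what small groups do informally, and what winner-take-all ballots cannot.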
Scale matters more broadly in governance as well. We have global systems of political and economic competition. On the other end of the scale, the most common form of governance on the planet is socialism. It’s how families function: people work according to their abilities, and resources are distributed according to their needs.
I think we need governance that is both very large and very small. Our catastrophic technological risks are planetary-scale: climate change, AI, internet, bio-tech. And we have all the local problems inherent in human societies. We have very few problems anymore that are the size of France or Virginia. Some systems of governance work well on a local level but don’t scale to larger groups. But now that we have more technology, we can make other systems of democracy scale.
This runs headlong into historical norms about sovereignty. But that’s already becoming increasingly irrelevant. The modern concept of a nation arose around the same time as the modern concept of democracy. But constituent boundaries are now larger and more fluid, and depend a lot on context. It makes no sense that the decisions about the “drug war” — or climate migration — are delineated by nation. The issues are much larger than that. Right now there is no governance body with the right footprint to regulate Internet platforms like Facebook. Which has more users worldwide than Christianity.
We also need to rethink growth. Growth only equates to progress when the resources necessary to grow are cheap and abundant. Growth is often extractive. And at the expense of something else. Growth is how we fuel our zero-sum systems. If the pie gets bigger, it’s OK that we waste some of the pie in order for it to grow. That doesn’t make sense when resources are scarce and expensive. Growing the pie can end up costing more than the increase in pie size. Sustainability makes more sense. And a metric more suited to the environment we’re in right now.
Finally, agility is also important. Back to systems theory, governance is an attempt to control complex systems with complicated systems. This gets harder as the systems get larger and more complex. And as catastrophic risk raises the costs of getting it wrong.
In recent decades, we have replaced the richness of human interaction with economic models. Models that turn everything into markets. Market fundamentalism scaled better, but the social cost was enormous. A lot of how we think and act isn’t captured by those models. And those complex models turn out to be very hackable. Increasingly so at larger scales.
Lots of people have written about the speed of technology versus the speed of policy. To relate it to this talk: Our human systems of governance need to be compatible with the technologies they’re supposed to govern. If they’re not, eventually the technological systems will replace the governance systems. Think of Twitter as the de facto arbiter of free speech.
This means that governance needs to be agile. And able to quickly react to changing circumstances. Imagine a court saying to Peter Thiel: “Sorry. That’s not how Roth IRAs are supposed to work. Now give us our tax on that $5B.” This is also essential in a technological world: one that’s moving at unprecedented speeds, where getting it wrong can be catastrophic and one that’s resource constrained. Agile patching is how we maintain security in the face of constant hacking — and also red teaming. In this context, both journalism and civil society are important checks on government.
I want to quickly mention two ideas for democracy, one old and one new. I’m not advocating for either. I’m just trying to open you up to new possibilities. The first is sortition. These are citizen assemblies brought together to study an issue and reach a policy decision. They were popular in ancient Greece and Renaissance Italy, and are increasingly being used today in Europe. The only vestige of this in the U.S. is the jury. But you can also think of trustees of an organization. The second idea is liquid democracy. This is a system where everybody has a proxy that they can transfer to someone else to vote on their behalf. Representatives hold those proxies, and their vote strength is proportional to the number of proxies they have. We have something like this in corporate proxy governance.
Both of these are algorithms for converting individual beliefs and preferences into policy decisions. Both of these are made easier through 21st century technologies. They’re both democracies, but in new and different ways. And while they’re not immune to hacking, we can design them from the beginning with security in mind.
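The liquid-democracy idea really is just an algorithm, and a short one. Here is a minimal sketch of the vote-tallying step, with data layout and names of my own invention (no real liquid-democracy protocol is this simple — in particular, real designs need much more careful handling of delegation cycles, privacy and revocation):

```python
# Minimal liquid-democracy tally (illustrative; all names and the data
# layout are assumptions, not a real protocol). Each voter either
# delegates their proxy to another voter or casts a direct vote. A direct
# voter's ballot counts with weight equal to the number of people
# (including themselves) whose delegation chain ends at them.

def tally(delegations, direct_votes):
    """delegations: voter -> delegate; direct_votes: voter -> choice."""
    totals = {}
    for voter in set(delegations) | set(direct_votes):
        # Follow the delegation chain until we reach a direct voter.
        seen = set()
        current = voter
        while current in delegations and current not in direct_votes:
            if current in seen:        # delegation cycle: ballot is lost
                current = None
                break
            seen.add(current)
            current = delegations[current]
        if current in direct_votes:
            choice = direct_votes[current]
            totals[choice] = totals.get(choice, 0) + 1
    return totals

# Alice and Bob delegate to Carol, so Carol's "yes" counts with weight 3.
result = tally({"alice": "carol", "bob": "carol"},
               {"carol": "yes", "dave": "no"})
# result == {"yes": 3, "no": 1}
```

Even this toy shows where the security questions live: the delegation graph itself becomes an attack surface, which is why the point about designing these systems with security in mind from the beginning matters.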

This points to technology as a key component of any solution. We know how to use technology to build systems of trust. Both the informal biological kind and the formal compliance kind. We know how to use technology to help align incentives, and to defend against hacking.
We talked about AI hacking; AI can also be used to defend against hacking, finding vulnerabilities in computer code, finding tax loopholes before they become law, and uncovering attempts at surreptitious micro-legislation.
Think back to democracy as an information system. Can AI techniques be used to uncover our political preferences and turn them into policy outcomes, get feedback and then iterate? This would be more accurate than polling. And maybe even elections. Can an AI act as our representative? Could it do a better job than a human at voting the preferences of its constituents?
Can we have an AI in our pocket that votes on our behalf, thousands of times a day, based on the preferences it infers we have? Or maybe based on the preferences it infers we would have if we read up on the issues and weren’t swayed by misinformation? It’s just another algorithm for converting individual preferences into policy decisions. And it certainly solves the problem of people not paying attention to politics.
But slow down: This is rapidly devolving into technological solutionism. And we know that doesn’t work.
A general question to ask here is when do we allow algorithms to make decisions for us? Sometimes it’s easy. I’m happy to let my thermostat automatically turn my heat on and off, or to let an AI drive a car or optimize the traffic lights in a city. I’m less sure about an AI that sets tax rates, or corporate regulations, or foreign policy. Or an AI that tells us that it can’t explain why, but strongly urges us to declare war — right now. Each of these is harder because they are more complex systems: non-local, multi-agent, long-duration and so on. I also want any AI that works on my behalf to be under my control. And not controlled by a large corporate monopoly that allows me to use it.
And learned helplessness is an important consideration. We’re probably OK with no longer needing to know how to drive a car. But we don’t want a system that results in us forgetting how to run a democracy. Outcomes matter here, but so do mechanisms. Any AI system should engage people in the process of democracy, not replace them.
So while an AI that does all the hard work of governance might generate better policy outcomes, there is social value in a human-centric political system, even if it is less efficient. And more technologically efficient preference collection might not be better, even if it is more accurate.
Procedure and substance need to work together. There is a role for AI in decision making: moderating discussions, highlighting agreements and disagreements, helping people reach consensus. But it is an independent good that we humans remain engaged in — and in charge of — the process of governance.
And that value is critical to making democracy function. Democratic knowledge isn’t something that’s out there to be gathered: It’s dynamic; it gets produced through the social processes of democracy. The term of art is “preference formation.” We’re not just passively aggregating preferences, we create them through learning, deliberation, negotiation and adaptation. Some of these processes are cooperative and some of these are competitive. Both are important. And both are needed to fuel the information system that is democracy.
We’re never going to remove conflict and competition from our political and economic systems. Human disagreement isn’t just a surface feature; it goes all the way down. We have fundamentally different aspirations. We want different ways of life. I talked about optimal policies. Even that notion is contested: optimal for whom, with respect to what, over what time frame? Disagreement is fundamental to democracy. We reach different policy conclusions based on the same information. And it’s the process of making all of this work that makes democracy possible.
So we actually can’t have a game where everybody wins. Our goal has to be to accommodate plurality, to harness conflict and disagreement, and not to eliminate it. While, at the same time, moving from a player-versus-player game to a player-versus-environment game.
There’s a lot missing from this talk. Like what these new political and economic governance systems should look like. Democracy and capitalism are intertwined in complex ways, and I don’t think we can recreate one without also recreating the other. My comments about agility lead to questions about authority and how that interplays with everything else. And how agility can be hacked as well. We haven’t even talked about tribalism in its many forms. In order for democracy to function, people need to care about the welfare of strangers who are not like them. We haven’t talked about rights or responsibilities. What is off limits to democracy is a huge discussion. And Buterin’s trilemma also matters here: that you can’t simultaneously build systems that are secure, distributed, and scalable.
I also haven’t given a moment’s thought to how to get from here to there. Everything I’ve talked about — incentives, hacking, power, complexity — also applies to any transition systems. But I think we need to have unconstrained discussions about what we’re aiming for. If for no other reason than to question our assumptions. And to imagine the possibilities. And while a lot of the AI parts are still science fiction, they’re not far-off science fiction.
I know we can’t clear the board and build a new governance structure from scratch. But maybe we can come up with ideas that we can bring back to reality.
To summarize, the systems of governance we designed at the start of the Industrial Age are ill-suited to the Information Age. Their incentive structures are all wrong. They’re insecure and they’re wasteful. They don’t generate optimal outcomes. At the same time we’re facing catastrophic risks to society due to powerful technologies. And a vastly constrained resource environment. We need to rethink our systems of governance; more cooperation and less competition, and at scales that are suited to today’s problems and today’s technologies. With security and precautions built in. What comes after democracy might very well be more democracy, but it will look very different.
This feels like a challenge worthy of our security expertise.
Bruce Schneier is the author of “A Hacker’s Mind: How the Powerful Bend Society’s Rules, and How to Bend Them Back,” a Lecturer in Public Policy at the Harvard Kennedy School, and Chief of Security Architecture at Inrupt, Inc.