{"id":53696,"date":"2024-06-18T13:46:21","date_gmt":"2024-06-18T13:46:21","guid":{"rendered":"https:\/\/peymantaeidi.net\/stem-cell\/?p=53696"},"modified":"2024-06-18T14:36:40","modified_gmt":"2024-06-18T14:36:40","slug":"rethinking-democracy-for-the-age-of-ai","status":"publish","type":"post","link":"https:\/\/peymantaeidi.net\/stem-cell\/2024\/06\/18\/rethinking-democracy-for-the-age-of-ai\/","title":{"rendered":"Rethinking Democracy for the Age of AI"},"content":{"rendered":"<div class=\"entry-content clearfix\">\n<p>There is a lot written about technology\u2019s threats to democracy. Polarization. Artificial intelligence. The concentration of wealth and power. I have a more general story: The political and economic systems of governance that were created in the mid-18th century are poorly suited for the 21st century. They don\u2019t align incentives well. And they are being hacked too effectively.<\/p>\n<p>At the same time, the cost of these hacked systems has never been greater, across all human history. We have become too powerful as a species. And our systems cannot keep up with fast-changing disruptive technologies.<\/p>\n<p>We need to create new systems of governance that align incentives and are resilient against hacking \u2026 at every scale. From the individual all the way up to the whole of society.<\/p>\n<p>For this, I need you to drop your 20th century either\/or thinking. This is not about capitalism versus communism. It\u2019s not about democracy versus autocracy. It\u2019s not even about humans versus AI. It\u2019s something new, something we don\u2019t have a name for yet. 
And it\u2019s \u201cblue sky\u201d thinking, not even remotely considering what\u2019s feasible today.<\/p>\n<p>Throughout this talk, I want you to think of both democracy and capitalism as information systems. Socio-technical information systems. Protocols for making group decisions. Ones where different players have different incentives. These systems are vulnerable to hacking and need to be secured against those hacks.<\/p>\n<p>We security technologists have a lot of expertise in both secure system design and hacking. 
That\u2019s why we have something to add to this discussion.<\/p>\n<p>And finally, this is a work in progress. I\u2019m trying to create a framework for viewing governance. So think of this more as a foundation for discussion, rather than a road map to a solution. And I think by writing: what you\u2019re going to hear is the current draft of my writing\u2014and my thinking. So everything is subject to change without notice.<\/p>\n<p>OK, so let\u2019s go.<\/p>\n<p>We all know about misinformation and how it affects democracy. And how propagandists have used it to advance their agendas. This is an ancient problem, amplified by information technologies. Social media platforms that prioritize engagement. \u201cFilter bubble\u201d segmentation. And technologies for honing persuasive messages.<\/p>\n<p>The problem ultimately stems from the way democracies use information to make policy decisions. Democracy is an information system that leverages collective intelligence to solve political problems. And then to collect feedback as to how well those solutions are working. This is different from autocracies that don\u2019t leverage collective intelligence for political decision making. Or have reliable mechanisms for collecting feedback from their populations.<\/p>\n<p>Those systems of democracy work well, but have no guardrails when fringe ideas become weaponized. That\u2019s what misinformation targets. The historical solution for this was supposed to be representation. This is currently failing in the US, partly because of gerrymandering, safe seats, only two parties, money in politics and our primary system. But the problem is more general.<\/p>\n<p>James Madison wrote about this in 1787, where he made two points. One, that representatives serve to filter popular opinions, limiting extremism. And two, that geographical dispersal makes it hard for those with extreme views to participate. It\u2019s hard to organize. To be fair, these limitations are both good and bad. 
In any case, current technology\u2014social media\u2014breaks them both.<\/p>\n<p>So this is a question: What does representation look like in a world without either filtering or geographical dispersal? Or, how do we avoid polluting 21st century democracy with prejudice, misinformation and bias? Things that impair both the problem solving and feedback mechanisms.<\/p>\n<p>That\u2019s the real issue. It\u2019s not about misinformation, it\u2019s about the incentive structure that makes misinformation a viable strategy.<\/p>\n<p>This is problem No. 1: Our systems have misaligned incentives. What\u2019s best for the small group often doesn\u2019t match what\u2019s best for the whole. And this is true across all sorts of individuals and group sizes.<\/p>\n<p>Now, historically, we have used misalignment to our advantage. Our current systems of governance leverage conflict to make decisions. The basic idea is that coordination is inefficient and expensive. Individual self-interest leads to local optimizations, which results in optimal group decisions.<\/p>\n<p>But this is also inefficient and expensive. The U.S. spent $14.5 billion on the 2020 presidential, senate and congressional elections. I don\u2019t even know how to calculate the cost in attention. That sounds like a lot of money, but step back and think about how the system works. The economic value of winning those elections is so great because that\u2019s how you impose your own incentive structure on the whole.<\/p>\n<p>More generally, the cost of our market economy is enormous. For example, $780 billion is spent world-wide annually on advertising. Many more billions are wasted on ventures that fail. And that\u2019s just a fraction of the total resources lost in a competitive market environment. 
And there are other collateral damages, which are spread non-uniformly across people.<\/p>\n<p>We have accepted these costs of capitalism\u2014and democracy\u2014because the inefficiency of central planning was considered to be worse. That might not be true anymore. The costs of conflict have increased. And the costs of coordination have decreased. Corporations demonstrate that large centrally planned economic units can compete in today\u2019s society. Think of Walmart or Amazon. If you compare GDP to market cap, Apple would be the eighth largest country on the planet. Microsoft would be the tenth.<\/p>\n<p>Another effect of these conflict-based systems is that they foster a scarcity mindset. And we have taken this to an extreme. We now think in terms of zero-sum politics. My party wins, your party loses. And winning next time can be more important than governing this time. We think in terms of zero-sum economics. My product\u2019s success depends on my competitors\u2019 failures. We think zero-sum internationally. Arms races and trade wars.<\/p>\n<p>Finally, conflict as a problem-solving tool might not give us good enough answers anymore. The underlying assumption is that if everyone pursues their own self-interest, the result will approach everyone\u2019s best interest. That only works for simple problems and requires systemic oppression. We have lots of problems\u2014complex, wicked, global problems\u2014that don\u2019t work that way. We have interacting groups of problems that don\u2019t work that way. We have problems that require more efficient ways of finding optimal solutions.<\/p>\n<p>Note that there are multiple effects of these conflict-based systems. We have bad actors deliberately breaking the rules. And we have selfish actors taking advantage of insufficient rules.<\/p>\n<p>The latter is problem No. 2: What I refer to as \u201chacking\u201d in my latest book: \u201cA Hacker\u2019s Mind.\u201d Democracy is a socio-technical system. 
And all socio-technical systems can be hacked. By this I mean that the rules are either incomplete or inconsistent or outdated\u2014they have loopholes. And these can be used to subvert the rules. This is Peter Thiel subverting the Roth IRA to avoid paying taxes on $5 billion in income. This is gerrymandering, the filibuster, and must-pass legislation. Or tax loopholes, financial loopholes, regulatory loopholes.<\/p>\n<p>In today\u2019s society, the rich and powerful are just too good at hacking. And it is becoming increasingly impossible to patch our hacked systems. Because the rich use their power to ensure that the vulnerabilities don\u2019t get patched.<\/p>\n<p>This is bad for society, but it\u2019s basically the optimal strategy in our competitive governance systems. Their zero-sum nature makes hacking an effective, if parasitic, strategy. Hacking isn\u2019t a new problem, but today hacking scales better\u2014and is overwhelming the security systems in place to keep hacking in check. Think about gun regulations, climate change, opioids. And complex systems make this worse. These are all non-linear, tightly coupled, unrepeatable, path-dependent, adaptive, co-evolving systems.<\/p>\n<p>Now, add into this mix the risks that arise from new and dangerous technologies such as the internet or AI or synthetic biology. Or molecular nanotechnology, or nuclear weapons. Here, misaligned incentives and hacking can have catastrophic consequences for society.<\/p>\n<p>This is problem No. 3: Our systems of governance are not suited to our power level. They tend to be rights based, not permissions based. They\u2019re designed to be reactive, because traditionally there was only so much damage a single person could do.<\/p>\n<p>We do have systems for regulating dangerous technologies. Consider automobiles. They are regulated in many ways: driver\u2019s licenses + traffic laws + automobile regulations + road design. Compare this to aircraft. 
Much more onerous licensing requirements, rules about flights, regulations on aircraft design and testing and a government agency overseeing it all day-to-day. Or pharmaceuticals, which have very complex rules surrounding researching, developing, producing and dispensing. We have all these regulations because this stuff can kill you.<\/p>\n<p>The general term for this kind of thing is the \u201cprecautionary principle.\u201d When random new things can be deadly, we prohibit them unless they are specifically allowed.<\/p>\n<p>So what happens when a significant percentage of our jobs are as potentially damaging as a pilot\u2019s? Or even more damaging? When one person can affect everyone through synthetic biology. Or where a corporate decision can directly affect climate. Or something in AI or robotics. Things like the precautionary principle are no longer sufficient. Because breaking the rules can have global effects.<\/p>\n<p>And AI will supercharge hacking. We have created a series of non-interoperable systems that actually interact and AI will be able to figure out how to take advantage of more of those interactions: finding new tax loopholes or finding new ways to evade financial regulations. Creating \u201cmicro-legislation\u201d that surreptitiously benefits a particular person or group. And catastrophic risk means this is no longer tenable.<\/p>\n<p>So these are our core problems: misaligned incentives leading to too effective hacking of systems where the costs of getting it wrong can be catastrophic.<\/p>\n<p>Or, to put more words on it: Misaligned incentives encourage local optimization, and that\u2019s not a good proxy for societal optimization. This encourages hacking, which now generates greater harm than at any point in the past because the amount of damage that can result from local optimization is greater than at any point in the past.<\/p>\n<p>OK, let\u2019s get back to the notion of democracy as an information system. 
It\u2019s not just democracy: Any form of governance is an information system. It\u2019s a process that turns individual beliefs and preferences into group policy decisions. And, it uses feedback mechanisms to determine how well those decisions are working and then makes corrections accordingly.<\/p>\n<p>Historically, there are many ways to do this. We can have a system where no one\u2019s preference matters except the monarch\u2019s or the nobles\u2019 or the landowners\u2019. Sometimes the stronger army gets to decide\u2014or the people with the money.<\/p>\n<p>Or we could tally up everyone\u2019s preferences and do the thing that at least half of the people want. That\u2019s basically the promise of democracy today, at its ideal. Parliamentary systems are better, but only at the margins\u2014and it all feels kind of primitive. Lots of people write about how informationally poor elections are at aggregating individual preferences. It also results in all these misaligned incentives.<\/p>\n<p>I realize that democracy serves different functions. Peaceful transition of power, minimizing harm, equality, fair decision making, better outcomes. I am taking for granted that democracy is good for all those things. I\u2019m focusing on how we implement it.<\/p>\n<p>Modern democracy uses elections to determine who represents citizens in the decision-making process. And all sorts of other ways to collect information about what people think and want, and how well policies are working. These are opinion polls, public comments to rule-making, advocating, lobbying, protesting and so on. And, in reality, it\u2019s been hacked so badly that it does a terrible job of executing on the will of the people, creating further incentives to hack these systems.<\/p>\n<p>To be fair, the democratic republic was the best form of government that mid-18th century technology could invent. 
Because communications and travel were hard, we needed to choose one of us to go all the way over there and pass laws in our name. It was always a coarse approximation of what we wanted. And our principles, values, conceptions of fairness; our ideas about legitimacy and authority have evolved a lot since the mid-18th century. Even the notion of optimal group outcomes depended on who was considered in the group and who was out.<\/p>\n<p>But democracy is not a static system, it\u2019s an aspirational direction. One that really requires constant improvement. And our democratic systems have not evolved at the same pace that our technologies have. Blocking progress in democracy is itself a hack of democracy.<\/p>\n<p>Today we have much better technology that we can use in the service of democracy. Surely there are better ways to turn individual preferences into group policies. Now that communications and travel are easy. Maybe we should assign representation by age, or profession, or randomly by birthday. Maybe we can invent an AI that calculates optimal policy outcomes based on everyone\u2019s preferences.<\/p>\n<p>Whatever we do, we need systems that better align individual and group incentives, at all scales. Systems designed to be resistant to hacking. And resilient to catastrophic risks. Systems that leverage cooperation more and conflict less. And are not zero-sum.<\/p>\n<p>Why can\u2019t we have a game where everybody wins?<\/p>\n<p>This has never been done before. It\u2019s not capitalism, it\u2019s not communism, it\u2019s not socialism. It\u2019s not current democracies or autocracies. It would be unlike anything we\u2019ve ever seen.<\/p>\n<p>Some of this comes down to how trust and cooperation work. When I wrote \u201cLiars and Outliers\u201d in 2012, I wrote about four systems for enabling trust: our innate morals, concern about our reputations, the laws we live under and security technologies that constrain our behavior. 
I wrote about how the first two are more informal than the last two. And how the last two scale better, and allow for larger and more complex societies. They enable cooperation amongst strangers.<\/p>\n<p>What I didn\u2019t appreciate is how different the first and last two are. Morals and reputation are both old biological systems of trust. They\u2019re person to person, based on human connection and cooperation. Laws\u2014and especially security technologies\u2014are newer systems of trust that force us to cooperate. They\u2019re socio-technical systems. They\u2019re more about confidence and control than they are about trust. And that allows them to scale better. Taxi driver used to be one of the country\u2019s most dangerous professions. Uber changed that through pervasive surveillance. My Uber driver and I don\u2019t know or trust each other, but the technology lets us both be confident that neither of us will cheat or attack each other. Both drivers and passengers compete for star rankings, which align local and global incentives.<\/p>\n<p>In today\u2019s tech-mediated world, we are replacing the rituals and behaviors of cooperation with security mechanisms that enforce compliance. And innate trust in people with compelled trust in processes and institutions. That scales better, but we lose the human connection. It\u2019s also expensive, and becoming even more so as our power grows. We need more security for these systems. And the results are much easier to hack.<\/p>\n<p>But here\u2019s the thing: Our informal human systems of trust are inherently unscalable. So maybe we have to rethink scale.<\/p>\n<p>Our 18th century systems of democracy were the only things that scaled with the technology of the time. Imagine a group of friends deciding where to have dinner. One is kosher, one is a vegetarian. They would never use a winner-take-all ballot to decide where to eat. 
But that\u2019s a system that scales to large groups of strangers.<\/p>\n<p>Scale matters more broadly in governance as well. We have global systems of political and economic competition. On the other end of the scale, the most common form of governance on the planet is socialism. It\u2019s how families function: people work according to their abilities, and resources are distributed according to their needs.<\/p>\n<p>I think we need governance that is both very large and very small. Our catastrophic technological risks are planetary-scale: climate change, AI, internet, bio-tech. And we have all the local problems inherent in human societies. We have very few problems anymore that are the size of France or Virginia. Some systems of governance work well on a local level but don\u2019t scale to larger groups. But now that we have more technology, we can make other systems of democracy scale.<\/p>\n<p>This runs headlong into historical norms about sovereignty. But that\u2019s already becoming increasingly irrelevant. The modern concept of a nation arose around the same time as the modern concept of democracy. But constituent boundaries are now larger and more fluid, and depend a lot on context. It makes no sense that the decisions about the \u201cdrug war\u201d\u2014or climate migration\u2014are delineated by nation. The issues are much larger than that. Right now there is no governance body with the right footprint to regulate Internet platforms like Facebook. Which has more users world-wide than Christianity.<\/p>\n<p>We also need to rethink growth. Growth only equates to progress when the resources necessary to grow are cheap and abundant. Growth is often extractive. And at the expense of something else. Growth is how we fuel our zero-sum systems. If the pie gets bigger, it\u2019s OK that we waste some of the pie in order for it to grow. That doesn\u2019t make sense when resources are scarce and expensive. 
Growing the pie can end up costing more than the increase in pie size. Sustainability makes more sense. It\u2019s a metric better suited to the environment we\u2019re in right now.<\/p>\n<p>Finally, agility is also important. Back to systems theory, governance is an attempt to control complex systems with complicated systems. This gets harder as the systems get larger and more complex. And as catastrophic risk raises the costs of getting it wrong.<\/p>\n<p>In recent decades, we have replaced the richness of human interaction with economic models. Models that turn everything into markets. Market fundamentalism scaled better, but the social cost was enormous. A lot of how we think and act isn\u2019t captured by those models. And those complex models turn out to be very hackable. Increasingly so at larger scales.<\/p>\n<p>Lots of people have written about the speed of technology versus the speed of policy. To relate it to this talk: Our human systems of governance need to be compatible with the technologies they\u2019re supposed to govern. If they\u2019re not, eventually the technological systems will replace the governance systems. Think of Twitter as the de facto arbiter of free speech.<\/p>\n<p>This means that governance needs to be agile. And able to quickly react to changing circumstances. Imagine a court saying to Peter Thiel: \u201cSorry. That\u2019s not how Roth IRAs are supposed to work. Now give us our tax on that $5B.\u201d This is also essential in a technological world: one that is moving at unprecedented speeds, where getting it wrong can be catastrophic and one that is resource constrained. Agile patching is how we maintain security in the face of constant hacking\u2014and also red teaming. In this context, both journalism and civil society are important checks on government.<\/p>\n<p>I want to quickly mention two ideas for democracy, one old and one new. I\u2019m not advocating for either. I\u2019m just trying to open you up to new possibilities. 
The first is sortition. These are citizen assemblies brought together to study an issue and reach a policy decision. They were popular in ancient Greece and Renaissance Italy, and are increasingly being used today in Europe. The only vestige of this in the U.S. is the jury. But you can also think of trustees of an organization. The second idea is liquid democracy. This is a system where everybody has a proxy that they can transfer to someone else to vote on their behalf. Representatives hold those proxies, and their vote strength is proportional to the number of proxies they have. We have something like this in corporate proxy governance.<\/p>\n<p>Both of these are algorithms for converting individual beliefs and preferences into policy decisions. Both of these are made easier through 21st century technologies. They are both democracies, but in new and different ways. And while they\u2019re not immune to hacking, we can design them from the beginning with security in mind.<\/p>\n<p>This points to technology as a key component of any solution. We know how to use technology to build systems of trust. Both the informal biological kind and the formal compliance kind. We know how to use technology to help align incentives, and to defend against hacking.<\/p>\n<p>We talked about AI hacking; AI can also be used to defend against hacking, finding vulnerabilities in computer code, finding tax loopholes before they become law and uncovering attempts at surreptitious micro-legislation.<\/p>\n<p>Think back to democracy as an information system. Can AI techniques be used to uncover our political preferences and turn them into policy outcomes, get feedback and then iterate? This would be more accurate than polling. And maybe even elections. Can an AI act as our representative? 
Could it do a better job than a human at voting the preferences of its constituents?<\/p>\n<p>Can we have an AI in our pocket that votes on our behalf, thousands of times a day, based on the preferences it infers we have? Or maybe based on the preferences it infers we would have if we read up on the issues and weren\u2019t swayed by misinformation. It\u2019s just another algorithm for converting individual preferences into policy decisions. And it certainly solves the problem of people not paying attention to politics.<\/p>\n<p>But slow down: This is rapidly devolving into technological solutionism. And we know that doesn\u2019t work.<\/p>\n<p>A general question to ask here is when do we allow algorithms to make decisions for us? Sometimes it\u2019s easy. I\u2019m happy to let my thermostat automatically turn my heat on and off or to let an AI drive a car or optimize the traffic lights in a city. I\u2019m less sure about an AI that sets tax rates, or corporate regulations or foreign policy. Or an AI that tells us that it can\u2019t explain why, but strongly urges us to declare war\u2014right now. Each of these is harder because they are more complex systems: non-local, multi-agent, long-duration and so on. I also want any AI that works on my behalf to be under my control. And not controlled by a large corporate monopoly that allows me to use it.<\/p>\n<p>And learned helplessness is an important consideration. We\u2019re probably OK with no longer needing to know how to drive a car. But we don\u2019t want a system that results in us forgetting how to run a democracy. Outcomes matter here, but so do mechanisms. Any AI system should engage individuals in the process of democracy, not replace them.<\/p>\n<p>So while an AI that does all the hard work of governance might generate better policy outcomes, there is social value in a human-centric political system, even if it is less efficient. 
And more technologically efficient preference collection might not be better, even if it is more accurate.<\/p>\n<p>Procedure and substance need to work together. There is a role for AI in decision making: moderating discussions, highlighting agreements and disagreements, and helping people reach consensus. But it is an independent good that we humans remain engaged in\u2014and in charge of\u2014the process of governance.<\/p>\n<p>And that value is critical to making democracy function. Democratic knowledge isn\u2019t something that\u2019s out there to be gathered: It\u2019s dynamic; it gets produced through the social processes of democracy. The term of art is \u201cpreference formation.\u201d We\u2019re not just passively aggregating preferences, we create them through learning, deliberation, negotiation and adaptation. Some of these processes are cooperative and some of these are competitive. Both are important. And both are needed to fuel the information system that is democracy.<\/p>\n<p>We\u2019re never going to remove conflict and competition from our political and economic systems. Human disagreement isn\u2019t just a surface feature; it goes all the way down. We have fundamentally different aspirations. We want different ways of life. I talked about optimal policies. Even that notion is contested: optimal for whom, with respect to what, over what time frame? Disagreement is fundamental to democracy. We reach different policy conclusions based on the same information. And it\u2019s the process of making all of this work that makes democracy possible.<\/p>\n<p>So we actually can\u2019t have a game where everybody wins. Our goal has to be to accommodate plurality, to harness conflict and disagreement, and not to eliminate it. While, at the same time, moving from a player-versus-player game to a player-versus-environment game.<\/p>\n<p>There\u2019s a lot missing from this talk. Like what these new political and economic governance systems should look like. 
Democracy and capitalism are intertwined in complex ways, and I don\u2019t think we can recreate one without also recreating the other. My comments about agility lead to questions about authority and how that interplays with everything else. And how agility can be hacked as well. We haven\u2019t even talked about tribalism in its many forms. In order for democracy to function, people need to care about the welfare of strangers who are not like them. We haven\u2019t talked about rights or responsibilities. What is off limits to democracy is a huge discussion. And Buterin\u2019s trilemma also matters here: that you can\u2019t simultaneously build systems that are secure, distributed, and scalable.<\/p>\n<p>I also haven\u2019t given a moment\u2019s thought to how to get from here to there. Everything I\u2019ve talked about\u2014incentives, hacking, power, complexity\u2014also applies to any transition systems. But I think we need to have unconstrained discussions about what we\u2019re aiming for. If for no other reason than to question our assumptions. And to imagine the possibilities. And while a lot of the AI parts are still science fiction, they\u2019re not far-off science fiction.<\/p>\n<p>I know we can\u2019t clear the board and build a new governance structure from scratch. But maybe we can come up with ideas that we can bring back to reality.<\/p>\n<p>To summarize, the systems of governance we designed at the start of the Industrial Age are ill-suited to the Information Age. Their incentive structures are all wrong. They\u2019re insecure and they\u2019re wasteful. They don\u2019t generate optimal outcomes. At the same time we\u2019re facing catastrophic risks to society due to powerful technologies. And a vastly constrained resource environment. We need to rethink our systems of governance; more cooperation and less competition and at scales that are suited to today\u2019s problems and today\u2019s technologies. With security and precautions built in. 
What comes after democracy might very well be more democracy, but it will look very different.<\/p>\n<p>This feels like a challenge worthy of our security expertise.<\/p>\n<p><em>This text is the transcript from a <a href=\"https:\/\/www.rsaconference.com\/library\/presentation\/usa\/2023\/Cybersecurity%20Thinking%20to%20Reinvent%20Democracy\">keynote speech<\/a> delivered during the RSA Conference in San Francisco on April 25, 2023. It was previously published in <a href=\"https:\/\/cyberscoop.com\/rethinking-democracy-ai\/\">Cyberscoop<\/a>. I thought I posted it to my blog and Crypto-Gram last year, but it seems that I didn\u2019t.<\/em><\/p>\n<p class=\"syndicated-attribution\">*** This is a Security Bloggers Network syndicated blog from <a href=\"https:\/\/www.schneier.com\">Schneier on Security<\/a> authored by <a href=\"https:\/\/securityboulevard.com\/author\/0\/\" title=\"Read other posts by Bruce Schneier\">Bruce Schneier<\/a>. Read the original post at: <a href=\"https:\/\/www.schneier.com\/blog\/archives\/2024\/06\/rethinking-democracy-for-the-age-of-ai.html\">https:\/\/www.schneier.com\/blog\/archives\/2024\/06\/rethinking-democracy-for-the-age-of-ai.html<\/a> <\/p>\n<\/div>\n<div class=\"entry-content clearfix\">\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>There is a lot written about technology\u2019s threats to democracy. Polarization. Artificial intelligence. 
The concentration<\/p>\n","protected":false},"author":1,"featured_media":53698,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/peymantaeidi.net\/stem-cell\/wp-json\/wp\/v2\/posts\/53696"}],"collection":[{"href":"https:\/\/peymantaeidi.net\/stem-cell\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/peymantaeidi.net\/stem-cell\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/peymantaeidi.net\/stem-cell\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/peymantaeidi.net\/stem-cell\/wp-json\/wp\/v2\/comments?post=53696"}],"version-history":[{"count":3,"href":"https:\/\/peymantaeidi.net\/stem-cell\/wp-json\/wp\/v2\/posts\/53696\/revisions"}],"predecessor-version":[{"id":53701,"href":"https:\/\/peymantaeidi.net\/stem-cell\/wp-json\/wp\/v2\/posts\/53696\/revisions\/53701"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/peymantaeidi.net\/stem-cell\/wp-json\/wp\/v2\/media\/53698"}],"wp:attachment":[{"href":"https:\/\/peymantaeidi.net\/stem-cell\/wp-json\/wp\/v2\/media?parent=53696"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/peymantaeidi.net\/stem-cell\/wp-json\/wp\/v2\/categories?post=53696"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/peymantaeidi.net\/stem-cell\/wp-json\/wp\/v2\/tags?post=53696"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}