
Rewiring Democracy: Bruce Schneier and Nathan E. Sanders on AI, Power, and the Future of Governance

2025-11-26

Author(s): Scott Douglas Jacobsen

Publication (Outlet/Website): The Good Men Project

Publication Date (yyyy/mm/dd): 2025/09/25

Bruce Schneier is an internationally renowned security technologist and the New York Times bestselling author of fourteen books, including Data and Goliath and A Hacker’s Mind. He is a Lecturer at the Harvard Kennedy School, a board member of EFF, and Chief of Security Architecture at Inrupt, Inc. Find him on X (@schneierblog, 142.2k followers) and at his blog, schneier.com (250k readers).

Nathan E. Sanders is a data scientist focused on making policymaking more participatory. His research spans machine learning, astrophysics, public health, environmental justice, and more. He has held fellowships with the Massachusetts legislature and the Berkman Klein Center at Harvard University. You can find his writing on AI and democracy in publications like The New York Times and The Atlantic and at his website, nsanders.me.

Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship is Bruce Schneier and Nathan E. Sanders’ field guide to governing in the algorithmic age. Drawing on real projects and policy debates, it maps how AI is already reshaping lawmaking, regulation, courts, and civic participation—and shows how to bend the tech toward equity, transparency, and public accountability. Rather than dystopian panic or hype, the authors offer a pragmatic roadmap: reform and regulate AI, resist harmful deployments, responsibly use AI to improve services, and renovate democratic institutions so power is distributed, not concentrated. Published globally by MIT Press on October 21, 2025.

Scott Douglas Jacobsen: You argue AI magnifies power. Which concrete policy levers best ensure diffusion?

Nathan E. Sanders: AI is fundamentally a power-magnifying technology. It takes the command of one person and executes on it with greater speed, scale, scope, and sophistication than any one human could wield on their own. Both powerful and, relatively speaking, powerless people can benefit from this.

But recognize that the powerful have many advantages to help them get the most magnification. Diffusion is harder. Here in Massachusetts, many of my colleagues are experimenting with using AI to help groups reach consensus, for example, to agree on policy proposals. In this example, people get to use AI to help express themselves. In the United Arab Emirates, the ruler of Dubai has promised to use AI to create a “comprehensive legislative plan” spanning local and federal law and updated more frequently than traditional lawmaking. In this example, a powerful person gets to use AI to dictate the rules of a society. No matter how good the next version of ChatGPT may be, I can’t use it to do that.

Jacobsen: What happens if power concentrates across citizens and institutions?

Bruce Schneier: We already know the answer, because we’ve seen it happen: in big tech in general and with social media in particular. The powerful get more powerful, and then use that power to enact legislative changes that further protect that power.

The concentration of wealth and power is bigger than AI, of course. It’s bigger than technology. But it’s exacerbated by technologies like AI. Our task as a democracy is to ensure that technologies broadly distribute power among the many rather than concentrate it among the few.

We return to this problem again and again in our book. We outline myriad ways that AI today is concentrating power. We lay out a four-part plan of action: resist inappropriate uses of AI, seek out responsible uses of AI in governance, reform the ecosystem of AI, and renovate our democracy to prepare it for AI. The last one feels most urgent right now. We need to make systemic changes to our system—most of which will not be specific to AI—that are responsive to the impending risks the new technology brings.

Jacobsen: How should legislatures draft AI-era statutes?

Sanders: For Congress, a good start would be to finally pass comprehensive data privacy legislation. AI is providing many tech companies with new capabilities and incentives to abuse consumer trust and monetize surveillance of their behaviors, habits, and interests. The best AI assistants will be the ones that know the most about their users, but their operators will also pose the greatest risks to consumer privacy. Other jurisdictions, like Europe, have effectively made a wide range of exploitative business models infeasible by giving consumers rights to withhold consent, delete their data, and more.

In Rewiring Democracy, we also think about how the capabilities of AI will change the lawmaking process and law itself. Optimistically, AI can give legislators with limited resources help in drafting bills with fewer strings attached than, say, a lobbyist or an advocacy organization’s model legislation. The first law anywhere known to have been written by AI came from Porto Alegre, Brazil, where a city councillor simply needed some help writing a bill about water utility infrastructure and turned to ChatGPT.

In the future, legislators might choose not just to use an AI model to draft the text of a bill, but might actually choose to designate an AI model as the form of their legislation. Traditional, textual legislation suffers from ambiguity and inflexibility when it is interpreted decades later. An AI model could clarify its intent with unlimited precision and express an interpretation of a rule in response to any future special case or scenario.

Jacobsen: What might ossify law or chill innovation?

Schneier: Be careful with the phrase “chill innovation.” It’s a scary pair of words that the powerful use to shut down any talk of regulation. Do health codes chill innovation in restaurants? They don’t, and in any case maybe we don’t want restaurants whose practices make people sick.

AI is a technology that has implications throughout society. It will affect how we interact with each other. It will affect employment and the nature of labor. It will affect national security and the global economy. It will, as we talk about extensively in our book, affect democracy.

Letting a technology develop without any regulation only makes sense when the cost of getting it wrong is small. When the cost of getting it wrong can kill someone—think automobiles, or airplanes, or food service—we tend to regulate. Computers have long been in the former category; it was okay to let companies experiment unfettered because it didn’t really matter. Now, AIs are driving cars, transcribing doctors’ notes, and making life-and-death decisions about people’s insurance benefits.

Smart regulation doesn’t chill innovation. It actually incents innovation by defining pro-social requirements that companies have to meet. If we want AI to be unbiased, or secure, or safe enough for high-risk applications, we need to create those regulatory requirements that the market can innovate to meet.

Jacobsen: What would a “public AI” look like in practice? What about auditing, ownership, procurement standards, and training data governance?

Sanders: We call for the development of Public AI as an alternative to today’s corporate-dominated AI ecosystem. Corporate AI is built to serve one interest: short-term profit. That means it will always operate from a trust deficit, where the informed consumer knows that any AI model they use is ultimately built to serve someone else, not them. A public AI model—one built by a government agency, as a public good, under public control, with public oversight—would be subject to fundamentally different incentives. It could be optimized not to turn a profit, but to win public approval.

There are many visions for how to realize public AI. Indeed, several countries, including Singapore and Switzerland, have published fully open source (in data, code, and model weights) AI models built by their governments.  

Our preferred version of Public AI is a public option model. Think of it like the public option for healthcare. It doesn’t replace or invalidate the work of private companies to build AI models, or offer health insurance. Instead, it offers a competitive baseline: a minimum standard that private AI providers, or insurers, need to meet or exceed to be successful. We would not be looking to government to be the leader of the pack on innovation and performance. The public option could instead set a higher bar on other factors: being responsive to consumer input and feedback, engendering trust by disclosing its training data sources and procedures, and guaranteeing long-term and universal access at a reasonable price.

Jacobsen: How do we harness AI for research and drafting while preventing bias and confidentiality breaches?

Schneier: Let’s start with bias. First, it’s not clear that an unbiased AI is even possible, just as an unbiased human isn’t possible. And second, there are some biases we might want: a bias for fairness, or kindness, or honesty. The flip side of a bias is a value. We all want AIs to have values, even though we are never going to agree on which values we want AIs to have.

We envision a world populated with multiple AIs with different biases and values, among which people will choose. If you are a judge who is using AI to help write opinions, or a candidate who is using AI to help write speeches, you will choose an AI that has the same biases and values as you do—just as you would choose human assistants who mirror your biases and values. In those instances, a biased AI is a feature, at least to that user.

In situations where AI is being used as a neutral party—adjudicating a dispute, determining eligibility for a government service—we’re going to want to remove illegal biases and implant societal values. That’s technically hard, and many researchers and developers are trying to solve those problems right now.

Security is even harder. AIs are computer programs running on networked computers, and we cannot absolutely prevent confidentiality breaches. Additionally, there are all the security problems inherent in modern machine-learning AI systems. And aside from confidentiality, we’re worried about integrity: has the AI system been manipulated in any way, and can you trust its output? Right now we can’t.

We don’t know how to solve AI security with current technology. Any AI that is working in an adversarial environment—and by this I mean that it may encounter untrusted training data or input—is vulnerable to prompt injection attacks. It’s an existential problem that, near as we can tell, most people developing these technologies are just pretending isn’t there.

Jacobsen: Where is the ethical line between personalized civic education and manipulative micro-targeting in campaigns?

Sanders: This line was always blurry, for better and for worse. Generations ago, politicians went on whistlestop tours through communities, stopping long enough to speak from the back of a train car before moving on, in an attempt to get face to face with as many voters as possible. Some technologies made campaigning even less personal—radio and TV required candidates to broadcast the same message to everyone. Now, AI makes it possible to deliver individualized messages to every voter and to answer any questions they pose in personalized detail, any time of day.

Agentic AI frameworks make it possible to abstract the voter, too, from this conversation. In the near future, my AI assistant might reach out to every candidate’s AI to ask a series of questions and then tell me how it thinks I should vote.

We generally see these kinds of assistive capabilities as positive. They increase the information bandwidth of our democratic information systems. They make it easier for me to learn about how my views line up with the options on the ballot, and make it easier for candidates to hear from thousands or millions of voters.

But there are real risks, too. If we outsource our individual decision making to an AI proxy, there is a slippery slope to outsourcing democracy itself to the machines. And if we trust a candidate’s avatar to represent their policy positions, we give candidates yet more plausible deniability for their statements and yet another vector for demagoguery.

Jacobsen: Internationally, what cooperative norms will keep an AI-fuelled “authoritarian tech stack” from exporting illiberalism?

Schneier: We’re not going to solve this with norms. The problem with relying on things people voluntarily agree to is that not everyone will agree to them. Right now, there’s too much profit—both economic and political—to be made from violating any norm we might suggest. It’s a race to the bottom, as both corporations and countries use AI technology for their own advantage. This is why, in our book, we constantly return to both things people can do on their own and things people can do collectively through government.

Like any technology, AI isn’t inherently good or bad. Like every technology, people can use it to good and bad ends. We can use AI to make democracy more effective and responsive to the people, or we can use it to foster authoritarianism. That’s our choice. We cannot steer how technology works—that’s a matter of science and engineering—but we can steer how we implement and deploy it. That was really our goal in Rewiring Democracy.

Jacobsen: Thank you for the opportunity and your time.

