New publication: The Limits of Representativeness in Citizens’ Assemblies

New article published in the inaugural issue of the Journal of Sortition. In The Limits of Representativeness in Citizens’ Assemblies: A Critical Analysis of Democratic Minipublics, Paolo Spada and I explore key questions about representation in citizens’ assemblies, building on ideas from a blog post we published two years ago. Refined through discussions with scholars and practitioners – particularly in the Deliberative Democracy Digest – the article examines the challenges of representativeness and proposes constructive paths forward.

We explore ways to enhance these democratic innovations by:

  • Integrating multiple minipublics to address inclusion failures.
  • Leveraging emerging technologies, like AI-supported mediation, to scale deliberation.
  • Shifting the focus of legitimacy from unattainable claims of representativeness to fostering inclusion and preventing domination by organized minorities.

By reframing these approaches, we hope to contribute to ongoing efforts to make citizens’ assemblies more inclusive, effective, and impactful for democratic governance.

Printed copies of this inaugural issue are available free upon request here.

Unwritten 2025

In a discussion last week, a government official made a point that stuck with me: “Every time we discuss AI readiness,” she said, “someone tells us to wait, or to get something else done before trying it. But waiting is a decision that may cost us in the future.”

She’s right. The technology sector has mastered the art of sophisticated hand-wringing. In AI discussions, over and over again, the same cautionary refrain echoes: “We don’t know where this technology is going.” It sounds thoughtful. It feels responsible. But increasingly, I’m convinced it’s neither.

Consider how differently we approached other transformative technologies. When my colleagues and I started experimenting with mobile phones, the internet, and voice recognition for participatory processes over two decades ago, we didn’t have a crystal ball. We couldn’t have predicted cryptocurrency, TikTok, or the weaponization of social media. What we did have was a vision of the democracy we wanted to build, one where technology served citizens, not the other way around.

The results achieved by those who have been purposefully designing technology for the public good are far from perfect, but they are revealing. While social media algorithms were amplifying political divisions in the US and Myanmar, in Taiwan technology was used for large-scale consensus building. While Cambridge Analytica was mining personal data, Estonian citizens were using secure digital IDs to access public services and to conveniently vote from their homes. The difference isn’t technological sophistication – it is purpose and values.

I see the same pattern repeating with AI. In India, OpenNyAI (‘Open AI for Justice’) isn’t waiting for perfect models to explore how AI can improve access to justice. In Africa, Viamo isn’t waiting for universal internet access to leverage AI, delivering vital information to citizens through simple mobile phones without internet.

This isn’t an argument for reckless adoption – ensuring that the best guardrails available are in place must be a constant pursuit. But there’s a world of difference between thoughtful experimentation and perpetual hesitation. When we say “we don’t know where this technology is going,” we’re often abdicating our responsibility to shape its direction. It’s a comfortable excuse that mainly serves those who benefit from the status quo. That is reckless.

The future of AI isn’t a set destination we discover with time. The question isn’t whether we can predict it perfectly, but whether we’re willing to shape it at all.

Being wrong is part of the job. 

Waiting for perfect clarity is a luxury we can’t afford. But that shouldn’t mean falling prey to solutionism. This week alone, I came across one pitch promising to solve wealth inequality with blockchain-powered AI (whatever that means) and another claiming to democratize healthcare with an empathy-enhanced chatbot. Technology won’t bend the arc of history on its own – that’s still on us. 

But we can choose to stay curious, to keep questioning our assumptions, and to build technology that leaves room for human judgment, trial, and error. The future isn’t written in binary. It’s written in the messy, imperfect choices we will all make while navigating uncertainty.

Agents for the few, queues for the many – or agents for all? Closing the public services divide by regulating for AI’s opportunities.

(co-authored with Luke Jordan, originally posted on Reboot Democracy Blog)

Inequality in accessing public services is prevalent worldwide. In the UK, “priority fees” for services like passport issuance or Schengen visas allow the affluent to expedite the process. In Brazil, the middle class hires “despachantes” – intermediaries who navigate bureaucratic hurdles on their behalf. Add technology to the mix, and you get businesses like South Africa’s WeQ4U, which help the privileged sidestep the vehicle licensing queues that others endure daily. An African exception? Hardly. In the U.S., landlords use paid online services to expedite rental property licensing, while travelers pay annual fees for faster airport security screening.

If AI development continues and public sector services fail to evolve, inequalities in access will only grow. AI agents – capable of handling tasks like form filling and queries – have the potential to transform access to public services. But rather than embracing this potential, the public sector risks turning a blind eye – or worse, banning these tools outright – leaving those without resources even further behind.

The result? The private sector will have to navigate the gaps, finding ways to make AI agents work with rigid public systems. Often, this will mean operating in a legal grey zone, where the agents neither confirm nor deny they are software, masquerading as applicants themselves. Accountants routinely log into government tax portals using their clients’ credentials, acting as digital proxies without any formal delegation system. If human intermediaries are already “impersonating” their clients in government systems, it’s easy to envision AI agents seamlessly stepping into this role, automatically handling documentation and responses while operating under the same informal arrangements.

The high costs of developing reliable AI agents and the legal risks of operating in regulatory grey zones will require these tools to earn high returns, keeping them firmly in the hands of the wealthier – replicating the same inequalities that define access to today’s analogue services.

For those who can afford AI agents, life will become far more convenient. Their agents will handle everything from tax filings to medical appointments and permit applications. Meanwhile, the majority will remain stuck in endless queues, their time undervalued and wasted by outdated bureaucratic processes. Both groups, however, will lose faith in the public sector: the affluent will see it as archaic, while the underserved will face worsening service as the system fails to adapt.

The question is no longer whether AI agents will transform public services. They will. The partners of Y Combinator recently advised startup founders to “find the most boring, repetitive administrative work you can and automate it”. There is little work more boring and repetitive than public service management. The real question is whether this transformation will widen the existing divide or help bridge it. 

Banning AI agents outright is a mistake. Such an approach would amount to an admission of defeat, entrenching inequalities by design. Instead, policymakers must take bold steps to ensure equitable access to AI agents in public services. Three measures could lay the groundwork:

  1. Establish an “AI Opportunities Agency”: This agency would focus on equitable uses of AI agents to alleviate bureaucratic burdens. Its mandate would be to harness AI’s potential to improve services while reducing inequality, rather than exacerbating it. This would be the analogue of the “AI Safety Agency”, itself also a necessary body. 
  2. Develop an “Agent Power of Attorney” framework: This framework would allow users to explicitly agree that agents on an approved list could sign digitally for them for a specified list of services. Such a digital power of attorney could improve on existing forms of legal representation by being more widely accessible and by offering clearer, simpler means of delegating specific scopes (see the sketch after this list).
  3. Create a competitive ecosystem for AI agents: Governments could enable an open competition in which the state provides an option but holds no monopoly. Companies that provided agents which qualified for an approved list could be compensated by a publicly paid fixed fee tied to successful completions of service applications. That would create strong incentives for companies to compete to deliver higher and higher success rates for a wider and wider audience.
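
To make the second measure concrete, here is a minimal sketch of what an “Agent Power of Attorney” record and its authorization check might look like. Everything in it – the field names, the approved-agent registry, the scope check – is a hypothetical illustration, not a description of any existing framework:

```python
# Hypothetical sketch of a scoped, time-limited digital power of attorney.
from dataclasses import dataclass
from datetime import datetime, timezone

APPROVED_AGENTS = {"agent-registry-id-123"}  # hypothetical approved list

@dataclass
class AgentPowerOfAttorney:
    principal_id: str        # the citizen granting the mandate
    agent_id: str            # the AI agent acting on their behalf
    allowed_services: set    # explicit, narrow scope, e.g. {"tax_filing"}
    expires_at: datetime     # mandates should be time-limited
    signature: bytes         # the citizen's digital signature over the grant

def is_authorized(poa: AgentPowerOfAttorney, service: str) -> bool:
    """Check a delegation before letting an agent act on a service."""
    return (
        poa.agent_id in APPROVED_AGENTS                  # on the approved list
        and service in poa.allowed_services              # within delegated scope
        and datetime.now(timezone.utc) < poa.expires_at  # mandate still valid
    )
```

The design choice the sketch tries to capture is that delegation should be explicit, narrow, and time-limited – the opposite of today’s informal credential-sharing by accountants and despachantes.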

A public option for such agents should also be available from the beginning. If not, capture will likely result and be very difficult to reverse later. For example, the IRS’s Direct File, launched in 2024 to provide free tax filing for lower-income taxpayers, only emerged after years of resistance from tax preparation firms that had long blocked such efforts – and it continues to face strong pushback from these same firms.

One significant risk with our approach is that the approval process for AI agents could become outdated and inefficient, resulting in a roster of poorly functioning tools – a common fate in government, where approval processes often turn into bureaucratic roadblocks that stifle innovation rather than enable it.

In such a scenario, the affluent would inevitably turn to off-list agents provided by more agile startups, while ordinary citizens would view the initiative as yet another example of government mismanaging new technology. Conversely, an overly open approval process could allow bad actors to infiltrate the system, compromising digital signatures and eroding public trust in the framework.

These risks are real, but the status quo does nothing to address them. If anything, it leaves the door wide open for unregulated, exploitative actors to flood the market with potentially harmful solutions. Bad actors are already on the horizon, and their services will emerge whether governments act or not.

However, we are not starting from scratch when it comes to regulating such systems. The experience of open banking provides valuable lessons. In many countries, it is now standard practice for a curated list of authorized companies to request and receive permission to manage users’ financial accounts. This model of governance, which balances security and innovation, could serve as a blueprint for managing digital agents in public services. After all, granting permission for an agent to apply for a driver’s license or file a tax return involves similar risks to those we’ve already learned to manage in the financial sector.

The path ahead requires careful balance. We must embrace the efficiency gains of AI agents while ensuring these gains are democratically distributed. This means moving beyond the simple dichotomy of adoption versus rejection, toward a nuanced approach that considers how these tools can serve all citizens.

The alternative – a world of agents for the few, and queues for the many – would represent not just a failure of policy, but a betrayal of the fundamental promise of public services in a democratic society.

Underestimated effects of AI on democracy, and a gloomy scenario

A few years ago, Tom Steinberg and I discussed the potential risks posed by AI bots in influencing citizen engagement processes and manipulating public consultations. With the rapid advancement of AI technology, these risks have only intensified. This escalating concern has even elicited an official response from the White House.

A recent executive order has tasked the Office of Information and Regulatory Affairs (OIRA) at the White House with considering the implementation of guidance or tools to address mass comments, computer-generated remarks, and falsely attributed comments. This directive comes in response to growing concerns about the impact of AI on the regulatory process, including the potential for generative chatbots to lead mass campaigns or flood the federal agency rule-making process with spam comments.
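
The executive order does not specify what such tools would look like, but one simple ingredient is easy to picture: flagging near-duplicate submissions. The sketch below is a hypothetical illustration of that idea, not anything OIRA has adopted; the sample comments and the similarity threshold are invented:

```python
# Toy sketch of one way a "mass comment" tool might flag near-duplicate
# submissions: compare every pair of comments by fuzzy text similarity.
from difflib import SequenceMatcher

comments = [
    "Please reject rule 2024-01, it hurts small businesses.",
    "Please reject rule 2024-01; it hurts small business!",
    "I support this rule because it protects consumers.",
]

def similar(a: str, b: str, threshold: float = 0.9) -> bool:
    """Flag pairs of comments whose text is nearly identical."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

flagged = [
    (i, j)
    for i in range(len(comments))
    for j in range(i + 1, len(comments))
    if similar(comments[i], comments[j])
]
print(flagged)  # [(0, 1)] – the two near-identical comments
```

Real campaigns would of course use paraphrased, model-generated text precisely to defeat such checks, which is why the problem is harder than it looks.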

The threat of manipulation becomes even more pronounced when content generated by bots is viewed by policymakers as being on par with human-created content. There’s evidence to suggest that this may already be occurring in certain scenarios. For example, a recent experiment was designed to measure the impact of language models on effective communication with legislators. The goal was to determine whether these models could divert legislative attention by generating a constant stream of unique emails directed at lawmakers. Both human writers and GPT-3 were employed in the study. Emails were randomly sent to over 7,000 state legislators throughout the country, after which response rates were compared. The results showed a mere 2% difference in response rates between human-written and AI-generated emails, and for some of the policy topics studied, the response rates were effectively identical.

Now, the real trouble begins when governments jump on the bot bandwagon and start using their own bots to respond, and we, the humans, are left out of the conversation entirely. It’s like being the third wheel on a digital date that we didn’t even know was happening. That’s a gloomy scenario.

The Hidden Risks of AI: How Linguistic Diversity Can Make or Break Collective Intelligence

Diversity is a key ingredient in the recipe for collective intelligence because it brings together a range of perspectives, tools, and abilities, allowing for a more comprehensive approach to problem-solving and decision-making. Gender diversity on corporate boards improves firms’ performance, ethnic diversity produces more impactful scientific research, diverse groups are better at solving crimes, lay juries are less biased than professional judges, and politically diverse editorial teams produce higher-quality Wikipedia articles.

Large language models, like those powering today’s AI systems, rely heavily on vast datasets or corpora, a significant share of which is English content. This dominance is consequential. Just as diverse groups of people yield richer outcomes, an AI trained on diverse linguistic data offers a broader perspective. Each language encapsulates unique thoughts, metaphors, and wisdom. Without diverse linguistic representation, we risk fostering AI systems with limited collective intelligence. The quality, diversity, and quantity of the data they are trained on directly influence their epistemic outputs. Unsurprisingly, large language models struggle to capture long-tail knowledge.

This comes with two major, at least hypothetical, risks: 1) systems that do not fully leverage the knowledge dispersed across the population, and 2) benefits of AI that are more accessible to some groups than others; for instance, speakers of less-dominant languages might not benefit equally from AI’s advancements. It’s not merely about translation; it’s the nuances and knowledge embedded in languages that might be overlooked.

There are also two additional dimensions that could reinforce biases in AI systems: 1) as future models are trained on content that might have been generated by AI, there may be a reinforcing effect where biases present in the initial training data are amplified over time; and 2) techniques such as guided transfer learning may also increase biases if the source model used in transfer learning is trained on biased data.
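
The first of these dynamics can be made concrete with a toy simulation. The initial language shares and the 5% over-sampling factor below are arbitrary assumptions chosen for illustration, not empirical estimates:

```python
# Toy feedback loop: each model "generation" is trained partly on the
# previous generation's outputs, which slightly over-sample the dominant
# language. Watch the minority-language shares shrink round after round.
shares = {"english": 0.60, "hindi": 0.25, "swahili": 0.15}
SKEW = 1.05  # dominant-language content is over-sampled by 5% per round

for generation in range(1, 11):
    shares["english"] *= SKEW  # amplification step
    total = sum(shares.values())
    shares = {lang: s / total for lang, s in shares.items()}  # renormalize
    print(generation, {lang: round(s, 3) for lang, s in shares.items()})
```

Even a small per-round skew compounds: after ten rounds the dominant language’s share has grown at the expense of every minority language, which is exactly the reinforcing effect described above.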

This introduces a nuanced dimension to the digital divide. Historically, the digital divide was characterized by access to technology, internet connectivity, digital skills, and the socio-economic variables shaping these factors. Yet, with AI, our understanding of what constitutes the digital divide should expand. It’s a subtler yet crucial divide that policymakers and development practitioners might not yet fully recognize.

Voices in the Code: Citizen Participation for Better Algorithms


Voices in the Code, by David G. Robinson, is finally out. I had the opportunity to read the book prior to its publication, and I cannot recommend it enough. David shows how, between 2004 and 2014 in the US, experts and citizens came together to build a new kidney transplant matching algorithm. David’s work is a breath of fresh air in the debate surrounding the impact of algorithms on individuals and societies – a debate typically focused on the negative and sometimes disastrous effects of algorithms. While David conveys these risks at the outset of the book, focusing solely on these threats would add little to a public discourse already saturated with concerns.

One of the major missing pieces in the “algorithmic literature” is precisely how citizens, experts and decision-makers can make their interactions more successful, working towards algorithmic solutions that better serve societal goals. The book offers a detailed and compelling case in which a long, participatory process leads to the crafting of an algorithm that delivers a public good, despite the technical complexities, moral dilemmas, and difficult trade-offs involved in decisions related to the allocation of kidneys to transplant patients. Such a feat would not have been achieved without another contribution of the book: a didactic demystification of algorithms, normally treated as the reserved domain of a few experts.

As David conducts his analysis, one also finds an interesting reversal of the assumed relationship between technology and participatory democracy. This relationship has mostly been examined from a civic tech angle, focusing on how technologies can support democratic participation through practices such as e-petitions, online citizens’ assemblies, and digital participatory budgeting. Thus, another original contribution of this book is to look at the relationship from the opposite angle: how participatory processes can better support technological deployments. While technology for participation (civic tech) remains an important topic, we should probably start paying more attention to how participation can support technological solutions (civic for tech).

Continuing through the book, other interesting insights emerge. For instance, technology and participatory democracy pundits normally subscribe to the virtues of decentralized systems, from both a technological and an institutional perspective. Yet David depicts precisely the virtues of a decision-making system centralized at the national level. Were organ transplant issues decided at the local level in the US, the results would probably not be as successful. Counterintuitively, David presents a clear case where centralized (although participatory) systems might offer better collective outcomes. Surfacing this finding is a welcome contribution to debates on the trade-offs between centralization and decentralization, both from a technological and institutional standpoint.

But a few paragraphs here cannot do the book justice. Voices in the Code is certainly a must-read for anybody working on issues ranging from institutional design and participatory democracy, all the way to algorithmic accountability and decision support systems.

***

P.S. As an intro to the book, here’s a nice 10-minute conversation with David on the Marketplace podcast.