We explore ways to enhance these democratic innovations by:
Integrating multiple minipublics to address inclusion failures.
Leveraging emerging technologies, like AI-supported mediation, to scale deliberation.
Shifting the focus of legitimacy from unattainable claims of representativeness to fostering inclusion and preventing domination by organized minorities.
By reframing these approaches, we hope to contribute to ongoing efforts to make citizens’ assemblies more inclusive, effective, and impactful for democratic governance.
Inequality in accessing public services is prevalent worldwide. In the UK, “priority fees” for services like passport issuance or Schengen visas allow the affluent to expedite the process. In Brazil, the middle class hires “despachantes” – intermediaries who navigate bureaucratic hurdles on their behalf. Add technology to the mix, and you get businesses like South Africa’s WeQ4U, which help the privileged sidestep the vehicle licensing queues that others endure daily. An African exception? Hardly. In the U.S., landlords use paid online services to expedite rental property licensing, while travelers pay annual fees for faster airport security screening.
If AI development continues and public sector services fail to evolve, inequalities in access will only grow. AI agents – capable of handling tasks like form filling and queries – have the potential to transform access to public services. But rather than embracing this potential, the public sector risks turning a blind eye – or worse, banning these tools outright – leaving those without resources even further behind.
The result? The private sector will have to navigate the gaps, finding ways to make AI agents work with rigid public systems. Often, this will mean operating in a legal grey zone, where the agents neither confirm nor deny they are software, masquerading as applicants themselves. Accountants routinely log into government tax portals using their clients’ credentials, acting as digital proxies without any formal delegation system. If human intermediaries are already “impersonating” their clients in government systems, it’s easy to envision AI agents seamlessly stepping into this role, automatically handling documentation and responses while operating under the same informal arrangements.
The high costs of developing reliable AI agents and the legal risks of operating in regulatory grey zones will require them to earn high returns, keeping these tools firmly in the hands of the wealthier – replicating the same inequalities that define access to today’s analogue services.
For those who can afford AI agents, life will become far more convenient. Their agents will handle everything from tax filings to medical appointments and permit applications. Meanwhile, the majority will remain stuck in endless queues, their time undervalued and wasted by outdated bureaucratic processes. Both groups, however, will lose faith in the public sector: the affluent will see it as archaic, while the underserved will face worsening service as the system fails to adapt.
The question is no longer whether AI agents will transform public services. They will. The partners of Y Combinator recently advised startup founders to “find the most boring, repetitive administrative work you can and automate it”. There is little work more boring and repetitive than public service management. The real question is whether this transformation will widen the existing divide or help bridge it.
Banning AI agents outright is a mistake. Such an approach would amount to an admission of defeat and would entrench inequalities by design. Instead, policymakers must take bold steps to ensure equitable access to AI agents in public services. Three measures could lay the groundwork:
Establish an “AI Opportunities Agency”: This agency would focus on equitable uses of AI agents to alleviate bureaucratic burdens. Its mandate would be to harness AI’s potential to improve services while reducing inequality, rather than exacerbating it. This would be the analogue of the “AI Safety Agency”, itself also a necessary body.
Develop an “Agent Power of Attorney” framework: This framework would allow users to explicitly agree that agents on an approved list could sign digitally for them for a specified list of services. Such a digital power of attorney could improve on existing forms of legal representation by being more widely accessible and by offering clearer and simpler means of delegating for specific scopes (a hypothetical sketch of what such a delegation record might look like follows this list).
Create a competitive ecosystem for AI agents: Governments could enable open competition in which the state provides an option but holds no monopoly. Companies whose agents qualify for the approved list could be compensated with a publicly funded fixed fee tied to successful completions of service applications. That would create strong incentives for companies to compete to deliver ever-higher success rates for an ever-wider audience.
A public option for such agents should also be available from the beginning. If not, capture will likely result and be very difficult to reverse later. For example, the IRS’s Direct File, launched in 2024 to provide free tax filing for lower-income taxpayers, only emerged after years of resistance from tax preparation firms that had long blocked such efforts – and it continues to face strong pushback from these same firms.
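To make the second measure above more concrete, here is a minimal, purely hypothetical sketch of what an “Agent Power of Attorney” record might look like. All names and fields (principal_id, agent_id, scopes, and so on) are illustrative assumptions, not a specification of any existing system:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AgentPowerOfAttorney:
    """Hypothetical delegation record: a citizen authorizes one approved agent
    to sign digitally on their behalf, but only for an explicit list of services
    and only within a defined time window."""
    principal_id: str                    # the citizen granting the delegation
    agent_id: str                        # an agent drawn from the approved list
    scopes: list[str]                    # e.g. ["tax_filing", "drivers_license_renewal"]
    valid_from: date
    valid_until: Optional[date] = None   # None means "until revoked"
    revoked: bool = False

    def authorizes(self, service: str, on: date) -> bool:
        """Check whether the agent may act for this service on this date."""
        within_window = self.valid_from <= on and (
            self.valid_until is None or on <= self.valid_until
        )
        return not self.revoked and within_window and service in self.scopes


# Example: the delegation covers tax filing but not a passport application.
poa = AgentPowerOfAttorney(
    principal_id="citizen-123",
    agent_id="approved-agent-42",
    scopes=["tax_filing", "drivers_license_renewal"],
    valid_from=date(2025, 1, 1),
    valid_until=date(2025, 12, 31),
)
print(poa.authorizes("tax_filing", date(2025, 6, 1)))            # True
print(poa.authorizes("passport_application", date(2025, 6, 1)))  # False: out of scope
```

The point of the sketch is the scoping: unlike the informal credential-sharing described above, the delegation is explicit, limited to named services, time-bound, and revocable.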
One significant risk with our approach is that the approval process for AI agents could become outdated and inefficient, resulting in a roster of poorly functioning tools – a common fate in government, where approval processes often turn into bureaucratic roadblocks that stifle innovation rather than enable it.
In such a scenario, the affluent would inevitably turn to off-list agents provided by more agile startups, while ordinary citizens would view the initiative as yet another example of government mismanaging new technology. Conversely, an overly open approval process could allow bad actors to infiltrate the system, compromising digital signatures and eroding public trust in the framework.
These risks are real, but the status quo does nothing to address them. If anything, it leaves the door wide open for unregulated, exploitative actors to flood the market with potentially harmful solutions. Bad actors are already on the horizon, and their services will emerge whether governments act or not.
However, we are not starting from scratch when it comes to regulating such systems. The experience of open banking provides valuable lessons. In many countries, it is now standard practice for a curated list of authorized companies to request and receive permission to manage users’ financial accounts. This model of governance, which balances security and innovation, could serve as a blueprint for managing digital agents in public services. After all, granting permission for an agent to apply for a driver’s license or file a tax return involves similar risks to those we’ve already learned to manage in the financial sector.
The path ahead requires careful balance. We must embrace the efficiency gains of AI agents while ensuring these gains are democratically distributed. This means moving beyond the simple dichotomy of adoption versus rejection, toward a nuanced approach that considers how these tools can serve all citizens.
The alternative – a world of agents for the few, and queues for the many – would represent not just a failure of policy, but a betrayal of the fundamental promise of public services in a democratic society.
For proponents of deliberative democracy, the last couple of years could not have been better. Propelled by the recent diffusion of citizens’ assemblies, deliberative democracy has definitely gained popularity beyond small circles of scholars and advocates. From CNN to the New York Times, the Hindustan Times (India), Folha de São Paulo (Brazil), and Expresso (Portugal), it is now difficult to keep up with all the interest in democratic models that promote the random selection of participants who engage in informed deliberation. A new “deliberative wave” is clearly here.
But with popularity comes scrutiny. And whether the deliberative wave will bring new energy or crash onto the beach is an open question. As is the case with any democratic innovation (institutions designed to improve or deepen our existing democratic systems), critically examining assumptions is what allows for management of expectations and, most importantly, gradual improvements.
Proponents of citizens’ assemblies put representativeness at the core of their definition. In fact, it is one of their main selling points. For example, a comprehensive report highlights that an advantage of citizens’ assemblies, compared to other mechanisms of participatory democracy, is their typical combination of random selection and stratification to form a public body that is “representative of the public.” This general argument resonates with the media and the wider public. A recent illustration is an article in The Guardian, which depicts citizens’ assemblies as “a group of people who are randomly selected and reflect the demographics of the population as a whole.”
It should be noted that claims of representativeness vary in their assertiveness. For instance, some may refer to citizens’ assemblies as “representative deliberative democracy,” while others may use more cautious language, referring to assemblies’ participants as being “broadly representative” of the population (e.g. by gender, age, education, attitudes). This variation in terms used to describe representativeness should prompt an attentive observer to ask basic questions such as: “Are existing practices of deliberative democracy representative?” “If they are ‘broadly’ representative, how representative are they?” “What criteria, if any, are used to assess whether a deliberative democracy practice is more or less representative of the population?” “Can their representativeness be improved, and if so, how?” These are basic questions that, surprisingly, have been given little attention in recent debates surrounding deliberative democracy. The purpose of this article is to bring attention to these basic questions and to provide initial answers and potential avenues for future research and practice.
Citizens’ assemblies and three challenges of random sampling
Before discussing the subject of representativeness, it is important to provide some conceptual clarity. From an academic perspective, citizens’ assemblies are a variant of what political scientists normally refer to as “mini-publics.” These are processes in which participants: 1) are randomly selected (often combined with some form of stratification), 2) participate in informed deliberation on a specific topic, and 3) reach a public judgment and provide recommendations on that topic. Thus, in this text, “mini-publics” serves as a general term for a variety of practices such as consensus conferences, citizens’ juries, planning cells, and citizens’ assemblies themselves.
In this discussion, we will focus on what we consider to be the three main challenges of random sampling. First, we will examine the issue of sample size and the limitations of stratification in addressing this challenge. Second, we will focus on sampling error, which is the error that occurs when observing a sample rather than the entire population. Third, we will examine the issue of non-response, and how the typically small sample size of citizens’ assemblies exacerbates this problem. We conclude by offering alternatives to approach the trade-offs associated with mini-publics’ representativeness dilemma.
Minimal sample size, and why stratification does not help reduce sample size requirements in complex populations
Most mini-publics that we know of have a sample size of around 70 participants or less, with a few cases having more than 200 participants. However, even with a sample size of 200 people, representing a population accurately is quite difficult. This may be the reason why political scientist Robert Dahl, who first proposed the use of mini-publics over three decades ago, suggested a sample size of 1000 participants. This is also the reason why most surveys that attempt to represent a complex national population have a sample size of over 1000 people.
To understand why representing a population accurately is difficult, consider that a sample size of approximately 370 individuals is enough to estimate a parameter of a population of 20,000 with a 5% error margin and 95% confidence level (for example, estimating the proportion of the population that answers “yes” to a question). However, if the desired error margin is reduced to 2%, the sample size increases to over 2,000, and for a more realistic population of over 1 million, a sample size of over 16,000 is required to achieve a 1% error margin with 99% confidence. Although the size of the sample required to estimate simple parameters in surveys does not increase significantly with the size of the population, it still increases beyond the sample sizes currently used in most mini-publics. Sample size calculators are available online to demonstrate these examples without requiring any statistical knowledge.
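As a rough illustration, the figures above can be reproduced with the standard Cochran sample size formula plus a finite population correction – the same calculation most online sample size calculators perform. This is a minimal sketch; exact results differ by a few units depending on rounding conventions:

```python
from math import ceil

def required_sample_size(population, margin, z, p=0.5):
    """Cochran's sample size formula with finite population correction.
    population: size of the population to be represented
    margin: desired margin of error (0.05 for 5%)
    z: z-score for the confidence level (1.96 for 95%, 2.576 for 99%)
    p: assumed proportion; 0.5 is the most conservative choice
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return ceil(n)

# The three examples from the text:
print(required_sample_size(20_000, 0.05, 1.96))      # 377 ("approximately 370" in the text)
print(required_sample_size(20_000, 0.02, 1.96))      # 2144 ("over 2,000" in the text)
print(required_sample_size(1_000_000, 0.01, 2.576))  # 16319 ("over 16,000" in the text)
```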
Stratification is a strategy that can help reduce the error margin and achieve better precision with a fixed sample size. However, stratification alone cannot justify the very small sample sizes that are currently used in most mini-publics (70 or less).
To understand why, let’s consider that we want to create a sample that represents five important strata of the population and includes all their intersections: ethnicity, age, income, geographical location, and gender. For simplicity, let’s assume that each of the first four strata splits into five equal groups in society, and gender is composed of two equal groups. The minimal sample required to include the intersections of all the strata and represent this population is 5^4 × 2 = 1,250. Note that we have maintained the somewhat unlikely assumption that all categories have equal size. If one stratum, such as ethnicity, includes a minority that is 1/10 of the population, then our multiplier for that stratum would be 10 instead of 5, requiring a sample size of 5^3 × 10 × 2 = 2,500.
This multiplier is independent of the number of categories within the stratum, so even if the stratum has only two categories, one comprising 90% (9/10) of the population and one comprising 10% (1/10), the multiplier would still be 10. When we want to represent a minority of 1% (1/100) of the population, the multiplier becomes 100. Note that this minimal sample size would include the intersection of all the strata in such a population, but such a small sample will not be representative of each stratum. To achieve stratum-level representation, we need to increase the number of people for each stratum following the same mathematical rules we used for simple sampling, as described at the beginning of this section, generating a required sample size in the order of hundreds of thousands of people (in our example above, 370 × 2,500 = 925,000).
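The arithmetic in the two paragraphs above can be written out in a few lines. The sketch below simply multiplies the inverse shares of the rarest category in each stratum (under the simplifying assumption of independent strata) and then scales by the number of people needed per intersection cell:

```python
from math import prod, ceil

def minimal_intersection_sample(rarest_shares, per_cell=1):
    """Smallest sample in which every intersection of strata categories appears,
    assuming independent strata.
    rarest_shares: share of the rarest category in each stratum
    per_cell: people needed per intersection cell (1 just to be present;
              roughly 370 for stratum-level estimates, as discussed above)
    """
    cells = prod(1 / share for share in rarest_shares)
    return ceil(cells * per_cell)

# Four strata of five equal groups plus gender (two equal groups): 5^4 x 2
print(minimal_intersection_sample([0.2, 0.2, 0.2, 0.2, 0.5]))                # 1250
# One stratum with a 10% minority raises that stratum's multiplier to 10
print(minimal_intersection_sample([0.2, 0.2, 0.2, 0.1, 0.5]))                # 2500
# Stratum-level representation: roughly 370 people per intersection cell
print(minimal_intersection_sample([0.2, 0.2, 0.2, 0.1, 0.5], per_cell=370))  # 925000
```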
This is without even entering into the discussion of what the ideal set of strata should be in order to achieve legitimacy. Should we also include attitudes such as liberal vs. conservative? Opinions on the topic of the assembly? Personality type? Education? Income? Previous level of engagement in politics? In sum, the more complex the population is, the larger the sample required to represent it.
Sampling error due to a lack of a clear population list
When evaluating sampling methods, it is important to consider that creating a random sample of a population requires a starting population to draw from. In some fields, the total population is well-defined and data is readily available (e.g. students in a school, members of parliament), but in other cases such as a city or country, it becomes more complicated.
The literature on surveys contains multiple publications on sampling issues, but for our purposes, it is sufficient to note that without a police state or similar means of collecting an unprecedented amount of information on citizens, creating a complete list of people in a country to draw our sample from is impossible. All existing lists (e.g. electoral lists, telephone lists, addresses, social security numbers) are incomplete and biased.
This is why survey companies charge significant amounts of money to allow customers to use their model of the population, which is a combination of multiple subsamples that have been optimized over time to answer specific questions. For example, a survey company that specializes in election forecasting will have a sampling model optimized to minimize errors in estimating parameters of the population that might be relevant for electoral studies, while a company that specializes in retail marketing will have a model optimized to minimize forecasting errors in predicting sales of different types of goods. Each model will draw from different samples, applying different weights according to complex algorithms that are optimized against past performance. However, each model will still be an imperfect representation of the population.
Therefore, even the best possible sampling method will have an inherent error. It is difficult, if not impossible, to perfectly capture the entire population, so our samples will be drawn from a subpopulation that carries biases. This problem is further accentuated for low-cost mini-publics that cannot afford expensive survey companies or do not have access to large public lists like electoral or census lists. These mini-publics may have a very narrow and biased initial subpopulation, such as only targeting members of an online community, which brings its own set of biases.
Non-response
A third factor, well-known among practitioners and community organizers, is the fact that receiving an invitation to participate does not mean a person will take part in the process. Thus, any invitation procedure has issues of non-participation. This is probably the most obvious factor that prevents one from creating representative samples of the population. In mini-publics with large samples of participants, such as Citizens’ Assemblies, the conversion rate is often quite low, sometimes less than 10%. By conversion rate, we mean the percentage of the people contacted who say they are willing to participate and enter the recruitment pool. Simpler mini-publics of shorter duration (e.g. one weekend) often achieve higher engagement. A dataset on conversion rates of mini-publics does not exist, but our own experience in organizing Citizens’ Assemblies, Deliberative Polls, and their clones tells us that it is possible to achieve more than 20% conversion when the topic is very controversial. For example, in the UK’s Citizens’ Assembly on Brexit in 2017, 1,155 people agreed to enter the recruitment pool out of the 5,000 contacted, generating a conversion rate of 23.1%, as illustrated below.[1]
Figure 1: Contact and recruitment numbers for the UK’s Citizens’ Assembly on Brexit (Renwick et al. 2017)
We do not pretend to know all the existing cases, and so this data should be taken with caution. Maybe there have been cases with 80% conversion, given that it is possible to achieve such rates in surveys. But even in such a hypothetical best case, we would have failed to engage 20% of those invited. More realistically, with 10 to 30% engagement, we are engaging only a very narrow subset of the population.
Frequently asked questions, and why we should not abandon sortition
It is clear from the points above that the assertion that the current generation of relatively small mini-publics is representative of the population from which it is drawn is questionable. Not surprisingly, the fact that participants of mini-publics differ from the population they are supposed to represent was already documented over a decade ago.[2] However, in our experience, when confronted with these facts, practitioners and advocates of mini-publics often raise various questions. Below, we address five frequently asked questions and provide answers for them.
“But people use random sampling for surveys and then claim that the results are representative, what is the difference for mini-publics?”
The first difference we already discussed between surveys and mini-publics is that surveys that aim to represent a large population use larger samples.
The second difference, less obvious, is that a mini-public is not a system that aggregates fixed opinions. Rather, one of the core principles of mini-publics is that participants deliberate and their opinions may change as a result of the group process and composition. Our sampling procedures, however, are based on the task of estimating population parameters, not generating input for legitimate decision making. While a 5% error margin with 95% confidence level may be acceptable in a survey investigating the proportion of people who prefer one policy over another, this same measure cannot be applied to a mini-public because participants may change their opinions through the deliberation process. A mini-public is not an estimate derived from a simple mathematical formula, but rather a complex process of group deliberation that may transform input preferences into output preferences and potentially lead to important decisions. Cristina Lafont has used a similar argument to criticize even an ideal sample that achieves perfect input representativeness.[3]
“But we use random assignment for experiments and then claim that the results are representative, what is the difference for mini-publics?”
Mini-publics can be thought of as experiments, similar to clinical trials testing the impact of a vaccine. This approach allows us to evaluate the impact of a mini-public on a subset of the population, providing insight into what would happen if a similar subset of the population were to deliberate. Continuing this metaphor, if the mini-public participants co-design a new policy solution and support its implementation, any similar subsets of the population going through an identical mini-public process should generate a similar output.
However, clinical trials require that the vaccine and a placebo be randomly assigned to treatment and control groups. This approach is only valid if the participants are drawn from a representative sample and cannot self-select into each experimental arm.
Unfortunately, few mini-publics compare the decisions made by members to those who were not selected, and this is not considered a key element for claiming representativeness or legitimacy. Furthermore, while random assignment of treatment and control is crucial for internal validity, it does not guarantee external validity. That is, the results may not be representative of the larger population, and the estimate of the treatment effect only applies to the specific sample used in the experiment.
While the metaphor of the experiment as a model to interpret mini-publics is preferable to the metaphor of the survey, it does not solve the issue of working with non-representative samples in practice. Therefore, we must continue to explore ways to improve the representativeness of mini-publics and take into account the limitations of the experimental metaphor when designing and interpreting their results.
“Ok, mini-publics may not be perfect, but are they not clearly better than other mechanisms?”
Thus far, we have provided evidence that the claim of mini-publics as representative of the population is problematic. But what about more cautious claims, such as mini-publics being more inclusive than other participatory processes (e.g., participatory budgeting, e-petitions) that do not employ randomization? Many would agree that traditional forms of consultation tend to attract “usual suspects” – citizens who have a higher interest in politics, more spare time, higher education, enjoy talking in public, and sometimes enjoy any opportunity to criticize. In the US, for instance, these citizens are often older white males, or, as a practitioner once put it, “the male, pale and stale.” A typical mini-public instead manages to engage a more diverse set of participants than traditional consultations. While this is true, the engagement strategies of mini-publics and of traditional consultations based on self-selection have very different levels of sophistication and cost. Mini-publics tend to invest more resources in engagement, sometimes tens of thousands of dollars, and thus we cannot exclude that existing results in terms of inclusion are purely due to better outreach techniques, such as mass recruitment campaigns and stipends for the participants.
Therefore, it is not fair to compare traditional consultations to mini-publics, just as it is not fair to compare mini-publics that are not specifically designed to include marginalized populations to open-to-all processes that are specifically designed for this purpose. The classic critique from feminist, intersectional, and social movement scholars – that mini-public design does not consider existing inequalities and is thus inferior to dedicated processes of minority engagement – is valid in that case. This is because the resources dedicated to engagement are positively correlated with inclusion. For instance, processes specifically designed for immigrants and native populations will have more inclusive results than a general random selection strategy that has neither specific quotas for these groups nor engagement strategies targeting them.
We talk past one another when we try to rank processes with respect to their supposed inclusion performance without considering the impact of the resources dedicated to engagement or their intended effects (e.g. redistribution, collective action).
It is also difficult to determine which approach is more inclusive without a significant amount of research comparing different participatory methods with similar outreach and resources. As far as we know, the only study that compares two similar processes – one using random engagement and the other using an open-to-all invitation – found little difference in inclusiveness.[4] It also highlighted the importance of other factors such as the design of the process, potential political impact, and the topic of discussion. Many practitioners do not take these factors into account, and instead focus solely on recruitment strategies. While one study is not enough to make a conclusive judgment, it does suggest that the assumption that mini-publics using randomly selected participants are automatically more inclusive than open-to-all processes is problematic.
“But what about the ergonomics of the process and deliberative quality? Small mini-publics are undeniably superior to large open-to-all meetings.”
One of the frequently advertised advantages of small mini-publics is their capacity to support high-quality deliberation and include all members of the sample in the discussion. This is a very clear advantage; however, it has nothing to do with random sampling. It is not difficult to imagine a system in which an open-to-all meeting is called and then such a meeting selects a smaller number of representatives that will proceed to discuss using high-quality deliberative procedures. The selection rule could include quotas so that the selected members respect criteria of diversity of interest (even though, as we argued before, that would not be representative of the entire group). The ergonomics and inclusion advantages are purely linked with the size of the assembly and the process used to support deliberation.
“So, are you saying we should abandon sortition?”
We hope that it is now clearer why we contend that it is conceptually erroneous to defend the application of sortition in mini-publics based on their statistical representation of the population. So, should sortition be abandoned? Our position is that it should not, for a less obvious and somewhat counterintuitive reason: random sampling offers a fair way to exclude certain groups from the mini-public. This is particularly important because, in certain cases, participatory mechanisms based on self-selection may be captured by organized minorities to the detriment of disengaged majorities.
Consider, for instance, one of President Obama’s first attempts to engage citizens at a large scale: the White House’s online town hall. Through a platform named “Open for Questions,” citizens were able to submit questions to Obama and vote for which questions they would like him to answer. Over 92,000 people posted questions, and about 3.6 million votes were cast for and against those questions. Under the “budget” section, seven of the ten most popular queries were about legalizing marijuana, many of them about taxing it. The popularity of this issue was attributed to a campaign led by NORML, an organization advocating for pot legalization. While the cause and ideas may be laudable, it is fair to assume that this was hardly the biggest budgetary concern of Americans in the aftermath of an economic downturn.
(Picture by Pete Souza, Wikimedia Commons)
In a case like the White House’s town hall, randomizing who gets to participate would be a fair and effective way to avoid the capture of the dialogue by organized groups. Randomization does not completely exclude the possibility of capturing a deliberative space, but it does increase the costs of doing so: the probability that members of an organized minority are randomly sampled into a mini-public is small, and so are the odds of their dominating it. Thus, even if we had a technological solution capable of organizing large-scale deliberation in the millions, a randomization strategy could still be an effective means to protect deliberation from capture by organized minorities. A legitimate method of exclusion will remain an asset – at least until we have another legitimate way to mitigate the ability of small, organized minorities to bias deliberation.
The way forward for mini-publics: go big or go home?
There is clearly a case for increasing the size of mini-publics to improve their ability to represent the population. But there is also a trade-off between the size of the assembly and the cost required to sustain high-quality deliberation. With sizes approaching 1000 people, hundreds of moderators will be required and much of the exchange of information will occur not through synchronous exchanges in small groups, but through asynchronous transmission mechanisms across the groups. This is not necessarily a bad thing, but it will have the typical limitations of any type of aggregation mechanism that requires participant attention and effort. For example, in an ideation process with 100 groups of 10 people each, where each group proposes one idea and then discusses all other ideas, each group would have to discuss 100 ideas. This is a very intense task. However, there could be filtering mechanisms that require subgroups to eliminate non-interesting ideas, and other solutions designed to reduce the amount of effort required by participants.
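Purely to illustrate how such filtering could cap the workload, here is a toy sketch of one possible mechanism – our own hypothetical example, not an established design. Ideas are shuffled into small batches, each batch is reviewed by a single group, and only the better-rated half advances to the next round, so no group ever discusses more than a handful of ideas at a time:

```python
import random

def tournament_filter(ideas, batch_size=10, keep_ratio=0.5):
    """Toy filtering mechanism: each round, every group reviews only one random
    batch of ideas and keeps the better-rated half, until few ideas remain."""
    pool = list(ideas)
    rounds = 0
    while len(pool) > batch_size:
        random.shuffle(pool)
        survivors = []
        for start in range(0, len(pool), batch_size):
            batch = pool[start:start + batch_size]
            # Stand-in for a group's deliberative ranking of its batch:
            batch.sort(key=lambda idea: idea["score"], reverse=True)
            survivors.extend(batch[: max(1, round(len(batch) * keep_ratio))])
        pool = survivors
        rounds += 1
    return pool, rounds

# 100 ideas from 100 groups: each group reviews about 10 ideas per round instead of 100.
ideas = [{"id": i, "score": random.random()} for i in range(100)]
finalists, n_rounds = tournament_filter(ideas)
print(f"{len(finalists)} finalist ideas after {n_rounds} rounds")
```

In practice the “score” would come from each group’s deliberation rather than a pre-assigned number, but the structure shows how per-group effort can stay roughly constant even as the total number of ideas grows.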
All else being equal, as the size of the assembly grows, the logistical complexity and associated costs increase. At the same time, the ability to analyze and integrate all the information generated by participants diminishes. The question of whether established technologies like argument mapping, or even emerging artificial intelligence, could help overcome the challenges associated with mass deliberation is an empirical one – but it’s certainly an avenue worth exploring through experiments and research. Recent designs of permanent mini-publics, such as those adopted in Belgium (Ostbelgien, Brussels) and Italy (Milan), which resample a small new group of participants every year, could over time include a sufficiently large sample of the population to achieve a good level of representation, at least for some strata – as long as systematic sampling errors are corrected and the obvious caveats in terms of representativeness are clearly communicated.
Another approach is to abandon the idea of achieving representativeness and instead target specific problems of inclusion. This is a small change in the current approach to mini-publics, but in our opinion, it will generate significant returns in terms of long-term legitimacy. Instead of justifying a mini-public through a blanket claim of representation, the justification in this model would emerge from a specific failure in inclusion. For example, imagine that neighborhood-level urban planning meetings in a city consistently fail to involve renters and disproportionately engage developers and business owners. In such a scenario, a stratified random sample approach that reserves quotas for renters and includes specific incentives to attract them, and not the other types of participants, would be a fair strategy to prevent domination. However, note that this approach is only feasible after a clear inclusion failure has been detected.
In conclusion, from a democratic innovations perspective, there seem to be two productive directions for mini-publics: increasing their size or focusing on addressing failures of inclusiveness. Expanding the size of assemblies involves technical challenges and increased costs, but in certain cases it might be worth the effort. Addressing specific cases of exclusion, such as domination by organized minorities, may be a more practical and scalable approach. This second approach might not seem very appealing at first. But one should not be discouraged by our unglamorous example of fixing urban planning meetings. In fact, this approach is particularly attractive given that inclusion failures can be found across multiple spaces meant to be democratic – from neighborhood meetings to parliaments around the globe.
For mini-public practitioners and advocates like ourselves, this should come as a comfort: there’s no shortage of work to be done. But we might be more successful if, in the meantime, we shift the focus away from the representativeness claim.
****************
We would like to express our gratitude to Amy Chamberlain, Andrea Felicetti, Luke Jordan, Jon Mellon, Martina Patone, Thamy Pogrebinschi, Hollie Russon Gilman, Tom Steinberg, and Anthony Zacharewski for their valuable feedback on previous versions of this post.
[1] Renwick, A., Allan, S., Jennings, W., McKee, R., Russell, M., & Smith, G. (2017). A Considered Public Voice on Brexit: The Report of the Citizens’ Assembly on Brexit.
[2] Goidel, R., Freeman, C., Procopio, S., & Zewe, C. (2008). Who participates in the ‘public square’ and does it matter? Public Opinion Quarterly, 72, 792-803. doi: 10.1093/poq/nfn043
[3] Lafont, C. (2015). Deliberation, participation, and democratic legitimacy: Should deliberative mini-publics shape public policy? Journal of Political Philosophy, 23(1), 40-63.
[4] Griffin, J., Abdel-Monem, T., Tomkins, A., Richardson, A., & Jorgensen, S. (2015). Understanding Participant Representativeness in Deliberative Events: A Case Study Comparing Probability and Non-Probability Recruitment Strategies. Journal of Public Deliberation, 11(1). doi: 10.16997/jdd.221
Open government’s uncertain effects and the Biden opportunity: what now?
A review of 10 years of open government research reveals: 1) “a transparency-driven focus”, 2) “methodological concerns”, and 3) [maybe not surprisingly] “the lack of empirical evidence regarding the effects of open government”. My take is that these findings are somewhat self-reinforcing.
First, the early focus on transparency by open government advocates, while ignoring the conditions under which transparency could lead to public goods, should be, in part, to blame. This is even more so if open government interventions insist on tactical, instead of strategic, approaches to accountability. Second, the fact that many of those engaging in open government efforts do not take into account the existing evidence doesn’t help in terms of designing appropriate reforms, nor in terms of calibrating expectations. Proof of this is the recurrent and mostly unsubstantiated spiel that “transparency leads to trust”, voiced by individuals and organizations who should know better. Third, should there be any effects of open government reforms, these are hard to verify in a credible manner given that evaluations often suffer from methodological weaknesses, as indicated by the paper.
Finally, open government’s semantic extravaganza makes building critical mass all the more difficult. For example, I have my doubts over whether the paper would reach similar conclusions had it expanded the review to practices that are not normally labeled as open government in the literature. This would be the case, for instance, of participatory budgeting (which has been shown to improve service delivery and increase tax revenues), or of strategic approaches to social accountability that present substantial results in terms of development outcomes.
In any case, the research findings are still troubling. The election of President Biden gives some extra oxygen to the open government agenda, and that is great news. But in a context where autocratization turns viral, making a dent in how governments operate will take less policy-based evidence searching and more evidence-based strategizing. That involves leveraging the existing evidence when it is available, and when it is not, the standard path applies: more research is needed.
Open Government Partnership and Justice
On another note, Joe Foti, from the Open Government Partnership (OGP), writes on the need to engage more lawyers, judges and advocates in order to increase the number of accountability-focused OGP commitments. I particularly like Joe’s ideas on bringing these actors together to identify where OGP commitments could be stronger, and how. This resonates with a number of cases I’ve come across in the past where the judiciary played a key role in ensuring that citizens’ voice also had teeth.
I also share Joe’s enthusiasm for the potential of a new generation of commitments that put forward initiatives such as specialized anti-corruption courts and anti-SLAPP provisions. Having said this, the judiciary itself needs to be open, independent and capable. In most countries that I’ve worked in, a good part of open government reforms fail precisely because of a dysfunctional judiciary system.
Diversity, collective intelligence and deliberative democracy
Part of the justification for models of deliberative democracy is their epistemic quality, that is, large and diverse crowds are smarter than the (elected or selected) few. A good part of this argument finds its empirical basis in the fantastic work by Scott Page.
But that’s not all. We know, for instance, that gender diversity on corporate boards improves firms’ performance, ethnic diversity produces more impactful scientific research, diverse groups are better at solving crimes, popular juries are less biased than professional judges, and politically diverse editorial teams produce higher-quality Wikipedia articles. Diversity also helps to explain classical Athens’ striking superiority vis-à-vis other city-states of its time, due to the capacity of its democratic system to leverage the dispersed knowledge of its citizens through sortition.
Now, a Nature article, “Algorithmic and human prediction of success in human collaboration from visual features”, presents new evidence of the power of diversity in problem-solving tasks. In the paper, the authors examine the patterns of group success in Escape The Room, an adventure game in which a group attempts to escape a maze by collectively solving a series of puzzles. The authors find that groups that are larger, older, and more gender-diverse are significantly more likely to escape. But there’s an exception: more age-diverse groups are less likely to escape. Intriguing, isn’t it?
Deliberative processes online: rough review of the evidence
As the pandemic pushes more deliberative exercises online, researchers and practitioners are starting to take more seriously the question of how effective online deliberation can be compared to in-person processes. Surprisingly, there are very few empirical studies comparing the two methods.
But a quick run through the literature offers some interesting insights. For instance, a 2004 online deliberative poll on U.S. foreign policy, and a traditional face-to-face deliberative poll conducted in parallel, presented remarkably similar results. A 2007 experiment comparing online and face-to-face deliberation found that both approaches can increase participants’ issue knowledge, political efficacy, and willingness to participate in politics. A similar comparison from 2009, looking at deliberation over the construction of a power plant in Finland, found considerable resemblance in the outcomes of online and face-to-face processes. A study published in 2012 on waste treatment in France found that, compared to the offline process, online deliberation was more likely to: i) increase women’s interventions, ii) promote the justification of arguments, and iii) be oriented towards the common good (although in this case the processes were not similar in design).
The external validity of these findings, however encouraging they may be, remains an empirical question, particularly given that since these studies were conducted the technology used to support deliberations has in many cases changed (e.g. from written to “zoomified” deliberations). Anyhow, kudos should go to the researchers who started engaging with the subject well over a decade ago: if that work was a niche subject then, its importance now is blatantly obvious.
(BTW, on a related issue, here’s a fascinating 2021 experiment examining whether online juries can make consistent, repeatable decisions: interestingly, deliberating groups are much more consistent than non-deliberating groups)
Fixing the Internet?
Anne Applebaum and Peter Pomerantsev published a great article in The Atlantic on the challenges posed to democracy by an Internet model that fuels disinformation and polarization, and on alternative paths to address them. I was thankful for the opportunity to make a modest contribution to such a nice piece.
Of course, that doesn’t mean we shouldn’t be concerned about the effects of the Internet in politics. For instance, a new study in the American Political Science Review finds that radical right parties benefit more than any other parties from malicious bots on social media.
Open democracy
2021 continues to be a good year for the proponents of deliberative democracy, with growing coverage of the subject in the mainstream media, in part fueled by the recent launch of Hélène Landemore’s great book “Open Democracy.” Looking for something to listen to? Look no further and listen to this interview by Ezra Klein with Hélène.
A dialogue among giants
The recording of the roundtable Contours of Participatory Democracy in the 21st Century is now available. The conversation between Jane Mansbridge, Mark Warren and Cristina Lafont can be found here.
Democracy and design thinking
Speaking of giants, the new book by Michael Saward, “Democratic Design”, is finally out. I’m a big fan of Michael’s work, so my recommendation may be biased. In this new book, Michael brings design thinking together with democratic theory and practice. If the design of democratic institutions is one of your topics, you should definitely check it out!
Civic Tech
I was thrilled to have the opportunity to deliver a lecture at the Center for Collective Learning – Artificial and Natural Intelligence Institute. My presentation, Civic Technologies: Past, Present and Future, can be found here.
Scholarly articles:
And finally, for those who really want to geek-out, a list of 15 academic articles I enjoyed reading:
Modern Grantmaking: That’s the title of a new book by Gemma Bull and Tom Steinberg. I had the privilege of reading snippets of this, and I can already recommend it not only to those working with grantmaking, but also to pretty much anyone working in the international development space.
Lectures: The Center for Collective Learning has a fantastic line-up of lectures open to the public. Find out more here.
Learning from Togo: While unemployment benefits websites were crashing in the US, the Togolese government showed how to leverage mobile money and satellite data to effectively get cash into the hands of those who need it the most.
Nudging the nudgers: British MPs are criticising academics for sending them fictitious emails for research. I wonder if part of their outrage is not just about the emails, but about what the study could reveal in terms of their actual responsiveness to different constituencies.
DataViz: Bringing data visualization to physical/offline spaces has been an obsession of mine for quite a while. I was happy to come across this project while doing some research for a presentation.
I’ve recently been exchanging with some friends on a list of favorite reads from 2020. While I started with a short list, it quickly grew: after all, despite the pandemic, there has been lots of interesting stuff published in the areas that I care about throughout the year. While the final list of reads varies in terms of subjects, breadth, depth and methodological rigor, I picked these 46 for different reasons. These include my personal judgement of their contribution to the field of democracy, or simply a belief that some of these texts deserve more attention than they currently receive. Others are in the list because I find them particularly surprising or amusing.
As the list is long – and probably at this length, unhelpful to my friends – I tried to divide it into three categories: i) participatory and deliberative democracy, ii) civic tech and digital democracy, and iii) miscellaneous (which is not really a category, let alone a very helpful one, I know). In any case, many of the titles are indicative of what the texts are about, which should make it easier to navigate through the list.
These caveats aside, below is the list of some of my favorite books and articles published in 2020:
Participatory and Deliberative Democracy
While I still plan to make a similar list for representative democracy, this section of the list is intentionally focused on democratic innovations, with a certain emphasis on citizens’ assemblies and deliberative modes of democracy. While this reflects my personal interests, it is also in part due to the recent surge of interest in citizens’ assemblies and other modes of deliberative democracy, and the academic production that followed.
With a newly elected President and the most fragmented Parliament in its history, Brazilian politics are likely headed for gridlock. Lottery could well be the solution.
Tiago Peixoto and Guilherme Lessa
For many Brazilians who recently cast their ballots to elect a new President, the choice was between the unacceptable and the scandalous. Mr. Bolsonaro, the winning candidate, received 39.3% of votes, while abstentions, null and blank votes accounted for 28.5%. A record 7.4% of votes were null, the largest percentage since Brazil’s transition to democracy in the late 1980s. Considering that voting is compulsory in Brazil, these figures signal a deep and persistent disbelief in democracy as a means to improve the life of the average citizen. It would not be unthinkable for a voter, asked at the polling station about her preferred candidate, to respond: “I’d rather randomly pick any Brazilian to run the country.”
The idea may seem absurd, or a symptom of the ideological schizophrenia that now ravages Brazil, where the two contenders for the highest office were diametrically opposed and their supporters’ main argument was “the other is worse.” Research conducted in the United States indicates that the electorate’s mistrust of their representatives is far from being a Brazilian idiosyncrasy: 43% of American voters state they would trust a group of people randomly selected through a lottery more than they trust elected members of the Executive or Legislative.
Many political scientists view this as a symptom of a global crisis of representation, a growing distance between representatives and the represented, both part of a machine mediated by parties that are disconnected from everyday life and often involved in corruption scandals. While political parties suffer from declining membership, political campaigns are increasingly dependent on large donations and mass media – all of which can be managed without the engagement of everyday citizens. The disconnect between citizens and their representatives has driven the international success of candidates who claim to be political outsiders (even if they are not) and private sector meritocrats.
Representative democracy has always suffered from an inherent contradiction: electoral processes do not generate representative results. Think of the teacher who asks her students who wants to be the class representative. Only one or two students raise their hands. Being a representative does not require broad knowledge of the reality of the represented but, rather, an extroverted and sociable personality that lends itself to the role to be played. In the case of elections, the availability of time and money for campaigning, as well as support from the party machinery, are also strong predictors of who gets to run and, most importantly, who gets to win.
The bias generated by electoral processes can take several forms, but it is particularly visible in terms of gender, race, and income. For instance, despite high turnover in the Brazilian Legislative, the numbers remain disheartening. While half of the population is female, women hold a meager 15% of seats in the House of Representatives. Similarly, 75% of House members identify as white, compared to 44% of the Brazilian population. The mismatch is not unique to Brazil. As reported by Nicholas Carnes in his recent book The Cash Ceiling, in the United States millionaires represent only three percent of the population but a majority of Congress, while working-class people make up half of US citizens but only two percent of members of Congress.
The denial of politics that follows from this disconnect demonstrates the extent to which inclusiveness in politics matters, and it brings about some worrisome consequences. Heroic exceptions aside, the election of new representatives generally fails to alter the propensity of the electoral machine to reproduce its own logic. The Brazilian electoral system, like that of other modern democracies, continues to produce legislative bodies that fail to represent the diversity of their electorate. Changing politicians does not necessarily imply changing politics.
Fixing this imbalance between the electorate and the elected is a complex matter with which many scholars of democracy have grappled. An increasingly popular proposal among political scientists is the use of lottery as a complementary means to select Legislative representatives. Proponents of this approach describe several advantages, of which three are worth highlighting. First, a body of representatives selected by lottery would be more representative of the population as a whole, resulting in agendas and policies that are more closely aligned with societal concerns. Second, the influence of money in campaigning – a constant source of scandal and corruption – would be eliminated. Finally, and in line with well-established research in the field of decision-making, a more diverse legislative body would be collectively smarter, generating decisions that could maximize the public good.
But how would this work in practice?
“Let’s hold a lottery!”, says the spokesperson for today’s miracle solution. Lottery, after all, does have its precedents in democracy’s formative history. For over a century in classical Athens, randomly selected citizens were responsible for important advances in legislation and public policy. Similarly, at its height, the Republic of Florence used lottery to allocate some of the most important positions in the Executive, Legislative and Judiciary. Today, several countries use juries composed of randomly selected citizens as a means to ensure impartiality and efficacy within the Judiciary.
Globally, we see hundreds of inspiring experiences in which randomly selected citizens deliberate on issues of public interest: in Ireland and Mongolia to guide constitutional reforms; in Canada to inform changes in electoral legislation; in Australia to develop public budgets; and in the United States to support citizens’ legislative initiatives.
Naturally, such a complex and somewhat unexpected proposal brings about a challenging question: how can it be implemented in a way that results in a more representative Legislative? Changing the rules of the game, as we all know, is not a trivial task. Political reform, even if thoroughly thought through, still depends on the approval of those who benefit the most from the status quo.
The proponents of lottery selection rarely advocate for the direct substitution of members of parliament by randomly selected citizens. Pragmatically, they usually call for the implementation of intermediary strategies, such as the use of citizens’ panels as complementary decision-making processes.
So why not try it?
It is an established fact that Mr. Bolsonaro will be faced with one of the most fragmented congresses in Brazilian history. While his initial popularity may allow the president-elect to pass reforms in the first few months of his mandate, decision paralysis and political gridlock seem inevitable in the years to come. What risk, then, would a panel of randomly selected citizens with a voice and a vote in congressional committees dealing with specific policies, such as environment and education, pose? Like a jury, such a panel would dedicate its time to understanding the facts relating to the subject at hand, listen to different positions, formulate amendments, and potentially cast votes on the most divisive issues. It would represent a microcosm of Brazilian public opinion in an environment that is informed, egalitarian, and civilized. Although unlikely, such a reform could be the first step towards strengthening the (increasingly weak) link between representatives and the represented.
*Article translated and adapted from original, published in Revista E, ed. 2400, October 2018.
Voting in Rio Grande do Sul’s Participatory Budgeting (picture by Anderson Lopes)
Here are two newly published papers that my colleagues Jon Mellon, Fredrik Sjoberg, and I have been working on.
The first, The Effect of Bureaucratic Responsiveness on Citizen Participation, published in Public Administration Review, is – to our knowledge – the first study to quantitatively assess at the individual level the often-assumed effect of government responsiveness on citizen engagement. It also describes an example of how the data provided through digital platforms may be leveraged to better understand participatory behavior. This is the fruit of a research collaboration with MySociety, to whom we are extremely thankful.
Below is the abstract:
What effect does bureaucratic responsiveness have on citizen participation? Since the 1940s, attitudinal measures of perceived efficacy have been used to explain participation. The authors develop a “calculus of participation” that incorporates objective efficacy—the extent to which an individual’s participation actually has an impact—and test the model against behavioral data from the online application Fix My Street (n = 399,364). A successful first experience using Fix My Street is associated with a 57 percent increase in the probability of an individual submitting a second report, and the experience of bureaucratic responsiveness to the first report submitted has predictive power over all future report submissions. The findings highlight the importance of responsiveness for fostering an active citizenry while demonstrating the value of incidentally collected data to examine participatory behavior at the individual level.
An earlier, ungated version of the paper can be found here.
The second paper, Does Online Voting Change the Outcome? Evidence from a Multi-mode Public Policy Referendum, has just been published in Electoral Studies. In an earlier JITP paper (ungated here) looking at Rio Grande do Sul State’s Participatory Budgeting – the world’s largest – we show that, when compared to offline voting, online voting tends to attract participants who are younger, male, of higher income and educational attainment, and more frequent social media users. Yet one question remained: does the inclusion of new participants with a different profile change the outcomes of the process (i.e. which projects are selected)? Below is the abstract of the paper.
Do online and offline voters differ in terms of policy preferences? The growth of Internet voting in recent years has opened up new channels of participation. Whether or not political outcomes change as a consequence of new modes of voting is an open question. Here we analyze all the votes cast both offline (n = 5.7 million) and online (n = 1.3 million) and compare the actual vote choices in a public policy referendum, the world’s largest participatory budgeting process, in Rio Grande do Sul in June 2014. In addition to examining aggregate outcomes, we also conducted two surveys to better understand the demographic profiles of who chooses to vote online and offline. We find that policy preferences of online and offline voters are no different, even though our data suggest important demographic differences between offline and online voters.
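To make the aggregate comparison concrete, here is a rough sketch, using made-up project-level vote counts rather than the referendum data, of how one might check whether online and offline voters favour the same projects:

```python
# Sketch: do online and offline voters pick the same projects?
# Hypothetical vote counts per project; not the actual PB results.
import pandas as pd
from scipy.stats import spearmanr

votes = pd.DataFrame({
    "project": ["road", "school", "clinic", "park", "transit"],
    "offline_votes": [1_200_000, 950_000, 1_800_000, 700_000, 1_050_000],
    "online_votes":  [310_000, 220_000, 400_000, 150_000, 220_000],
})

# Convert raw counts into within-channel vote shares
votes["offline_share"] = votes["offline_votes"] / votes["offline_votes"].sum()
votes["online_share"] = votes["online_votes"] / votes["online_votes"].sum()

# If preferences are similar, shares (and rankings) should track each other closely
rho, p = spearmanr(votes["offline_share"], votes["online_share"])
print(votes[["project", "offline_share", "online_share"]].round(3))
print(f"Spearman rank correlation across projects: {rho:.2f} (p = {p:.3f})")
```

A high rank correlation between the two channels would be consistent with the paper’s finding that adding a demographically different pool of online voters does not change which projects win.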
The extent to which these findings are transferable to other PB processes that combine online and offline voting remains an empirical question. Nonetheless, they suggest a more nuanced view of the potential effects of digital channels as a supplementary means of engagement in participatory processes. I hope to share an ungated version of the paper in the coming days.
A major argument for democratic governance is that more citizen participation leads to better outcomes through an improved alignment between citizens’ preferences and policies. But how does that play out in practice? Looking at the effects of the introduction of electronic voting (EV) in Brazil, a paper by Thomas Fujiwara (Princeton) sheds light on this question. Entitled “Voting Technology, Political Responsiveness, and Infant Health: Evidence from Brazil” (2013), it is one of the best papers I’ve read when it comes to bringing together the issues of technology, participation and development outcomes.
Below is an extract from the paper:
This paper provides evidence on how improving political participation can lead to better service outcomes. It estimates the effects of an electronic voting, or EV, technology in reducing a mundane, but nonetheless important, obstacle to political participation: difficulty in operating ballots. The results indicate that EV caused a large de facto enfranchisement of less educated voters, which led to the election of more left-wing state legislators, increased public health care spending, utilization (prenatal visits), and infant health (birth weight).
While filling out a ballot may be a trivial task to educated citizens in developed countries, the same is not true in Brazil, where 23% of adults are “unable to read or write a simple note” and 42% did not complete the 4th grade. Moreover, before 1994 Brazilian paper ballots required voters to write a candidate’s name or electoral number and involved only written instructions. This resulted in a substantial quantity of error-ridden and blank ballots being cast, generating a large number of residual votes (not assigned to a candidate and discarded from the tallying of results).
In the mid-1990s, the Brazilian government developed an EV technology as a substitute for paper ballots. While its introduction aimed at reducing the time and costs of vote counting, other features of the technology, such as the use of candidates’ photographs as visual aids, the use of “error” messages for voters about to cast residual votes, and step-by-step guidance through the voting process, facilitated voting and reduced errors.
(…) Estimates indicate that EV reduced residual voting in state legislature elections by a magnitude larger than 10% of total turnout. Such effect implies that millions of citizens who would have their votes go uncounted when using a paper ballot were de facto enfranchised. Consistent with the hypothesis that these voters were more likely to be less educated, the effects are larger in municipalities with higher illiteracy rates. Moreover, EV raises the vote shares of left-wing parties.
The paper goes on to argue that this enfranchisement of the less educated citizenry did indeed affect public policy. (…) I focus on state government spending, in particular on an area that disproportionately affects the less educated: health care. Poorer Brazilians rely mostly on a publicly funded system for health care services, while richer voters are substantially more likely to use the co-existing private services. The less educated thus have relatively stronger preferences for increased public health care provision, and political economy models predict that increasing their participation leads to higher public spending in this area.
Using data from birth records, I also find that EV raised the number of prenatal visits by women to health professionals and lowered the prevalence of low-weight births (below 2,500g), an indicator of newborn health. Moreover, these results hold only for less educated mothers, and I find no effects for the more educated, supporting the interpretation that EV led to benefits specifically targeted at poorer populations.
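Fujiwara’s identification strategy is considerably more careful than this, but as a toy illustration of the municipality-level pattern described in the extract (hypothetical data, not the paper’s actual research design), one could regress residual-vote rates on an EV indicator interacted with illiteracy and look for a negative interaction term:

```python
# Toy illustration only: EV use, illiteracy, and residual (uncounted) votes
# at the municipality level. Hypothetical data; not the paper's specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 2_000
illiteracy = rng.uniform(0.05, 0.40, n)     # share of adults unable to read or write
uses_ev = rng.integers(0, 2, n)             # 1 if the municipality used electronic voting

# Simulate residual-vote rates that fall with EV, more so where illiteracy is high
residual_rate = (0.08 + 0.30 * illiteracy
                 - uses_ev * (0.02 + 0.20 * illiteracy)
                 + rng.normal(0, 0.01, n))

df = pd.DataFrame({"residual_rate": residual_rate,
                   "uses_ev": uses_ev,
                   "illiteracy": illiteracy})

# A negative interaction term indicates larger EV effects in high-illiteracy places
ols = smf.ols("residual_rate ~ uses_ev * illiteracy", data=df).fit()
print(ols.params.round(3))
```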
Fujiwara’s findings are great for a number of reasons, some of which I highlight below:
Participation and policy preferences: The findings in this paper support the argument for democratic governance, showing that an increase in the participation of poorer segments of society ultimately leads to better service results.
Institutions and context: The paper indirectly highlights how innovations are intrinsically linked to institutions and their context. For instance, as noted by Fujiwara, “the effect of EV is larger in the proportional representation races where a paper ballot requires writing down the name or number of the candidate (lower chamber of congress and state legislature) than in the plurality races where a paper ballot involves checking a box (senate, governor, and president).” In other words, the electoral system matters, and the Brazilian outcomes would most likely be replicated in countries with similar electoral processes (and levels of ballot complexity), rather than in those using plurality voting systems. (If I remember correctly, this was one of the findings of an unpublished paper by Daniel Hidalgo comparing the effects of e-voting in Brazil and India: the effects of e-voting for lower-house elections in India [plurality vote] were smaller than in Brazil.) In a similar vein, the effects of introducing similar technology would probably be smaller in places where poorer segments of society have higher levels of educational attainment.
Technology and elections: Much of the work on technology and accountability revolves around non-electoral activities that are insulated from existing processes and institutions, which tends to reduce the chances of real-life impact. And, whether you like it or not, elections remain one of the most pervasive and consequential processes involving citizen participation in public affairs. There seems to be untapped potential for the use of technology to leverage electoral processes (beyond partisan campaigns). Finding ways to better inform voters (e.g. voting advice applications) and to lower the barriers to entry in electoral competition (why not a Rock the Vote for unlikely candidates?) are some of the paths that could be further explored. Fujiwara’s paper shows how technology can enhance development outcomes by building on top of existing institutions.
Technology and inclusion: For a number of people working with development and public policy, a major concern with technology is the risk of exclusion of marginalized groups. While that is a legitimate concern, this paper shows the opposite effect, reminding us that it is less about technology and more about the use that one makes of it.
Unintended effects: The use of technology in governance processes is full of stories of unintended effects. Most of them are negative, epitomized by the digitization of land records in Bangalore [PDF]: instead of transparency and efficiency, it led to increased corruption and inefficiencies. Fujiwara’s paper shows that unexpected benefits are also possible. While the primary goal of introducing e-voting in Brazil was to cut the costs and time of vote counting, a major unanticipated impact was better service outcomes. Unintended effects are often overlooked by practitioners and researchers alike, and this paper highlights the need to look for effects beyond those originally intended.
All of these points, added to the methodological approach adopted by Fujiwara, are good reasons to read the paper. You can find it here [PDF].
Rio Grande do Sul Participatory Budgeting Voting System (2014)
Within the open government debate, there is growing interest in the role of technology in citizen engagement. However, as interest in the subject grows, so does the superficiality of many of the conversations that follow. While the number of events on citizen engagement and technology keeps increasing, opportunities for in-depth discussion of the subject do not seem to be growing at the same rate.
This is why, a few weeks ago, I was pleased to visit the University of Westminster for a kick-off talk on “Technology and Participation: Friend or Foe?”, organized by Involve and the Centre for the Study of Democracy (Westminster). It was a pleasure to start a dialogue with a group willing to engage in a longer and more detailed discussion of the subject.
My talk covered a number of issues that have been keeping me busy recently. Credit for the preliminary quantitative work I presented should also go to the awesome team I am working with, which includes Fredrik Sjoberg (NYU), Jonathan Mellon (Oxford) and Paolo Spada (UBC / Harvard). For those who would like to see the graphs in more detail, I have also added here [PDF] the slides of my presentation.
I have skipped the video ahead to the beginning of my talk, but the discussion that followed is what made the event interesting. In my opinion, the contributions of Maria Nyberg (Head of Open Policy Making at the Cabinet Office) and Catherine Howe (Public-i), as well as those of the participants, were a breath of fresh air in the current citizen engagement conversation. So please bear with me and watch until the end.
I would like to thank Simon Burral (Involve) and Graham Smith (Westminster) for their invitation. Simon leads the great work being done at Involve, one of the best organizations working on citizen engagement today. And, to keep it short, Graham is the leading thinker on democratic innovations.
Below is also an excellent summary by Sonia Bussu (Involve), capturing some of the main points of my talk and the discussion that ensued (originally posted here).
***
“On technology and democracy
The title of yesterday’s event, organised by Involve and Westminster University’s Centre for the Study of Democracy, posed a big question, which inevitably led to several other big questions, as the discussion among a lively audience of practitioners, academics and policymakers unfolded (offline and online).
Tiago Peixoto, from the World Bank, kicked off the debate and immediately put the enthusiasm for new technologies into perspective. Back in 1795, the very first model of the telegraph, the Napoleonic semaphore, raised hopes for – and fears of – greater citizen engagement in government. Similarly the invention of the TV sparked debates on whether technology would strengthen or weaken democracy, increasing citizen awareness or creating more opportunities for market and government manipulation of public opinion.
Throughout history, technological developments have marked societal changes, but has technological innovation translated into better democracy? What makes us excited today about technology and participation is the idea that by lowering the transaction costs we can increase people’s incentives to participate. Tiago argued that this costs-benefits rationale doesn’t explain why people continue to vote, since the odds of their vote making a difference are infinitesimal (to be fair voter turnouts are decreasing across most advanced democracies – although this is more a consequence of people’s increasing cynicism towards political elites rather than their understanding of mathematical probabilities).*
So do new technologies mobilise more people or simply normalise the participation of those that already participate? The findings on the matter are still conflicting. Tiago showed us some data on online voting in Rio Grande do Sul participatory budgeting process in Brazil, whereby e-voting would seem to bring in new voters (supporting the mobilisation hypothesis) but from the same social strata (e.g. higher income and education – as per the normalisation hypothesis).
In short, we’re still pretty much confused about the impact of technology on democracy and participation. Perhaps, as suggested by Tiago and Catherine Howe from Public-i, the problem is that we’re focusing too much on technology, tempted by the illusion it offers to simplify and make democracy easy. But the real issue lies elsewhere, in understanding people and policymakers’ incentives and the articulation (or lack thereof) between technologies and democratic institutions. As emphasised by Catherine, technology without democratic evolution is like “lipstick on a pig”.
The gap between institutions and technology is still a big obstacle. Catherine reminded us how participation often continues to translate into one-way communication in government’s engagement strategies, which constrains the potential of new technologies in facilitating greater interaction between citizens and institutions and coproduction of policies as a response to increasing complexity. As academics and practitioners pitch the benefits of meaningful participation to policy makers, Tiago asked whether a focus on instrumental incentives might help us move forward. Rather than always pointing to the normative argument of deepening democracy, we could start using data from cases of participatory budgeting to show how greater participation reduces tax evasion and corruption as well as infant mortality.
He also made a methodological point: we might need to start using more effectively the vast array of data on existing engagement platforms to understand incentives to participation and people’s motivation. We might get some surprises, as findings demystify old myths. Data from Fix My Street would seem to prove that government response to issues raised doesn’t increase the likelihood of future participation by as much as we would assume (28%).** But this is probably a more complicated story, and as pointed out by some people in the audience the nature and salience of both the issue and the response will make a crucial difference.
Catherine highlighted one key problem: when we talk about technology, we continue to get stuck on the application layer, but we really need to be looking at the architecture layer. A democratic push for government legislation over the architecture layer is crucial for preserving the Internet as a neutral space where deeper democracy can develop. Data is a big part of the architecture and there is little democratic control over it. An understanding of a virtual identity model that can help us protect and control our data is key for a genuinely democratic Internet.
Maria Nyberg, from the Cabinet Office, was very clear that technology is neither friend nor foe: like everything, it really depends on how we use it. Technology is all around us and can’t be peripheral to policy making. It offers great opportunities to civil servants as they can tap into data and resources they didn’t have access to before. There is a recognition from government that it doesn’t have the monopoly on solutions and doesn’t always know best. The call is for more open policy making, engaging in a more creative and collaborative manner. Technology can allow for better and faster engagement with people, but there is no silver bullet.
Some people in the audience felt that the drive for online democracy should be citizen-led, as the internet could become the equivalent of a “bloodless guillotine” for politicians. But without net neutrality and citizen control over our own data there might be little space for genuine participation.
*This point was edited on 12/07/2014 following a conversation with Tiago.
** This point was edited on 12/07/2014 following a conversation with Tiago.”
—————————
I am also thankful to the UK Political Studies Association (PSA), Involve and the University of Westminster for co-sponsoring my travel to the UK. I will write more later about the Scaling and Innovation Conference organized by the PSA, where I was honored to be one of the keynote speakers along with MP Chi Onwurah (Shadow Cabinet Office Minister) and Professor Stephen Coleman (Leeds).
A little while ago I mentioned the launch of the Portuguese version of the book edited by Nelson Dias, “Hope for Democracy: 25 Years of Participatory Budgeting Worldwide”.
The good news is that the English version is finally out. Here’s an excerpt from the introduction:
This book represents the effort of more than forty authors, along with many other direct and indirect contributors spread across different continents, who seek to provide an overview of Participatory Budgeting (PB) around the world. They do so from different backgrounds: some are researchers, others are consultants, and others are activists connected to various groups and social movements. The texts reflect this diversity of approaches and perspectives well, and we have not tried to influence that.
(…)
The pages that follow are an invitation to a fascinating journey on the path of democratic innovation in very diverse cultural, political, social and administrative settings. From North America to Asia, Oceania to Europe, from Latin America to Africa, the reader will find many reasons to closely follow the proposals of the different authors.
While my perception may be biased, I believe this book will be a major contribution for researchers and practitioners in the fields of participatory budgeting and citizen engagement more generally. Congratulations to Nelson Dias and to all the others who contributed their time and energy.