Reflections on the representativeness of citizens’ assemblies and similar innovations

(Co-authored with Paolo Spada)

Introduction

For proponents of deliberative democracy, the last couple of years could not have been better. Propelled by the recent diffusion of citizens’ assemblies, deliberative democracy has gained popularity well beyond small circles of scholars and advocates. From CNN to the New York Times, the Hindustan Times (India), Folha de São Paulo (Brazil), and Expresso (Portugal), it is now difficult to keep up with all the interest in democratic models that promote the random selection of participants who engage in informed deliberation. A new “deliberative wave” is definitely here.

But with popularity comes scrutiny. And whether the deliberative wave will power new energy or simply crash onto the beach is an open question. As is the case with any democratic innovation (institutions designed to improve or deepen our existing democratic systems), critically examining assumptions is what allows for managing expectations and, most importantly, making gradual improvements.

Proponents of citizens’ assemblies put representativeness at the core of their definition. In fact, it is one of their main selling points. For example, a comprehensive report highlights that an advantage of citizens’ assemblies, compared to other mechanisms of participatory democracy, is their typical combination of random selection and stratification to form a public body that is “representative of the public.” This general argument resonates with the media and the wider public. A recent illustration is an article by The Guardian, which depicts citizens’ assemblies as “a group of people who are randomly selected and reflect the demographics of the population as a whole.”

It should be noted that claims of representativeness vary in their assertiveness. For instance, some may refer to citizens’ assemblies as “representative deliberative democracy,” while others may use more cautious language, referring to assemblies’ participants as being “broadly representative” of the population (e.g. by gender, age, education, attitudes). This variation in terms used to describe representativeness should prompt an attentive observer to ask basic questions such as: “Are existing practices of deliberative democracy representative?” “If they are ‘broadly’ representative, how representative are they?” “What criteria, if any, are used to assess whether a deliberative democracy practice is more or less representative of the population?” “Can their representativeness be improved, and if so, how?” These are basic questions that, surprisingly, have been given little attention in recent debates surrounding deliberative democracy. The purpose of this article is to bring attention to these basic questions and to provide initial answers and potential avenues for future research and practice.

Citizens’ assemblies and three challenges of random sampling

Before discussing the subject of representativeness, it is important to provide some conceptual clarity. From an academic perspective, citizens’ assemblies are a variant of what political scientists normally refer to as “mini-publics.” These are processes in which participants: 1) are randomly selected (often combined with some form of stratification), 2) participate in informed deliberation on a specific topic, and 3) reach a public judgment and provide recommendations on that topic. Thus, in this text, mini-publics serves as a general term for a variety of practices such as consensus conferences, citizens’ juries, planning cells, and citizens’ assemblies themselves.

In this discussion, we will focus on what we consider to be the three main challenges of random sampling. First, we will examine the issue of sample size and the limitations of stratification in addressing this challenge. Second, we will focus on sampling error, which is the error that occurs when observing a sample rather than the entire population. Third, we will examine the issue of non-response, and how the typically small sample size of citizens’ assemblies exacerbates this problem. We conclude by offering alternative ways of approaching the trade-offs associated with mini-publics’ representativeness dilemma.

  1. Minimal sample size, and why stratification does not help reduce sample size requirements in complex populations

Most mini-publics that we know of have a sample size of around 70 participants or less, with a few cases having more than 200 participants. However, even with a sample size of 200 people, representing a population accurately is quite difficult. This may be the reason why political scientist Robert Dahl, who first proposed the use of mini-publics over three decades ago, suggested a sample size of 1000 participants. This is also the reason why most surveys that attempt to represent a complex national population have a sample size of over 1000 people. 

To understand why representing a population accurately is difficult, consider that a sample size of approximately 370 individuals is enough to estimate a parameter of a population of 20,000 with a 5% error margin and 95% confidence level (for example, estimating the proportion of the population that answers “yes” to a question). However, if the desired error margin is reduced to 2%, the sample size increases to over 2,000, and for a more realistic population of over 1 million, a sample size of over 16,000 is required to achieve a 1% error margin with 99% confidence. Although the size of the sample required to estimate simple parameters in surveys does not increase significantly with the size of the population, it still increases beyond the sample sizes currently used in most mini-publics. Sample size calculators are available online to demonstrate these examples without requiring any statistical knowledge. 
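
For readers who want to verify these figures without an online calculator, here is a minimal Python sketch of the standard sample-size calculation (Cochran's formula with a finite population correction); exact outputs vary slightly with rounding conventions.

```python
import math

def required_sample_size(population, margin_of_error, z, p=0.5):
    """Cochran's sample-size formula with a finite population correction.

    population      -- size of the population to be represented
    margin_of_error -- desired error margin (e.g. 0.05 for 5%)
    z               -- z-score of the confidence level (1.96 for 95%, 2.576 for 99%)
    p               -- assumed proportion; 0.5 is the most conservative choice
    """
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    n = n0 / (1 + (n0 - 1) / population)  # finite population correction
    return math.ceil(n)

print(required_sample_size(20_000, 0.05, 1.96))      # ≈ 377 participants
print(required_sample_size(20_000, 0.02, 1.96))      # ≈ 2,144 participants
print(required_sample_size(1_000_000, 0.01, 2.576))  # ≈ 16,300 participants
```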

Stratification is a strategy that can help reduce the error margin and achieve better precision with a fixed sample size. However, stratification alone cannot justify the very small sample sizes that are currently used in most mini-publics (70 or less).

To understand why, let’s consider that we want to create a sample that represents five important strata of the population, such as ethnicity, age, income, geographical location, and gender, and includes all their intersections. For simplicity, let’s assume that the first four strata each contain five equal groups in society, and that gender is composed of two equal groups. The minimal sample required to include the intersections of all the strata and represent this population is equal to 5^4×2=1250. Note that we have maintained the somewhat unlikely assumption that all categories are of equal size. If one stratum, such as ethnicity, includes a minority that is 1/10 of the population, then our multiplier would be 10 instead of 5, requiring a sample size of 5^3×10×2=2500.

This multiplier is independent of the number of categories within the stratum, so even if a stratum has only two categories, one comprising 90% (9/10) of the population and one comprising 10% (1/10), the multiplier would still be 10. When we want to represent a minority of 1% (1/100) of the population, the multiplier becomes 100. Note that this minimal sample size would include the intersection of all the strata in such a population, but such a small sample will not be representative of each stratum. To achieve stratum-level representation, we need to increase the number of people for each stratum following the same mathematical rules we used for simple sampling, as described at the beginning of this section, generating a required sample size in the order of hundreds of thousands of people (in our example above, 370×2500=925,000).
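
A minimal sketch of the arithmetic above, which simply follows the rule used in the text that each stratum's multiplier is driven by the share of its smallest category:

```python
from math import prod

def intersection_cells(smallest_shares):
    """Minimal sample needed to cover every intersection of the strata,
    where each stratum contributes a multiplier of 1 / (share of its smallest category)."""
    return prod(round(1 / s) for s in smallest_shares)

# Four strata with five equal categories each, plus gender with two equal categories:
equal_case = intersection_cells([1/5, 1/5, 1/5, 1/5, 1/2])      # 5**4 * 2 = 1250
# One stratum now contains a 10% minority, so its multiplier becomes 10:
minority_case = intersection_cells([1/10, 1/5, 1/5, 1/5, 1/2])  # 5**3 * 10 * 2 = 2500

# Making each cell representative in its own right multiplies this by the
# per-cell sample size from the previous section (≈370 for a 5% margin, 95% confidence):
print(equal_case, minority_case, 370 * minority_case)           # 1250 2500 925000
```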

And this is without even entering the discussion of what the ideal set of strata should be in order to achieve legitimacy. Should we also include attitudes such as liberal vs. conservative? Opinions on the topic of the assembly? Personality type? Education? Income? Previous level of engagement in politics? In sum, the more complex the population, the larger the sample required to represent it.

  2. Sampling error due to a lack of a clear population list

When evaluating sampling methods, it is important to consider that creating a random sample of a population requires a starting population to draw from. In some fields, the total population is well-defined and data is readily available (e.g. students in a school, members of parliament), but in other cases such as a city or country, it becomes more complicated.

The literature on surveys contains multiple publications on sampling issues, but for our purposes, it is sufficient to note that without a police state or similar means of collecting an unprecedented amount of information on citizens, creating a complete list of people in a country to draw our sample from is impossible. All existing lists (e.g. electoral lists, telephone lists, addresses, social security numbers) are incomplete and biased.

This is why survey companies charge significant amounts of money to allow customers to use their model of the population, which is a combination of multiple subsamples that have been optimized over time to answer specific questions. For example, a survey company that specializes in election forecasting will have a sampling model optimized to minimize errors in estimating parameters of the population that might be relevant for electoral studies, while a company that specializes in retail marketing will have a model optimized to minimize forecasting errors in predicting sales of different types of goods. Each model will draw from different samples, applying different weights according to complex algorithms that are optimized against past performance. However, each model will still be an imperfect representation of the population.

Therefore, even the best possible sampling method will have an inherent error. It is difficult, if not impossible, to perfectly capture the entire population, so our samples will be drawn from a subpopulation that carries biases. This problem is further accentuated for low-cost mini-publics that cannot afford expensive survey companies or do not have access to large public lists like electoral or census lists. These mini-publics may have a very narrow and biased initial subpopulation, such as only targeting members of an online community, which brings its own set of biases.

  3. Non-response

A third factor, well-known among practitioners and community organizers, is the fact that receiving an invitation to participate does not mean a person will take part in the process. Thus, any invitation procedure has issues of non-participation. This is probably the most obvious factor that prevents one from creating representative samples of the population. In mini-publics with large samples of participants, such as Citizens’ Assemblies, the conversion rate is often quite low, sometimes less than 10%. By conversion rate, we mean the percentage of the people contacted who say that they are willing to participate and enter the recruitment pool. Simpler mini-publics of shorter duration (e.g. one weekend) often achieve higher engagement. A dataset on conversion rates of mini-publics does not exist, but our own experience in organizing Citizens’ Assemblies, Deliberative Polls, and their clones tells us that it is possible to achieve more than 20% conversion when the topic is very controversial. For example, in the UK’s Citizens’ Assembly on Brexit in 2017, 1,155 people agreed to enter the recruitment pool out of the 5,000 contacted, generating a conversion rate of 23.1%, as illustrated below.[1]

Figure 1: Contact and recruitment numbers for the UK’s Citizens’ Assembly on Brexit (Renwick et al. 2017)

We do not pretend to know all the existing cases, and so this data should be taken with caution. Maybe there have been cases with 80% conversion, given it is possible to achieve such rates in surveys. But even in such hypothetical best practices, we would have failed to engage 20% of the population. More realistically, with 10 to 30% engagement, we are just engaging a very narrow subset of the population.

Frequently asked questions, and why we should not abandon sortition

It is clear from the points above that the assertion that the current generation of relatively small mini-publics is representative of the population from which it is drawn is questionable. Not surprisingly, the fact that participants of mini-publics differ from the population they are supposed to represent was already documented over a decade ago.[2] However, in our experience, when confronted with these facts, practitioners and advocates of mini-publics often raise various questions. Below, we address five frequently asked questions and provide answers to them.

  1. “But people use random sampling for surveys and then claim that the results are representative, what is the difference for mini-publics?”

The first difference between surveys and mini-publics, already discussed above, is that surveys that aim to represent a large population use larger samples.

The second difference, less obvious, is that a mini-public is not a system that aggregates fixed opinions. Rather, one of the core principles of mini-publics is that participants deliberate and their opinions may change as a result of the group process and composition. Our sampling procedures, however, are designed for the task of estimating population parameters, not generating input for legitimate decision-making. While a 5% error margin with a 95% confidence level may be acceptable in a survey investigating the proportion of people who prefer one policy over another, this same measure cannot be applied to a mini-public because participants may change their opinions through the deliberation process. A mini-public is not an estimate derived from a simple mathematical formula, but rather a complex process of group deliberation that may transform input preferences into output preferences and potentially lead to important decisions. Cristina Lafont has used a similar argument to criticize even an ideal sample that achieves perfect input representativeness.[3]

  2. “But we use random assignment for experiments and then claim that the results are representative, what is the difference for mini-publics?”

Mini-publics can be thought of as experiments, similar to clinical trials testing the impact of a vaccine. This approach allows us to evaluate the impact of a mini-public on a subset of the population, providing insight into what would happen if a similar subset of the population were to deliberate. Continuing this metaphor, if the mini-public participants co-design a new policy solution and support its implementation, any similar subsets of the population going through an identical mini-public process should generate a similar output.

However, clinical trials require that the vaccine and a placebo be randomly assigned to treatment and control groups, and that participants not be able to self-select into either experimental arm. The results, in turn, only generalize to the wider population if the participants are drawn from a representative sample.

Unfortunately, few mini-publics compare the decisions made by members to those who were not selected, and this is not considered a key element for claiming representativeness or legitimacy. Furthermore, while random assignment of treatment and control is crucial for internal validity, it does not guarantee external validity. That is, the results may not be representative of the larger population, and the estimate of the treatment effect only applies to the specific sample used in the experiment. 

While the metaphor of the experiment as a model to interpret mini-publics is preferable to the metaphor of the survey, it does not solve the issue of working with non-representative samples in practice. Therefore, we must continue to explore ways to improve the representativeness of mini-publics and take into account the limitations of the experimental metaphor when designing and interpreting their results.

  3. “Ok, mini-publics may not be perfect, but are they not clearly better than other mechanisms?”

Thus far, we have provided evidence that the claim of mini-publics as representative of the population is problematic. But what about more cautious claims, such as mini-publics being more inclusive than other participatory processes (e.g., participatory budgeting, e-petitions) that do not employ randomization? Many would agree that traditional forms of consultation tend to attract the “usual suspects”: citizens who have a higher interest in politics, more spare time, higher education, enjoy talking in public, and sometimes enjoy any opportunity to criticize. In the US, for instance, these citizens are often older white males, or, as a practitioner once put it, “the male, pale and stale.” A typical mini-public instead manages to engage a more diverse set of participants than traditional consultations. While this is true, the engagement strategies of mini-publics and of traditional consultations based on self-selection differ greatly in sophistication and cost. Mini-publics tend to invest more resources in engagement, sometimes tens of thousands of dollars, and thus we cannot exclude the possibility that existing results in terms of inclusion are purely due to better outreach techniques, such as mass recruitment campaigns and stipends for participants.

Therefore, it is not fair to compare traditional consultations to mini-publics, just as it is not fair to compare mini-publics that are not specifically designed to include marginalized populations with open-to-all processes that are designed precisely for that purpose. In that case, the classic critique by feminist, intersectional and social movement scholars, namely that mini-public design does not consider existing inequalities and is thus inferior to dedicated processes of minority engagement, is valid. This is because the amount of resources dedicated to engagement is positively correlated with inclusion. For instance, processes specifically designed for immigrants and native populations will have more inclusive results than a general random selection strategy that has no specific quotas or engagement strategies for these groups.

We talk past one another when we try to rank processes with respect to their supposed inclusion performance without considering the impact of the resources dedicated to engagement or their intended effects (e.g. redistribution, collective action).

It is also difficult to determine which approach is more inclusive without a significant amount of research comparing different participatory methods with similar outreach and resources. As far as we know, the only study that compares two similar processes – one using random engagement and the other using an open-to-all invitation – found little difference in inclusiveness.[4] It also highlighted the importance of other factors such as the design of the process, potential political impact, and the topic of discussion. Many practitioners do not take these factors into account, and instead focus solely on recruitment strategies. While one study is not enough to make a conclusive judgment, it does suggest that the assumption that mini-publics using randomly selected participants are automatically more inclusive than open-to-all processes is problematic.

  4. “But what about the ergonomics of the process and deliberative quality? Small mini-publics are undeniably superior to large open-to-all meetings.”

One of the frequently advertised advantages of small mini-publics is their capacity to support high-quality deliberation and include all members of the sample in the discussion. This is a very clear advantage; however, it has nothing to do with random sampling. It is not difficult to imagine a system in which an open-to-all meeting is called and then such a meeting selects a smaller number of representatives that will proceed to discuss using high-quality deliberative procedures. The selection rule could include quotas so that the selected members respect criteria of diversity of interest (even though, as we argued before, that would not be representative of the entire group). The ergonomics and inclusion advantages are purely linked with the size of the assembly and the process used to support deliberation.

  5. “So, are you saying we should abandon sortition?”

We hope that it is now clearer why we contend that it is conceptually erroneous to defend the application of sortition in mini-publics based on their statistical representation of the population. So, should sortition be abandoned? Our position is that it should not, for a less obvious and somewhat counterintuitive reason: random sampling offers a fair way to exclude certain groups from the mini-public. This matters because, in certain cases, participatory mechanisms based on self-selection may be captured by organized minorities to the detriment of disengaged majorities.

Consider, for instance, one of President Obama’s first attempts to engage citizens at a large scale, the White House’s online town hall. Through a platform named “Open for Questions,” citizens were able to submit questions to Obama and vote on which questions they would most like him to answer. Over 92,000 people posted questions, and about 3.6 million votes were cast for and against those questions. In the “budget” section, seven of the ten most popular queries were about legalizing marijuana, many of them about taxing it. The popularity of this issue was attributed to a campaign led by NORML, an organization advocating for pot legalization. While the cause and the ideas may be laudable, it is fair to assume that this was hardly the biggest budgetary concern of Americans in the aftermath of an economic downturn.

(Picture by Pete Souza, Wikimedia Commons)

In a case like the White House’s town hall, randomizing who participates would be a fair and effective way to avoid the capture of the dialogue by organized groups. Randomization does not completely exclude the possibility of capturing a deliberative space, but it does increase the cost of doing so. Because the probability that members of a small organized minority are randomly sampled into a mini-public is low, so are the odds of their dominating it. Thus, even if we had a technological solution capable of organizing large-scale deliberation in the millions, a randomization strategy could still be an effective means of protecting deliberation from capture by organized minorities. A legitimate method of exclusion will remain an asset, at least until we have another legitimate way to mitigate the ability of small, organized minorities to bias deliberation.
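
As a back-of-the-envelope illustration of this point (the numbers are hypothetical, and sampling without replacement is approximated here by a binomial draw), the sketch below shows how unlikely it is that a small organized group ends up with a sizeable bloc of seats in a randomly sampled assembly:

```python
from math import comb

def prob_at_least(k, seats, share):
    """P(a group making up `share` of the sampling frame wins at least `k`
    of `seats` randomly allocated seats), under a binomial approximation."""
    return sum(comb(seats, i) * share**i * (1 - share)**(seats - i)
               for i in range(k, seats + 1))

seats, share = 100, 0.005              # hypothetical: 100-seat assembly, group of 0.5%
print(seats * share)                   # expected number of the group's members: 0.5
print(prob_at_least(5, seats, share))  # probability of 5 or more members: well below 0.1%
```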

The way forward for mini-publics: go big or go home?

There is clearly a case for increasing the size of mini-publics to improve their ability to represent the population. But there is also a trade-off between the size of the assembly and the cost required to sustain high-quality deliberation. With sizes approaching 1000 people, hundreds of moderators will be required and much of the exchange of information will occur not through synchronous exchanges in small groups, but through asynchronous transmission mechanisms across the groups. This is not necessarily a bad thing, but it will have the typical limitations of any type of aggregation mechanism that requires participant attention and effort. For example, in an ideation process with 100 groups of 10 people each, where each group proposes one idea and then discusses all other ideas, each group would have to discuss 100 ideas. This is a very intense task. However, there could be filtering mechanisms that require subgroups to eliminate non-interesting ideas, and other solutions designed to reduce the amount of effort required by participants.
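
To make this trade-off concrete, here is a small sketch comparing the per-group review load of the naive all-to-all scheme with a hypothetical bracket-style filtering rule (the rule is ours, purely for illustration, not a description of any existing design):

```python
def review_load(n_groups, bracket=5):
    """Ideas each group must review: all-to-all vs. repeated filtering in
    brackets of `bracket` ideas, where each bracket keeps only its best idea."""
    all_to_all = n_groups                     # every group reviews every group's idea
    load, surviving = 0, n_groups
    while surviving > bracket:
        load += bracket                       # each group reviews one bracket per round
        surviving = -(-surviving // bracket)  # one idea survives per bracket (ceiling division)
    load += surviving                         # final round: everyone reviews the survivors
    return all_to_all, load

print(review_load(100))  # (100, 14): 100 ideas per group vs. about 14 with filtering
```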

All else being equal, as the size of the assembly grows, the logistical complexity and associated costs increase. At the same time, the ability to analyze and integrate all the information generated by participants diminishes. The question of whether established technologies like argument mapping, or even emerging artificial intelligence, could help overcome the challenges associated with mass deliberation is an empirical one – but it’s certainly an avenue worth exploring through experiments and research. Recent designs of permanent mini-publics, such as those adopted in Belgium (Ostbelgien, Brussels) and Italy (Milan), which resample a small new group of participants every year, could over time include a sufficiently large sample of the population to achieve a good level of representation, at least for some strata, provided that systematic sampling errors are corrected and obvious caveats in terms of representativeness are clearly communicated.

Another approach is to abandon the idea of achieving representativeness and instead target specific problems of inclusion. This is a small change in the current approach to mini-publics, but in our opinion, it will generate significant returns in terms of long-term legitimacy. Instead of justifying a mini-public through a blanket claim of representation, the justification in this model would emerge from a specific failure in inclusion. For example, imagine that neighborhood-level urban planning meetings in a city consistently fail to involve renters and disproportionately engage developers and business owners. In such a scenario, a stratified random sample approach that reserves quotas for renters and includes specific incentives to attract them, and not the other types of participants, would be a fair strategy to prevent domination. However, note that this approach is only feasible after a clear inclusion failure has been detected.

In conclusion, from a democratic innovations perspective, there seem to be two productive directions for mini-publics: increasing their size or focusing on addressing failures of inclusiveness. Expanding the size of assemblies involves technical challenges and increased costs, but in certain cases it might be worth the effort. Addressing specific cases of exclusion, such as domination by organized minorities, may be a more practical and scalable approach. This second approach might not seem very appealing at first. But one should not be discouraged by our unglamorous example of fixing urban planning meetings. In fact, this approach is particularly attractive given that inclusion failures can be found across multiple spaces meant to be democratic – from neighborhood meetings to parliaments around the globe.

For mini-public practitioners and advocates like ourselves, this should come as a comfort: there’s no shortage of work to be done. But we might be more successful if, in the meantime, we shift the focus away from the representativeness claim.

****************

We would like to express our gratitude to Amy Chamberlain, Andrea Felicetti, Luke Jordan, Jon Mellon, Martina Patone, Thamy Pogrebinschi, Hollie Russon Gilman, Tom Steinberg, and Anthony Zacharewski for their valuable feedback on previous versions of this post.


[1] Renwick, A., Allan, S., Jennings, W., McKee, R., Russell, M. and Smith, G., 2017. A Considered Public Voice on Brexit: The Report of the Citizens’ Assembly on Brexit.

[2] Goidel, R., Freeman, C., Procopio, S., & Zewe, C. (2008). Who participates in the ‘public square’ and does it matter? Public Opinion Quarterly, 72, 792-803. doi: 10.1093/poq/nfn043

[3] Lafont, C. (2015). Deliberation, participation, and democratic legitimacy: Should deliberative mini-publics shape public policy? Journal of Political Philosophy, 23(1), 40-63.

[4] Griffin, J., Abdel-Monem, T., Tomkins, A., Richardson, A., & Jorgensen, S. (2015). “Understanding Participant Representativeness in Deliberative Events: A Case Study Comparing Probability and Non-Probability Recruitment Strategies”, Journal of Public Deliberation, 11(1). doi: https://doi.org/10.16997/jdd.221

The haves and the have-nots: who benefits from civic tech?

Photo by Lewis Nguyen on Unsplash

“Civic tech” broadly refers to the use of digital technologies to support a range of citizen engagement processes. From allowing individuals to report problems to local government to enabling the crowdsourcing of national legislation, civic tech aims to promote better policies and services, while contributing to more inclusive democratic institutions. But could civic tech affect public issues in a way that benefits some and excludes others?

Over the decades, the question of who participates in and who is excluded from participation mediated by technology has been the focus of both civic tech critics and proponents. The latter tend to argue that, by enabling citizens to participate without constraints of time and distance, civic tech facilitates the participation of those who usually abstain from engaging with public issues, leading to more inclusive processes. Critics argue that, given the existing digital divide, unequal access to technology will tend to empower the already empowered, further deepening societal differences. Yet both critics and proponents do tend to share an intuitive assumption: the socio-economic profile of who participates is the primary determinant of who benefits from digitally mediated civic participation. For instance, if more men participate, outcomes will favor male preferences, and if more young people participate, outcomes will be more aligned with the concerns of the youth.

In a new paper, we show that the link between the demographics of those participating through digital channels, and the beneficiaries of the participation process, is not necessarily as straightforward as commonly assumed. We review four civic tech cases where data allow us to trace the full participatory chain through:

  1. the initial digital divide
  2. the participant’s demographics
  3. the demands made through the process
  4. the policy outcomes

We examine online voting in the Brazilian state of Rio Grande do Sul’s participatory budgeting process, the local problem reporting platform Fix My Street (FMS) in the United Kingdom, Iceland’s online crowdsourced constitution process, and the global petitioning platform Change.org.

Counterintuitive findings

Change.org has been used by nearly half a billion people around the globe. Using a dataset of 3.9 million signers of online petitions in 132 countries, we examine the number of successful petitions and assess whether petitions created by women have more success than those submitted by men. Our analysis shows that, even if women create fewer online petitions than men, their petitions are more likely to be successful. All else equal, when online petitions have an impact on government policy, the agenda being implemented is much closer to the issues women choose to focus on.
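
For readers curious about what this kind of comparison looks like in practice, here is a minimal, purely illustrative sketch; the dataframe, its `creator_gender` and `victory` columns, and the toy values are hypothetical stand-ins, and the paper's actual analysis is far more involved.

```python
import pandas as pd

# Toy data standing in for the petition dataset described above (hypothetical values).
petitions = pd.DataFrame({
    "creator_gender": ["female", "male", "female", "male", "male", "female", "male"],
    "victory":        [1,        0,      1,        0,      1,      0,        0],
})

# Number of petitions created and success rate, by the gender of the petition creator.
summary = petitions.groupby("creator_gender")["victory"].agg(count="count", success_rate="mean")
print(summary)  # fewer petitions created by women can still come with a higher success rate
```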

In Rio Grande do Sul’s digital participatory budgeting (PB), we show that despite important demographic differences between online and offline voters, these inequalities do not affect which types of projects are selected for funding – a consequence of PB’s unique institutional design, which favors redistributive effects. 

In fact, of all the cases analyzed, none reflect the standard assumption that inequalities in who participates translate directly into inequalities in who benefits from the policy outcomes. Our results suggest that the socio-economic profile of participants predicts only in part who benefits from civic tech. Just as important to policy outcomes is how the platform translates civic participation into policy demands, and how the government responds to those demands. While civic tech practitioners pay a lot of attention to design from a technological perspective, our findings highlight the importance of considering how civic tech platforms function as political institutions that encourage certain types of behavior while discouraging others.

Civic tech, it seems, is neither inherently good nor bad for democratic institutions. Instead, its effect is a combination of who participates on digital platforms and the choices of platform designers and governments.

***

Post co-authored with Jonathan Mellon and Fredrik M. Sjoberg. Cross-posted from the World Bank’s Let’s Talk Development blog.

Voices in the Code: Citizen Participation for Better Algorithms

Image by mohamed Hassan from Pixabay

Voices in the Code, by David G. Robinson, is finally out. I had the opportunity to read the book prior to its publication, and I cannot recommend it enough. David shows how, between 2004 and 2014 in the US, experts and citizens came together to build a new kidney transplant matching algorithm. David’s work is a breath of fresh air for the debate surrounding the impact of algorithms on individuals and societies – a debate typically focused on the negative and sometimes disastrous effects of algorithms. While David conveys these risks at the outset of the book, focusing solely on these threats would add little to a public discourse already saturated with concerns.

One of the major missing pieces in the “algorithmic literature” is precisely how citizens, experts and decision-makers can make their interactions more successful, working towards algorithmic solutions that better serve societal goals. The book offers a detailed and compelling case in which a long and participatory process leads to the crafting of an algorithm that delivers a public good, despite the technical complexities, moral dilemmas, and difficult trade-offs involved in decisions related to the allocation of kidneys to transplant patients. Such a feat would not have been achieved without another contribution of the book: a didactic demystification of what algorithms are, a subject normally treated as the reserved domain of a few experts.

As David conducts his analysis, one also finds an interesting reversal of the assumed relationship between technology and participatory democracy. This relationship has mostly been examined from a civic tech angle, focusing on how technologies can support democratic participation through practices such as e-petitions, online citizens’ assemblies, and digital participatory budgeting. Thus, another original contribution of this book is to look at this relationship from the opposite angle: how participatory processes can better support technological deployments. While technology for participation (civic tech) remains an important topic, we should probably start paying more attention to how participation can support technological solutions (civic for tech).

Continuing through the book, other interesting insights emerge. For instance, technology and participatory democracy pundits normally subscribe to the virtues of decentralized systems, both from a technological and an institutional perspective. Yet David depicts precisely the virtues of a decision-making system centralized at the national level. Were organ transplant issues decided at the local level in the US, the results would probably not be as successful. Counterintuitively, David presents a clear case where centralized (although participatory) systems might offer better collective outcomes. Surfacing this finding is a welcome contribution to debates on the trade-offs between centralization and decentralization, both from a technological and an institutional standpoint.

But a few paragraphs here cannot do the book justice. Voices in the Code is certainly a must-read for anybody working on issues ranging from institutional design and participatory democracy, all the way to algorithmic accountability and decision support systems.

***

P.S. As an intro to the book, here’s a nice 10-minute conversation with David on the Marketplace podcast.

Easter readings: new selection of articles and notes on democracy, open government, civic tech and others

Open government’s uncertain effects and the Biden opportunity: what now? 

A review of 10 years of open government research reveals: 1) “a transparency-driven focus”, 2) “methodological concerns”, and 3) [perhaps not surprisingly] “the lack of empirical evidence regarding the effects of open government”. My take is that these findings are, to some extent, self-reinforcing.

First, the early focus on transparency by open government advocates, while ignoring the conditions under which transparency could lead to public goods, should be, in part, to blame. This is even more so if open government interventions insist on tactical, instead of strategic, approaches to accountability. Second, the fact that many of those engaging in open government efforts do not take into account the existing evidence doesn’t help in terms of designing appropriate reforms, nor in terms of calibrating expectations. Proof of this is the recurrent and mostly unsubstantiated spiel that “transparency leads to trust”, voiced by individuals and organizations who should know better. Third, should there be any effects of open government reforms, these are hard to verify in a credible manner, given that evaluations often suffer from methodological weaknesses, as indicated by the paper.

Finally, open government’s semantic extravaganza makes building critical mass all the more difficult. For example, I have my doubts over whether the paper would reach similar conclusions had it expanded the review to practices that are not normally labeled as open government in the literature. This would be the case, for instance, of participatory budgeting (which has been shown to improve service delivery and increase tax revenues), or strategic approaches to social accountability that present substantial results in terms of development outcomes.

In any case, the research findings are still troubling. The election of President Biden gives some extra oxygen to the open government agenda, and that is great news. But in a context where autocratization turns viral, making a dent in how governments operate will take less policy-based evidence searching and more evidence-based strategizing. That involves leveraging the existing evidence when it is available, and when it is not, the standard path applies: more research is needed.

Open Government Partnership and Justice

On another note, Joe Foti, from the Open Government Partnership (OGP), writes on the need to engage more lawyers, judges and advocates in order to increase the number of accountability-focused OGP commitments. I particularly like Joe’s ideas on bringing these actors together to identify where OGP commitments could be stronger, and how. This resonates with a number of cases I’ve come across in the past where the judiciary played a key role in ensuring that citizens’ voice also had teeth. 

I also share Joe’s enthusiasm for the potential of a new generation of commitments that put forward initiatives such as specialized anti-corruption courts and anti-SLAPP provisions. Having said this, the judiciary itself needs to be open, independent and capable. In most countries that I’ve worked in, a good part of open government reforms fail precisely because of a dysfunctional judiciary system. 

Diversity, collective intelligence and deliberative democracy 

Part of the justification for models of deliberative democracy is their epistemic quality, that is, large and diverse crowds are smarter than the (elected or selected) few. A good part of this argument finds its empirical basis in the fantastic work by Scott Page.

But that’s not all. We know, for instance, that gender diversity on corporate boards improves firms’ performance, ethnic diversity produces more impactful scientific research, diverse groups are better at solving crimes, popular juries are less biased than professional judges, and politically diverse editorial teams produce higher-quality Wikipedia articles. Diversity also helps to explain classical Athens’ striking superiority vis-à-vis other city-states of its time, due to the capacity of its democratic system to leverage the dispersed knowledge of its citizens through sortition.

Now, a Nature article, “Algorithmic and human prediction of success in human collaboration from visual features”, presents new evidence of the power of diversity in problem-solving tasks. In the paper, the authors examine the patterns of group success in Escape The Room, an adventure game in which a group attempts to escape a maze by collectively solving a series of puzzles. The authors find that groups that are larger, older and more gender diverse are significantly more likely to escape. But there is an exception: more age-diverse groups are less likely to escape. Intriguing, isn’t it?

Deliberative processes online: rough review of the evidence

As the pandemic pushes more deliberative exercises online, researchers and practitioners start to take more seriously the question of how effective online deliberation can be when compared to in-person processes. Surprisingly, there are very few empirical studies comparing the two methods.

But a quick run through the literature offers some interesting insights. For instance, a 2004 online deliberative poll on U.S. foreign policy and a traditional face-to-face deliberative poll conducted in parallel presented remarkably similar results. A 2007 experiment comparing online and face-to-face deliberation found that both approaches can increase participants’ issue knowledge, political efficacy, and willingness to participate in politics. A similar comparison from 2009, looking at deliberation over the construction of a power plant in Finland, found considerable resemblance in the outcomes of online and face-to-face processes. A study published in 2012 on waste treatment in France found that, compared to the offline process, online deliberation was more likely to: i) increase women’s interventions, ii) promote the justification of arguments, and iii) be oriented towards the common good (although in this case the processes were not similar in design).

The external validity of these findings, however encouraging they may be, remains an empirical question, particularly given that the technology used to support deliberation has in many cases changed since these studies were conducted (e.g. from written to “zoomified” deliberations). Anyhow, kudos should go to the researchers who started engaging with the subject well over a decade ago: if that work was a niche subject then, its importance now is blatantly obvious.

(BTW, on a related issue, here’s a fascinating 2021 experiment examining whether online juries can make consistent, repeatable decisions: interestingly, deliberating groups are much more consistent than non-deliberating groups)

Fixing the Internet? 

Anne Applebaum and Peter Pomerantsev published a great article in The Atlantic on the challenges posed to democracy by an Internet model that fuels disinformation and polarization, and on alternative paths to address them. I was thankful for the opportunity to make a modest contribution to such a nice piece.

At the same time, an excellent Twitter thread by Levi Boxell is a good reminder that we may sometimes be overestimating some of the effects of the Internet on polarization. Levi highlights three stylized facts with regard to mass polarization: i) it has been increasing since at least the 1980s in the US, ii) it has been increasing more quickly among older age groups in the US, and iii) over the past 30 years, countries have presented different patterns of polarization despite similar Internet usage.

Of course, that doesn’t mean we shouldn’t be concerned about the effects of the Internet on politics. For instance, a new study in the American Political Science Review finds that radical right parties benefit more than any other parties from malicious bots on social media.

Open democracy

2021 continues to be a good year for the proponents of deliberative democracy, with growing coverage of the subject in the mainstream media, in part fueled by the recent launch of Hélène Landemore’s great book “Open Democracy.” Looking for something to listen to? Look no further and listen to this interview by Ezra Klein with Hélène.

A dialogue among giants 

The recording of the roundtable Contours of Participatory Democracy in the 21st Century is now available. The conversation between Jane Mansbridge, Mark Warren and Cristina Lafont can be found here.

Democracy and design thinking 

Speaking of giants, the new book by Michael Saward, “Democratic Design”, is finally out. I’m a big fan of Michael’s work, so my recommendation may be biased. In this new book, Michael brings design thinking together with democratic theory and practice. If the design of democratic institutions is one of your topics, you should definitely check it out!

Civic Tech 

I was thrilled to have the opportunity to deliver a lecture at the Center for Collective Learning – Artificial and Natural Intelligence Institute. My presentation, Civic Technologies: Past, Present and Future, can be found here.

Scholarly articles:

And finally, for those who really want to geek-out, a list of 15 academic articles I enjoyed reading:

Protzer, E. S. (2021). Social Mobility Explains Populism, Not Inequality or Culture. CID Research Fellow and Graduate Student Working Paper Series.

Becher, M., & Stegmueller, D. (2021). Reducing Unequal Representation: The Impact of Labor Unions on Legislative Responsiveness in the US Congress. Perspectives on Politics, 19(1), 92-109.

Foster, D., & Warren, J. (2021). The politics of spatial policies. Available at SSRN 3768213.

Hanretty, C. (2021). The Pork Barrel Politics of the Towns Fund. The Political Quarterly.

Rad, S. R., & Roy, O. (2020). Deliberation, Single-Peakedness, and Coherent Aggregation. American Political Science Review, 1-20.

Migchelbrink, K., & Van de Walle, S. (2021). A systematic review of the literature on determinants of public managers’ attitudes toward public participation. Local Government Studies, 1-22.

Armand, A., Coutts, A., Vicente, P. C., & Vilela, I. (2020). Does information break the political resource curse? Experimental evidence from Mozambique. American Economic Review, 110(11), 3431-53.

Giraudet, L. G., Apouey, B., Arab, H., Baeckelandt, S., Begout, P., Berghmans, N., … & Tournus, S. (2021). Deliberating on Climate Action: Insights from the French Citizens’ Convention for Climate (No. hal-03119539).

Rivera-Burgos, V. (2020). Are Minorities Underrepresented in Government Policy? Racial Disparities in Responsiveness at the Congressional District Level.

Erlich, A., Berliner, D., Palmer-Rubin, B., & Bagozzi, B. E. (2021). Media Attention and Bureaucratic Responsiveness. Journal of Public Administration Research and Theory.

Eubank, N., & Fresh, A. Enfranchisement and Incarceration After the 1965 Voting Rights Act.

Mueller, S., Gerber, M., & Schaub, H. P. Democracy Beyond Secrecy: Assessing the Promises and Pitfalls of Collective Voting. Swiss Political Science Review.

Campbell, T. (2021). Black Lives Matter’s Effect on Police Lethal Use-of-Force. Available at SSRN.

Wright, N., Nagle, F., & Greenstein, S. M. (2020). Open source software and global entrepreneurship. Harvard Business School Technology & Operations Mgt. Unit Working Paper, (20-139), 20-139.

Boxell, L., & Steinert-Threlkeld, Z. (2021). Taxing dissent: The impact of a social media tax in Uganda. Available at SSRN 3766440.

Miscellaneous radar: 

  • Modern Grantmaking: That’s the title of a new book by Gemma Bull and Tom Steinberg. I had the privilege of reading snippets of this, and I can already recommend it not only to those working with grantmaking, but also pretty much anyone working in the international development space.
  • Lectures: The Center for Collective Learning has a fantastic line-up of lectures open to the public. Find out more here.
  • Learning from Togo: While unemployment benefits websites were crashing in the US, the Togolese government showed how to leverage mobile money and satellite data to effectively get cash into the hands of those who need it the most.
  • Nudging the nudgers: British MPs are criticising academics for sending them fictitious emails for research. I wonder if part of their outrage is not just about the emails, but about what the study could reveal in terms of their actual responsiveness to different constituencies.
  • DataViz: Bringing data visualization to physical/offline spaces has been an obsession of mine for quite a while. I was happy to come across this project while doing some research for a presentation.

Enjoy the holiday.

46 favorite reads on democracy, civic tech and a few other interesting things


I’ve recently been exchanging with some friends on a list of favorite reads from 2020. While I started with a short list, it quickly grew: after all, despite the pandemic, there has been lots of interesting stuff published in the areas that I care about throughout the year. While the final list of reads varies in terms of subjects, breadth, depth and methodological rigor, I picked these 46 for different reasons. These include my personal judgement of their contribution to the field of democracy, or simply a belief that some of these texts deserve more attention than they currently receive. Others are in the list because I find them particularly surprising or amusing.

As the list is long – and probably, at this length, unhelpful to my friends – I tried to divide it into three categories: i) participatory and deliberative democracy, ii) civic tech and digital democracy, and iii) miscellaneous (which is not really a category, let alone a very helpful one, I know). In any case, many of the titles are indicative of what the text is about, which should make it easier to navigate the list.

These caveats aside, below is the list of some of my favorite books and articles published in 2020:

Participatory and Deliberative Democracy

While I still plan to make a similar list for representative democracy, this section of the list is intentionally focused on democratic innovations, with a certain emphasis on citizens’ assemblies and deliberative modes of democracy. While this reflects my personal interests, it is also in part due to the recent surge of interest in citizens’ assemblies and other modes of deliberative democracy, and the academic production that followed.  

On Civic Tech and Digital Democracy

2020 was the year where the field of civic tech seemed to take a democratic turn, from fixing potholes to fixing democracy.

MISCELLANEOUS

Finally, a section as random as 2020.

As mentioned before, the list is already too long. But if there’s anything anyone thinks should absolutely be on this list, please do let me know.

Survey of young adults further exposes the challenges for US democracy. But addressing them could be an opportunity to reimagine democracy.

I just came across the recently published data from the GenForward project in the United States, a nationally representative survey of over 3,000 young adults aged 18-36 conducted by political scientist Cathy Cohen at the University of Chicago. The new data on race, young adults, and the 2020 elections paints a challenging picture of how young adults in the country perceive institutions and democracy. For those who think the recent election outcome put democracy back on track, these results reveal important challenges, but also opportunities.

  1. Negative enthusiasm takes the lead in young voters’ motivations

Prior to the election, in an interview with the Washington Post, Donald Trump asserted that “Negative enthusiasm doesn’t win races. Positive enthusiasm, meaning ‘they like somebody’ is how elections are won.” But judging from the survey, negative enthusiasm was decisive in young voters’ choices: 64% of respondents said they would vote for Joe Biden precisely because they disliked the other candidate. There are significant differences across the profiles of respondents: for instance, only 28% of white respondents indicated that they would vote for Biden because they were enthusiastic about the candidate, while this number reaches 47% for black respondents.

While these numbers can be disheartening, one could say they just show democracy at work, with young adults sanctioning the incumbent at the ballot box. Add to that the polarized nature of elections, and the results are hardly surprising. But could this also reveal something more worrisome, particularly in the long term? After all, research shows that voting is a rather habit-forming behavior: a citizen who votes today is more likely to vote in the future, and an 18-year-old who votes for a certain party now is likely to be voting for the same party when he turns 81. Does antagonistic voting behavior follow the same pattern? If it does, what signal does it send to parties? And what does it mean for the future of US democracy if such adversarial behavior crystallizes in the long term?

  2. Perception of elite capture and de facto disenfranchisement

Overall, 83% of respondents agreed (strongly or somewhat) with the statement that “the government is run by a few big interests, looking out for themselves and their friends.”

These results may seem surprising, but how do they fit with reality? Let’s take, for instance, the US Congress. While only 3% of the US population is made up of millionaires, in Congress they are a majority. And while workers make up more than half (52%) of the US population, they account for only 2% of Congress. Is this exclusive club of Congress an exception in American politics? Unfortunately not. When looking at all levels of US government, politicians from working-class backgrounds make up less than a tenth of all elected officials.

Some might argue that these disparities are not necessarily problematic, as elected individuals can act on behalf of broader interests. But as the saying goes, “if you’re not at the table, you’re on the menu.” Similar to most representative systems around the world, US policymaking is systematically biased towards the interests of the wealthier. So, while it may be depressing that 83% of young adults feel the government is run by a few big interests, it is understandable in the face of a governmental model that is, unfortunately, by the rich and for the rich. This discredit of representative institutions is reinforced by another result of the survey: 75% of young Americans agree (strongly or somewhat) with the statement “The leaders in government care very little about people like me”, revealing a sense of political alienation.  

  3. Between discredit and revolution

Young adults were also asked about the most effective way to drive real change in the country. And their responses tell us a lot about how they currently perceive traditional democratic institutions and their capacity to address collective issues. First, only 16% of respondents answered that real change can be achieved by voting in national elections. In other words, the overwhelming majority of young adults in the US reject the notion of voting in presidential elections as the ultimate democratic practice in the country. 

Second, 22% of respondents find that voting in state and local elections is the most effective way to bring about change. One can only speculate on the reasons for this, but here are a few potential explanations that come to mind. In socioeconomic terms, sub-national institutions are slightly more representative than national ones. This, at least hypothetically, should make them marginally more responsive to larger constituencies. Also, given that most of the participatory institutions that allow citizens to impact decision-making in the US are at the sub-national level (e.g. referendums, initiatives), citizens may perceive state-level institutions as being more responsive. Finally, the prominent role recently played by some state governments in the response to the Covid-19 crisis might also shape these views.

Third, 38% of responses on the most effective ways to create real change in the US mention unconventional (non-electoral) forms of public participation, including categories such as protests, boycotts and social media campaigns. This is the same proportion as answers mentioning voting in elections, presidential and subnational, combined. Most strikingly, the third-most selected means to bring about change is “revolution” (14%). While the term revolution is not clearly defined here, this result certainly shows an eagerness for structural change in the way American democracy works, rather than milder reforms that are unlikely to alter the status quo. If we add revolution to the list of non-electoral forms of participation, these represent a total of 52% of survey responses.

In short, the majority of young Americans between 18 and 36 years old, a sizable part of the electorate, believe that the best way to effect real change in the US lies outside typical democratic institutions. Even the much-celebrated “return to [pre-2016] normalcy” following the recent election result is unlikely to reverse this picture on its own. After all, it was this very political normalcy of recent decades, characterized by inequality and poor responsiveness, that produced the situation now affecting US democracy.

Given that a return to the pre-2016 era is unlikely to be sustainable, a number of proposals for strengthening American democracy are now on the table. These include, for instance, the Protecting Our Democracy Act, the six strategies put forward by the Commission on the Practice of Democratic Citizenship, and the adoption of proportional voting, most effectively defended in Lee Drutman’s recent book Breaking the Two-Party Doom Loop.

While most of the proposed reforms are well-intentioned and likely to produce positive results, they are unlikely to address the fundamental issue of unequal responsiveness that affects liberal democracies today. Furthermore, in a context of polarization and distrust, any democratic reforms undertaken by political elites alone are bound to have their legitimacy questioned by a large part of the population. What the survey numbers reveal, above all, is a sense of disenfranchisement and a belief that public decisions are taken by “few big interests, looking out for themselves and their friends.”

Citizens will be wary of any attempts to change the rules of the game, particularly if those changes are defined by the very people who benefit most from the current rules. Thus, efforts to rebuild the foundations of modern democracy, in the US or elsewhere, are unlikely to be sustainable if citizens are not effectively included in the process. In that case, why not convene a large citizens’ assembly on democratic reform, whose recommendations would subsequently be validated through a popular vote? Or, as suggested by Archon Fung, why not empower ordinary citizens to make recommendations to Congress and the administration on how to address democratic issues?

There are many possible modalities for citizen involvement in such a process. And while some models may be more feasible than others, one thing is certain: tokenistic approaches to citizen participation in democratic reform are equally doomed to fail. Addressing the challenges highlighted by this survey will require more than politics as usual. But it can also be an opportunity for Americans to collectively reimagine the democracy they want.

****

2020 and beyond: 11 predictions at the intersection of technology and citizen engagement

pic by @jmuniz on Unsplash

The rapid evolution of digital technologies has been changing relationships between governments and citizens around the world. These shifts make it the right time to pose the key question a new World Bank publication explores:

Will digital technologies, both those that are already widespread and those that are still emerging, have substantial impacts on the way citizens engage and the ways in which power is sought, used, or contested?

The report, Emerging Digital Technologies and Citizen Participation, benefits from the insights of 30 leading scholars and practitioners, and explores what technology might mean for citizen engagement and politics in the coming years. 

The report argues that, despite lower technology penetration levels, and given their more malleable governance contexts, developing countries may be more influenced by the effects of emerging technologies than older states with greater rigidity and legacy technologies. Digitally influenced citizen engagement is potentially a “leapfrog” area in which developing nations may exploit emerging technologies before the wealthier parts of the world.

But countries can leapfrog to worse futures, not only better ones. The report also conveys concerns about the negative effects digital technologies can have on the governance of nations. Yet, despite emerging challenges, it contends that new and better citizen engagement approaches are possible. 

What is missing from public discourse is a discussion of the wide range of options that citizens and decision-makers can call upon to enhance their interactions and manage risks. To consider these options, the report makes 11 predictions regarding the effects of technology on citizen engagement in the coming years, and their policy implications. It also offers six measures that would be prudent for governments to take to mitigate risks and leverage opportunities that technological development brings about. 

None of the positive scenarios predicted will emerge without deliberate and intentional actions to support them. And the extent to which they can be shaped to further societal goals will depend on constructive dialogue between governments and citizens themselves. Ultimately, this new publication aims to contribute to this dialogue, so that both developing and developed countries are more likely to leap into better futures. 

*************

Text co-authored with Tom Steinberg, originally cross-posted from the World Bank’s Governance for Development blog. You can also read another article about this report on Apolitical here. While I’m at it: if you work in public service and care about making government work better, I highly recommend Apolitical, a peer-to-peer learning platform for government that shares smart policy ideas globally. Join for free here.

Catching up: civic tech research, crisis of participation in Brazil, podcasts and more


picture by tollwerk on flickr

The dream consultancy

The Hewlett Foundation is seeking consultants to help design a potential longer-term research collaborative to study how behavioral insights can be applied to nudge governments to respond to citizen feedback. This is just fantastic and deserves a blog post of its own. Hopefully I will be able to write that post before the EOI period ends.

Rise and fall of participatory democracy in Brazil?

In an excellent article for Open Democracy, Thamy Pogrebinschi and Talita Tanscheit ask what happened to citizen participation in Brazil. The authors note that “The two main pillars on which institutional innovations in Brazil had been erected – extensive institutionalization and a strong civil society – have not been enough to prevent a functioning system of social participation being torn to shreds in little more than a year.”

I have been asked for my take on the issue more than once. Personally, I am not surprised, despite all the institutionalization and the strength of civil society. Given the current Brazilian context, I would be surprised if the participatory spaces the article examines (councils and conferences) remained unaffected.

Playing devil’s advocate, this period of crisis may also be an opportunity to reflect on how policy councils and conferences could reinvent themselves. While they are extremely important, one hypothesis is that these structures failed to appropriately channel the societal concerns and demands that later exploded into a political crisis, leading to the current situation.

Provocations aside, it is just too early to tell whether this is the definitive death of conferences and councils. My sense is that their future will be contingent upon two key points: i) the direction Brazilian politics takes following the 2018 general election (e.g. progressive vs. populist/authoritarian), and ii) the extent to which councils and conferences can adapt to the growing disintermediation in activism that we observe today.

The Business Model of Civic Tech?

If you work in the civic tech space, you have probably come across a new report commissioned by the Knight Foundation and the Rita Allen Foundation, “Scaling Civic Tech: Paths to a Sustainable Future.” As highlighted by Christopher Wilson at Methodical Snark, while not much in the report will surprise civic technologists, it does give the reader a good sense of funders’ expectations regarding financial sustainability.

When thinking about business models of civic tech efforts, I wonder how much money and energy were devoted to having governments open up their datasets while neglecting the issue of how these governments procure technology. If 10% of those efforts had been dedicated to reforming the way governments procure technology, many of those in the civic tech space would now be less dependent on foundations’ grants (or insights on business models).

Having said this, I am a bit bothered by the debate about business models when it comes to democratic goods. After all, what would happen to elections if they depended on business models (or multiple rounds of foundations’ grants)?

Walking the talk: participatory grant making?

A new report commissioned by the Ford Foundation examines whether the time has come for participatory grant making. The report, authored by Cynthia Gibson, explores the potential use of participatory approaches by foundations, and offers a “starter” framework to inform the dialogue on the subject.

Well informed by the literature on participatory and deliberative democracy, the report also touches upon the key question of whether philanthropic institutions, given their tax benefits, owe the public a voice in the decisions they make. If you are not convinced, this EconTalk podcast with Rob Reich (Stanford) on foundations and philanthropy is rather instructive. There is also a great anecdote in the podcast that illustrates the case for public voice, as described by Reich:

“So, in the final days of creating the Open Society Institute and associated foundations, there was disagreement amongst the staff that Soros had hired about exactly what their program areas, or areas of focus would be. And, to resolve a disagreement, Soros allegedly slammed his fist on the table and said, ‘Well, at the end of the day, it’s my money. We’re going to do it my way.’ And a program officer that he’d hired said, ‘Well, actually Mr. Soros, about 30% or 40% of it would have been the taxpayer’s money. So, I think some other people actually have a say in what you do, here, too.’ And he was fired the next week.”

Democracy podcasts

Speaking of podcasts, the Real Democracy Now podcast is fantastic. It is definitely one of the best resources out there for practitioners and scholars working on citizen engagement.

Although broader in terms of the subjects covered, Talking Politics by David Runciman and Catherine Carr is another great option.

Other tips are more than welcome!


Other interesting stuff you may have missed

Study analyzing Pew survey data suggests a “gateway effect” where slacktivism by the politically uninterested may lead to greater political activity offline

Seeing the World Through the Other’s Eye: An Online Intervention Reducing Ethnic Prejudice

Smartphone monitoring streamlined information flows and improved inspection rates at public clinics across Punjab (ht @coscrovedent)

The Unintended Effects of Bottom-Up Accountability: Evidence from a Field Experiment in Peru

Literature review: does public reporting in the health sector influence quality, patient and provider’s perspective?

Catching up (again!) on DemocracySpot

It’s been a while since the last post here. On the upside, it has not been a bad year in terms of getting some research out there. First, we finally managed to publish “Civic Tech in the Global South: Assessing Technology for the Public Good.” With a foreword by Beth Noveck, the book is edited by Micah Sifry and myself, with contributions by Evangelia Berdou, Martin Belcher, Jonathan Fox, Matt Haikin, Claudia Lopes, Jonathan Mellon and Fredrik Sjoberg.

The book comprises one study and three field evaluations of civic tech initiatives in developing countries. The study reviews evidence on the use of twenty-three information and communication technology (ICT) platforms designed to amplify citizen voices to improve service delivery. Focusing on empirical studies of initiatives in the global south, the authors highlight both citizen uptake (yelp) and the degree to which public service providers respond to expressions of citizen voice (teeth). The first evaluation looks at U-Report in Uganda, a mobile platform that runs weekly large-scale polls with young Ugandans on a number of issues, ranging from access to education to early childhood development. The second evaluation takes a closer look at MajiVoice, an initiative that allows Kenyan citizens to report, through multiple channels, complaints about water services. The third evaluation examines the case of Rio Grande do Sul’s participatory budgeting – the world’s largest participatory budgeting system – which allows citizens to participate either online or offline in defining the state’s yearly spending priorities. While the comparative study has a clear focus on the dimension of government responsiveness, the evaluations examine civic technology initiatives through five distinct dimensions, or lenses. The choice of these lenses is the result of an effort to bring together researchers and practitioners to develop an evaluation framework suitable for civic technology initiatives.

The book was a joint publication by The World Bank and Personal Democracy Press. You can download the book for free here.

Women create fewer online petitions than men — but they’re more successful


Another recent publication was a collaboration between Hollie R. Gilman, Jonathan Mellon, Fredrik Sjoberg and myself. Examining a dataset of Change.org online petitions from 132 countries, we assess whether online petitions may help close the gap in participation and representation between women and men. Tony Saich, director of Harvard’s Ash Center for Democratic Governance and Innovation (publisher of the study), puts our research into context nicely:

The growing access to digital technologies has been considered by democratic scholars and practitioners as a unique opportunity to promote participatory governance. Yet, if the last two decades is the period in which connectivity has increased exponentially, it is also the moment in recent history that democratic growth has stalled and civic spaces have shrunk. While the full potential of “civic technologies” remains largely unfulfilled, understanding the extent to which they may further democratic goals is more pressing than ever. This is precisely the task undertaken in this original and methodologically innovative research. The authors examine online petitions which, albeit understudied, are one of the fastest growing types of political participation across the globe. Drawing from an impressive dataset of 3.9 million signers of online petitions from 132 countries, the authors assess the extent to which online participation replicates or changes the gaps commonly found in offline participation, not only with regards to who participates (and how), but also with regards to which petitions are more likely to be successful. The findings, at times counter-intuitive, provide several insights for democracy scholars and practitioners alike. The authors hope this research will contribute to the larger conversation on the need of citizen participation beyond electoral cycles, and the role that technology can play in addressing both new and persisting challenges to democratic inclusiveness.

But what do we find? Among other interesting things, we find that while women create fewer online petitions than men, they’re more successful at it! This article in the Washington Post summarizes some of our findings, and you can download the full study here.
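For readers curious about what this kind of comparison looks like in practice, here is a minimal illustrative sketch in Python (not the study’s actual code), assuming a hypothetical petitions file with creator_gender and victory columns:

```python
# Illustrative sketch only (hypothetical data and column names, not the
# study's code): compare petition creation shares and success rates by gender.
import pandas as pd

# Hypothetical columns: 'creator_gender' in {'female', 'male'},
# 'victory' = 1 if the petition reached its goal, 0 otherwise.
petitions = pd.read_csv("petitions.csv")  # hypothetical file

# Share of petitions created by each gender.
creation_share = petitions["creator_gender"].value_counts(normalize=True)

# Success rate among petitions created by each gender.
success_rate = petitions.groupby("creator_gender")["victory"].mean()

print(creation_share)
print(success_rate)
```

The pattern described in the study corresponds to the case where creation_share is lower for women while success_rate is higher.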

Other studies that were recently published include:

The Effect of Bureaucratic Responsiveness on Citizen Participation (Public Administration Review)

Abstract:

What effect does bureaucratic responsiveness have on citizen participation? Since the 1940s, attitudinal measures of perceived efficacy have been used to explain participation. The authors develop a “calculus of participation” that incorporates objective efficacy—the extent to which an individual’s participation actually has an impact—and test the model against behavioral data from the online application Fix My Street (n = 399,364). A successful first experience using Fix My Street is associated with a 57 percent increase in the probability of an individual submitting a second report, and the experience of bureaucratic responsiveness to the first report submitted has predictive power over all future report submissions. The findings highlight the importance of responsiveness for fostering an active citizenry while demonstrating the value of incidentally collected data to examine participatory behavior at the individual level.

Does online voting change the outcome? Evidence from a multi-mode public policy referendum (Electoral Studies)

Abstract:

Do online and offline voters differ in terms of policy preferences? The growth of Internet voting in recent years has opened up new channels of participation. Whether or not political outcomes change as a consequence of new modes of voting is an open question. Here we analyze all the votes cast both offline (n = 5.7 million) and online (n = 1.3 million) and compare the actual vote choices in a public policy referendum, the world’s largest participatory budgeting process, in Rio Grande do Sul in June 2014. In addition to examining aggregate outcomes, we also conducted two surveys to better understand the demographic profiles of who chooses to vote online and offline. We find that policy preferences of online and offline voters are no different, even though our data suggest important demographic differences between offline and online voters.

We still plan to publish a few more studies this year, one looking at digitally-enabled get-out-the-vote (GOTV) efforts, and two others examining the effects of participatory governance on citizens’ willingness to pay taxes (including a fun experiment in 50 countries across all continents).

In the meantime, if you are interested in a quick summary of some of our recent research findings, this 30-minute video of my keynote at the last TICTeC Conference in Florence should be helpful.

 

 

New Papers Published: FixMyStreet and the World’s Largest Participatory Budgeting


Voting in Rio Grande do Sul’s Participatory Budgeting (picture by Anderson Lopes)

Here are two newly published papers that my colleagues Jon Mellon and Fredrik Sjoberg and I have been working on.

The first, The Effect of Bureaucratic Responsiveness on Citizen Participation, published in Public Administration Review, is – to our knowledge – the first study to quantitatively assess, at the individual level, the often-assumed effect of government responsiveness on citizen engagement. It also illustrates how data generated by digital platforms can be leveraged to better understand participatory behavior. This is the fruit of a research collaboration with mySociety, to whom we are extremely grateful.

Below is the abstract:

What effect does bureaucratic responsiveness have on citizen participation? Since the 1940s, attitudinal measures of perceived efficacy have been used to explain participation. The authors develop a “calculus of participation” that incorporates objective efficacy—the extent to which an individual’s participation actually has an impact—and test the model against behavioral data from the online application Fix My Street (n = 399,364). A successful first experience using Fix My Street is associated with a 57 percent increase in the probability of an individual submitting a second report, and the experience of bureaucratic responsiveness to the first report submitted has predictive power over all future report submissions. The findings highlight the importance of responsiveness for fostering an active citizenry while demonstrating the value of incidentally collected data to examine participatory behavior at the individual level.

An earlier, ungated version of the paper can be found here.
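For those wondering what an analysis of this kind might look like in code, below is a minimal sketch (hypothetical variable names and a deliberately simplified model, not the paper’s actual specification) of estimating how a successful first report relates to the probability of reporting again:

```python
# Simplified sketch only (not the paper's actual pipeline): does a resolved
# first report predict whether a user ever submits a second report?
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: one row per first-time reporter, with
# 'first_report_fixed' = 1 if the first report was resolved, and
# 'submitted_again' = 1 if the user later submitted another report.
users = pd.read_csv("first_reports.csv")  # hypothetical file

model = smf.logit("submitted_again ~ first_report_fixed", data=users).fit()
print(model.summary())

# Average marginal effect: how much a resolved first report shifts the
# predicted probability of submitting a second report.
print(model.get_margeff().summary())
```

The actual study works with behavioral data at a much larger scale (n = 399,364) and a richer model, but the basic question, whether first-report outcomes predict subsequent participation, is the same.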

The second paper, Does Online Voting Change the Outcome? Evidence from a Multi-mode Public Policy Referendum, has just been published in Electoral Studies. In an earlier JITP paper (ungated here) looking at Rio Grande do Sul State’s Participatory Budgeting – the world’s largest – we showed that, compared to offline voting, online voting tends to attract participants who are younger, male, of higher income and educational attainment, and more frequent social media users. Yet one question remained: does the inclusion of new participants with a different profile change the outcomes of the process (i.e. which projects are selected)? Below is the abstract of the paper.

Do online and offline voters differ in terms of policy preferences? The growth of Internet voting in recent years has opened up new channels of participation. Whether or not political outcomes change as a consequence of new modes of voting is an open question. Here we analyze all the votes cast both offline (n = 5.7 million) and online (n = 1.3 million) and compare the actual vote choices in a public policy referendum, the world’s largest participatory budgeting process, in Rio Grande do Sul in June 2014. In addition to examining aggregate outcomes, we also conducted two surveys to better understand the demographic profiles of who chooses to vote online and offline. We find that policy preferences of online and offline voters are no different, even though our data suggest important demographic differences between offline and online voters.

The extent to which these findings are transferable to other PB processes that combine online and offline voting remains an empirical question. Nonetheless, they suggest a more nuanced view of the potential effects of digital channels as a supplementary means of engagement in participatory processes. I hope to share an ungated version of the paper in the coming days.
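For a rough sense of how such a comparison could be set up (a hedged sketch with hypothetical data and variable names, not the paper’s actual analysis), one option is to cross-tabulate project choices by voting mode and test whether the distributions differ:

```python
# Rough sketch (hypothetical data, not the paper's analysis): do online and
# offline voters choose different projects in a multi-mode referendum?
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical columns: 'mode' in {'online', 'offline'} and 'project_choice'
# identifying the option each voter selected.
votes = pd.read_csv("votes.csv")  # hypothetical file

# Cross-tabulate project choices by voting mode and test for independence.
table = pd.crosstab(votes["mode"], votes["project_choice"])
chi2, p_value, dof, expected = chi2_contingency(table)

print(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```

In this setting, a finding of “no difference” corresponds to failing to detect a meaningful divergence between the two distributions, which is consistent with the result reported in the abstract above.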