Praising and Shaming in Civic Tech (or Reversed Nudging for Government Responsiveness) 

The other day, during a talk with researcher Tanya Lokot, I heard an interesting story from Russia. Disgusted with the state of their streets, activists started painting caricatures of government officials over potholes.


In the case of a central street in Saratov, the immediate response to one of these graffiti was this:  


Later on, following increased media attention – and some unexpected turnarounds – the pothole got fixed.

That reminded me of a recurring theme in conversations I have: whether praising and shaming matter in civic tech and, if so, to what extent. To stay with two classic examples, think of solutions such as FixMyStreet and SeeClickFix, through which citizens publicly report problems to the authorities.

Assuming governments do take action, what prompts them to do so? At a very basic level, three hypotheses are possible:

1) Governments take action because they gain access to distributed information about problems (of which they were supposedly unaware)

2) Governments take action due to the “naming and shaming” effect, avoiding being publicly perceived as unresponsive (and seeking praise for their actions)

3) Governments take action for both of the reasons above

Some could argue that hypothesis 3 is the most likely to be true, with some governments leaning more towards one reason to respond than others. Yet the problem is that we know very little about these hypotheses, if anything. In other words – to my knowledge – we do not know whether making reports through these platforms public makes any difference whatsoever when it comes to governments’ responsiveness. Some might consider this a useless academic exercise: as long as these tools work, who cares? But I would argue that the answer to that question matters a lot when it comes to the design of similar civic tech initiatives that aim to prompt governments to action.


Let’s suppose we find that, all else equal, governments are significantly more responsive to citizen reports when these are publicly displayed. This would matter both in terms of process and of technological design. In terms of process, for instance, civic tech initiatives would probably be more successful if they devoted part of their resources to amplifying the visibility of government action and inaction (e.g. through local media). From a technological standpoint, designers should devote substantially more effort to interfaces that maximize praising and shaming of governments based on their performance (e.g. rankings, highlighting pending reports). Conversely, we might find that publicizing reports has very little effect in terms of responsiveness. In that case, more work would be needed to figure out which other factors – beyond will and capacity – play a role in government responsiveness (e.g. quality of reports).
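
To make the ranking idea concrete, here is a minimal sketch of how a responsiveness ranking could be computed from citizen reports. It assumes a hypothetical report schema (fields such as jurisdiction and status), not any particular platform’s API:

```python
from collections import defaultdict

# Hypothetical report records; real platforms expose different schemas.
reports = [
    {"jurisdiction": "City A", "status": "fixed"},
    {"jurisdiction": "City A", "status": "pending"},
    {"jurisdiction": "City B", "status": "fixed"},
    {"jurisdiction": "City B", "status": "fixed"},
    {"jurisdiction": "City B", "status": "pending"},
]

def responsiveness_ranking(reports):
    """Rank jurisdictions by the share of reports marked as fixed."""
    counts = defaultdict(lambda: {"fixed": 0, "pending": 0})
    for report in reports:
        status = "fixed" if report["status"] == "fixed" else "pending"
        counts[report["jurisdiction"]][status] += 1
    ranking = []
    for place, c in counts.items():
        total = c["fixed"] + c["pending"]
        ranking.append((place, c["fixed"] / total, c["pending"]))
    # Highest fix rate first; pending counts support "shaming" displays.
    return sorted(ranking, key=lambda item: item[1], reverse=True)

for place, fix_rate, pending in responsiveness_ranking(reports):
    print(f"{place}: {fix_rate:.0%} fixed, {pending} report(s) still pending")
```

A real interface would obviously need to account for report volumes, time-to-fix, and report quality, but the broader point is that the data needed for praise-and-shame displays is already being collected by these platforms.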

Most likely, the effects of praising and shaming would depend on a number of factors such as political competition, bureaucratic autonomy, and internal performance routines. But a finer understanding of that would have an impact not only on the civic tech field, but across the whole accountability landscape. To date, we know very little about it. Yet one of the untapped potentials of civic technology is precisely that of conducting experiments at lower cost. For instance, conducting randomized controlled trials on the effects of publicizing government responsiveness should not be so complicated (e.g. effects of rankings, amplified visibility of unfixed problems). Add to that the analysis of existing systems’ data from civic tech platforms, and some good qualitative work, and we might get a lot closer to figuring out what makes politicians and civil servants “tick”.
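
To illustrate how simple the core logic of such a trial could be, here is a toy sketch: reports are randomly assigned to a “publicized” or a “private” condition, and fix rates are then compared across the two arms. The fix probabilities below are invented purely for illustration; this is not a description of any existing study:

```python
import random

random.seed(42)

def run_trial(n_reports=1000, p_fix_public=0.55, p_fix_private=0.45):
    """Toy simulation of a publicize-vs-not trial; probabilities are made up."""
    arms = {"publicized": [], "private": []}
    for _ in range(n_reports):
        arm = random.choice(["publicized", "private"])  # randomization step
        p_fix = p_fix_public if arm == "publicized" else p_fix_private
        arms[arm].append(1 if random.random() < p_fix else 0)
    fix_rates = {arm: sum(out) / len(out) for arm, out in arms.items()}
    return fix_rates, fix_rates["publicized"] - fix_rates["private"]

fix_rates, effect = run_trial()
print(f"Fix rate (publicized reports): {fix_rates['publicized']:.1%}")
print(f"Fix rate (private reports):    {fix_rates['private']:.1%}")
print(f"Estimated effect of publicizing: {effect:+.1%}")
```

In a real trial the unit of randomization (report, neighborhood, or jurisdiction) and the estimation strategy would need far more care, but the infrastructure for assignment and outcome tracking is already built into these platforms.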

Until now, behavioral economics in public policy has been mainly about nudging citizens toward preferred choices. Yet it may be time to start also working in the opposite direction, nudging governments to be more responsive to citizens. Understanding whether praising and shaming works (and if so, how and to what extent) would be an important step in that direction.


Also re-posted on Civicist.

Three New Papers (and a presentation) on Civic Tech


This blog has been slow lately, but as I mentioned before, it is for a good cause. With some great colleagues, I have been working on a series of papers (and a book) on civic technology. The first three of these papers are out. There is much more to come, but in the meantime, you can find below the abstracts and links to each of the papers. I have also added a link to a presentation that highlights some other issues we are looking at.

  • Effects of the Internet on Participation: Study of a Public Policy Referendum in Brazil.

Does online voting mobilize citizens who otherwise would not participate? During the annual participatory budgeting vote in the southern state of Rio Grande do Sul in Brazil – the world’s largest – Internet voters were asked whether they would have participated had there not been an online voting option (i-voting). The study documents an 8.2 percent increase in total turnout with the introduction of i-voting. In support of the mobilization hypothesis, unique survey data show that i-voting is mainly used by new participants rather than just for convenience by those who were already mobilized. The study also finds that age, gender, income, education, and social media usage are significant predictors of being online-only voters. Technology appears more likely to engage people who are younger, male, of higher income and educational attainment, and more frequent social media users.

Read more here.

  • The Effect of Government Responsiveness on Future Political Participation.

What effect does government responsiveness have on political participation? Since the 1940s political scientists have used attitudinal measures of perceived efficacy to explain participation. More recent work has focused on underlying genetic factors that condition citizen engagement. We develop a ‘Calculus of Participation’ that incorporates objective efficacy – the extent to which an individual’s participation actually has an impact – and test the model against behavioral data from FixMyStreet (n=399,364). We find that a successful first experience using FixMyStreet (e.g. reporting a pothole and having it fixed) is associated with a 54 percent increase in the probability of an individual submitting a second report. We also show that the experience of government responsiveness to the first report submitted has predictive power over all future report submissions. The findings highlight the importance of government responsiveness for fostering an active citizenry, while demonstrating the value of incidentally collected data to examine participatory behavior at the individual level.

Read more here.

  • Do Mobile Phone Surveys Work in Poor Countries? 

In this project, we analyzed whether mobile phone-based surveys are a feasible and cost-effective approach for gathering statistically representative information in four low-income countries (Afghanistan, Ethiopia, Mozambique, and Zimbabwe). Specifically, we focused on three primary research questions. First, can the mobile phone survey platform reach a nationally representative sample? Second, to what extent does linguistic fractionalization affect the ability to produce a representative sample? Third, how does monetary compensation affect survey completion patterns? We find that samples from countries with higher mobile penetration rates more closely resembled the actual population. After weighting on demographic variables, sample imprecision was a challenge in the two lower-feasibility countries (Ethiopia and Mozambique), with a sampling error of +/- 5 to 7 percent, while Zimbabwe’s estimates were more precise (sampling error of +/- 2.8 percent). Surveys performed reasonably well in reaching poor demographics, especially in Afghanistan and Zimbabwe. Rural women were consistently under-represented in the country samples, especially in Afghanistan and Ethiopia. Countries’ linguistic fractionalization may influence the ability to obtain nationally representative samples, although a material effect was difficult to discern through penetration rates and market composition. Although the experimental design of the incentive compensation plan was compromised in Ethiopia and Zimbabwe, it seems that offering compensation for survey completion mitigated attrition rates in several of the pilot countries while not reducing overall costs. These effects varied across countries and cultural settings.

Read more here.

  • The haves and the have-nots: is civic tech impacting the people who need it most? (presentation)

Read more here.

Citizen Engagement: Two Learning Opportunities

For those who want to learn more about citizen engagement, here are two opportunities worth checking out.

The first one is the World Bank’s MOOC on Citizen Engagement. Even though the course has already started, it is still possible to enroll. Here’s a brief description of the course:

The 4-week course brings together a diverse range of experts to provide students with a comprehensive overview of citizen engagement. It begins by synthesizing the theories and concepts that underlie citizen engagement, and goes on to explore how citizens can be engaged in both policymaking and public service delivery. Finally, it investigates how recent innovations are shaking up the field, through detailing both successes and failures of these new approaches. Our presenters, leaders in academia, government, and civil society, provide a wide range of perspectives and real-world experience to give participants a deeper understanding of whether citizen engagement can truly enhance the process of development. Participants will also have the opportunity to collaborate with one another and design their own citizen engagement initiatives, thereby putting theories learned in the course into practice.

Module 2 starts this week, featuring Matt Leighninger, Tina Nabatchi, Beth Noveck, and myself:

This week explores the role that citizens can play in actively shaping public policy. We start by examining how citizens participate, analyzing the differences between ‘thick’ and ‘thin’ forms of engagement and asking strategic questions such as who should participate, how should participants interact with decision makers, what information do participants need, and how will participation impact policy decisions. Next, we survey examples of crowdsourcing and open innovation that are helping governments and citizens better interact. Finally, we unpack why citizens participate, moving beyond the mere calculation of costs and benefits described in the rational choice model to an analysis of broader factors that influence participation.



The second opportunity is the coaching program on Citizen Engagement by the GovLab Academy, with Beth Noveck and myself. Here’s the description of the program:

This program is designed to help those wishing to integrate citizen engagement into ongoing projects. Whether these are policymaking or service delivery in nature, we start from the assumption that engaging citizens is both a more effective and a more legitimate way of working. Engagement may be offline as well as online, and local or widely distributed. But, in every case, teams should have a clear sense of the problem they are trying to solve, the rationale for why they believe greater openness to and collaboration with citizens can have a positive impact, and a willingness to measure impact. Convened by two practitioners/theorists of citizen engagement, the program will emphasize peer-to-peer coaching and introductions to relevant mentors and experts from around the world working on related problems or applying similar methods. Our goal? To help take more citizen engagement projects from idea to implementation. Everyone is invited to apply. There will be an admissions preference for those working at the city level.

There are a number of other awesome courses provided by the GovLab: you can check all of them here.

New Evidence that Citizen Engagement Increases Tax Revenues

pic by Tax Credits on flickr

Quite a while ago, drawing mainly from the literature on tax morale, I posted about the evidence on the relationship between citizen engagement and tax revenues, which suggests that participatory processes lead to increased tax compliance (as a side note, I am still surprised by how many of those working with citizen engagement are unaware of this evidence).

Until very recently this evidence was based on observational studies, both qualitative and quantitative. Now we have – to my knowledge – the first experimental evidence linking citizen participation and tax compliance. A new working paper by Diether Beuermann and Maria Amelina presents the results of a randomized experiment in Russia, described in the abstract below:

This paper provides the first experimental evaluation of the participatory budgeting model showing that it increased public participation in the process of public decision making, increased local tax revenues collection, channeled larger fractions of public budgets to services stated as top priorities by citizens, and increased satisfaction levels with public services. These effects, however, were found only when the model was implemented in already-mature administratively and politically decentralized local governments. The findings highlight the importance of initial conditions with respect to the decentralization context for the success of participatory governance.

In my opinion, this paper is important for a number of reasons, some of which are worth highlighting here. First, it adds substantial support to the evidence of a positive relationship between citizen engagement and tax revenues. Second, in contrast to studies suggesting that participatory innovations are most likely to work when they are “organic”, or “bottom-up”, this paper shows how external actors can induce the implementation of successful participatory experiences. Third, I could not help but notice that two commonplace explanations for the success of citizen engagement initiatives, “strong civil society” and “political will”, do not feature in the study as prominent success factors. Last, but not least, the paper draws attention to how institutional settings matter (i.e. decentralization). Here, the jack-of-all-trades (yet not very useful) “context matters” could easily be replaced by “institutions matter”.

You can read the full paper here [PDF].

DemocracySpot’s Most Read Posts in 2014

Glasses for reading (1936) – Nationaal Archief

(I should have posted this on the 31st, but better late than never)

Below are some of the most read posts in 2014. While I’m at it, I’ll take the opportunity to explain the reduced number of posts in the last few months. Since mid-2014 I have been working with a small team of political and data scientists on a number of research questions at the intersection of technology and citizen engagement (I presented a few preliminary findings here). Following the period of field work, data collection and experiments, we have now started the drafting and peer-review stage of our research. This has been an extremely time-consuming process, which has taken up most of my weekends, when I generally write for this blog.

Still, one of my new year’s resolutions is precisely to better discipline myself to post more regularly. And I am hopeful that the publication of our upcoming research will make up for the recent reduction in posts. We will start to disseminate our results soon, so stay tuned.

In the meantime, here’s a selection of the five most read posts in 2014.

The Problem with Theory of Change

Technology and Citizen Engagement: Friend or Foe? 

A Brilliant Story of Participation, Technology and Development Outcomes

When Citizen Engagement Saves Lives (and what we can learn from it) 

Social Accountability: What Does the Evidence Really Say?

13 Citizen Engagement Stories from Around the World

Orçamento Participativo (Participatory Budgeting) 2015/2016 opens in the East region

The Journal of Field Actions, together with Civicus, has just published a special issue, “Stories of Innovative Democracy at the Local Level: Enhancing Participation, Activism and Social Change Across the World.” Taken together, the 13 articles provide a lively illustration of the wealth of democratic innovations taking place around the world.

Now the paper: Evidence of Social Accountability Initiatives


A little while ago I wrote about Jonathan Fox’s work on the evidence of social accountability initiatives. Initially available as a PDF slide presentation, it has now been turned into a magnificent paper, the first in the GPSA working paper series. Below is the abstract:

Policy discussion of social accountability initiatives has increasingly focused on questions about their tangible development impacts. The empirical evidence is mixed. This meta-analysis rethinks some of the most influential evaluations through a new lens: the distinction between tactical and strategic approaches to the promotion of citizen voice to contribute to improved public sector performance. Field experiments tend to study bounded, tactical interventions that rely on optimistic assumptions about the power of information alone both to motivate collective action and to influence public sector performance. More promising results emerge from studies of multi-pronged strategies that encourage enabling environments for collective action and bolster state capacity to actually respond to citizen voice. This reinterpretation of the empirical evidence leads to a proposed new series of grounded propositions that focus on state-society synergy and sandwich strategies through which ‘voice’ and ‘teeth’ can become mutually empowering.

You can download the paper here: Social Accountability: What Does the Evidence Really Say [PDF]. You can also read my take on the main lessons from Jonathan’s work here. Enjoy the read.


PS: I have been away for a while doing field work, but hope to start posting (more or less) regularly soon.