Praising and Shaming in Civic Tech (or Reversed Nudging for Government Responsiveness) 

The other day, during a talk with researcher Tanya Lokot, I heard an interesting story from Russia. Disgusted with the state of their streets, activists started painting caricatures of government officials over potholes.


In the case of a central street in Saratov, the immediate response to one of these graffiti was this:  


Later on, following increased media attention – and some unexpected turnarounds – the pothole got fixed.

That reminded me of a recurrent theme in some of my conversations: whether praising and shaming matter to civic tech and, if so, to what extent. To stay with two classic examples, think of solutions such as FixMyStreet and SeeClickFix, through which citizens publicly report problems to the authorities.

Assuming governments do take action, what prompts them to do so? At a very basic level, three hypotheses are possible:

1) Governments take action based on their access to distributed information about problems (of which they are supposedly unaware)

2) Governments take action due to the “naming and shaming” effect, avoiding being publicly perceived as unresponsive (and seeking praise for their actions)

3) Governments take action for both of the reasons above

Some could argue that hypothesis 3 is the most likely to be true, with some governments leaning more towards one reason to respond than the other. Yet the problem is that we know very little about these hypotheses, if anything. In other words – to my knowledge – we do not know whether making reports through these platforms public makes any difference whatsoever to governments’ responsiveness. Some might consider this a useless academic exercise: as long as these tools work, who cares? But I would argue that the answer to that question matters a lot when it comes to the design of similar civic tech initiatives that aim to prompt government to action.


Let’s suppose we find that, all else equal, governments are significantly more responsive to citizen reports when these are publicly displayed. This would matter both in terms of process and of technological design. In terms of process, for instance, civic tech initiatives would probably be more successful if they devoted part of their resources to amplifying the visibility of government action and inaction (e.g. through local media). From a technological standpoint, designers should devote substantially more effort to interfaces that maximize praising and shaming of governments based on their performance (e.g. rankings, highlighting pending reports). Conversely, we might find that publicizing reports has very little effect on responsiveness. In that case, more work would be needed to figure out which other factors – beyond will and capacity – play a role in government responsiveness (e.g. quality of reports).
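To make the “rankings” idea a bit more concrete, here is a minimal sketch of what such an interface could be computed from: agencies ordered by how often, and how quickly, they resolve the reports assigned to them. The data file and column names (agency, resolved, days_open, report_id) are hypothetical, not taken from any existing platform.

```python
# Hypothetical "praise and shame" ranking an interface might display:
# agencies ordered by their rate of resolved reports and typical delay.
import pandas as pd

reports = pd.read_csv("citizen_reports.csv")  # hypothetical export

ranking = (
    reports.groupby("agency")
    .agg(resolved_rate=("resolved", "mean"),        # share of reports resolved
         median_days_open=("days_open", "median"),  # how long reports sit open
         total_reports=("report_id", "count"))
    .sort_values("resolved_rate", ascending=False)
)
print(ranking.head(10))
```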

Most likely, the effect of praising and shaming would depend on a number of factors such as political competition, bureaucratic autonomy, and internal performance routines. But a finer understanding of this would have an impact not only on the civic tech field but across the whole accountability landscape. To date, we know very little about it. Yet one of the untapped potentials of civic technology is precisely that of conducting experiments at lowered costs. For instance, conducting randomized controlled trials on the effects of publicizing government responsiveness should not be so complicated (e.g. effects of rankings, amplifying the visibility of unfixed problems). Add to that analysis of the data already generated by civic tech platforms, and some good qualitative work, and we might get a lot closer to figuring out what makes politicians and civil servants “tick”.
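As an illustration of how light-touch the analysis of such a trial could be, here is a minimal sketch that compares response rates between reports randomly assigned to be publicly visible and reports kept private. The data file and column names (arm, resolved_within_30d) are invented for the example and do not come from any existing platform.

```python
# Hypothetical analysis of a simple randomized experiment: reports are randomly
# assigned to a publicly visible ("public") or non-public ("private") arm, and
# we compare government response rates across the two arms.
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

reports = pd.read_csv("reports_experiment.csv")  # hypothetical export

# Resolved counts and totals per experimental arm
counts = reports.groupby("arm")["resolved_within_30d"].agg(["sum", "count"])
public, private = counts.loc["public"], counts.loc["private"]

stat, p_value = proportions_ztest(
    count=[public["sum"], private["sum"]],     # resolved reports per arm
    nobs=[public["count"], private["count"]],  # total reports per arm
)

print(f"Response rate (public):  {public['sum'] / public['count']:.2%}")
print(f"Response rate (private): {private['sum'] / private['count']:.2%}")
print(f"Two-proportion z-test: z = {stat:.2f}, p = {p_value:.3f}")
```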

Until now, behavioral economics in public policy has been mainly about nudging citizens toward preferred choices. Yet it may be time to start also working in the opposite direction, nudging governments to be more responsive to citizens. Understanding whether praising and shaming works (and if so, how and to what extent) would be an important step in that direction.


Also re-posted on Civicist.

New Book on 25 Years of Participatory Budgeting


A little while ago I mentioned the launch of the Portuguese version of the book organized by Nelson Dias, “Hope for Democracy: 25 Years of Participatory Budgeting Worldwide”.

The good news is that the English version is finally out. Here’s an excerpt from the introduction:

This book represents the effort of more than forty authors, and many other direct and indirect contributions spread across different continents, seeking to provide an overview of Participatory Budgeting (PB) in the world. They do so from different backgrounds. Some are researchers, others are consultants, and others are activists connected to several groups and social movements. The texts reflect this diversity of approaches and perspectives well, and we do not try to influence that.


The pages that follow are an invitation to a fascinating journey on the path of democratic innovation in very diverse cultural, political, social and administrative settings. From North America to Asia, Oceania to Europe, from Latin America to Africa, the reader will find many reasons to closely follow the proposals of the different authors.

The book can be downloaded here [PDF]. I had the pleasure of being one of the book’s contributors, co-authoring an article with Rafael Sampaio on the use of ICT in PB processes: “Electronic Participatory Budgeting: False Dilemmas and True Complexities” [PDF].

While my perception may be biased, I believe this book will be a major contribution for researchers and practitioners in the field of participatory budgeting and citizen engagement in general. Congratulations to Nelson Dias and all the others who contributed their time and energy.

Social Accountability: What Does the Evidence Really Say?

So what does the evidence about citizen engagement say? Particularly in the development world it is common to say that the evidence is “mixed”. It is the type of answer that, even if correct in extremely general terms, does not really help those who are actually designing and implementing citizen engagement reforms.

This is why a new (GPSA-funded) work by Jonathan Fox, “Social Accountability: What Does the Evidence Really Say?”, is a welcome contribution for those working with open government in general and citizen engagement in particular. Rather than a paper, this work is intended as a presentation that summarizes (and disentangles) some of the issues related to citizen engagement.

Before briefly discussing it, some definitional clarification: I am equating “social accountability” with the idea of citizen engagement, given Jonathan’s very definition of social accountability:

“Social accountability strategies try to improve public sector performance by bolstering both citizen engagement and government responsiveness”

In short, according to this definition, social accountability is defined, broadly, as “citizen participation” followed by government responsiveness, which encompasses practices as distinct as FOI law campaigns, participatory budgeting and referenda.

But what is new about Jonathan’s work? A lot, but here are three points I find particularly important, based on a very personal reading of it.

First, Jonathan makes an important distinction between what he defines as “tactical” and “strategic” social accountability interventions. The first type of intervention, which could also be called “naïve”, includes, for instance, those that are bounded in their approach (based on a single tool) and those that assume that mere access to information (or data) is enough. Conversely, strategic approaches aim to deploy multiple tools and to articulate society-side efforts with governmental reforms that promote responsiveness.

This distinction is important because, when examining the impact evaluation evidence, one finds that while the evidence is indeed mixed for tactical approaches, it is much more promising for strategic approaches. A blunt lesson to take from this is that when looking at the evidence, one should avoid comparing lousy initiatives with more substantive reform processes. Otherwise, it is no wonder that “the evidence is mixed.”

Second, this work offers an important re-reading of some of the literature that has found “mixed effects”, reminding us that when it comes to citizen engagement, the devil is in the details. For instance, in a number of studies that seem to say that participation does not work, a closer look leaves you unsurprised that it did not: many times the problem is precisely that there was no real participation whatsoever. False negatives, as eloquently put by Jonathan.

Third, Jonathan highlights the need to bring together the “demand” (society) and “supply” (government) sides of governance. Many accountability interventions seem to assume that it is enough to work on one side or the other, and that an invisible hand will bring them together. Unfortunately, when it comes to social accountability it seems that some degree of “interventionism” is necessary in order to bridge that gap.

Of course, there is much more in Jonathan’s work than that, and it is a must read for those interested in the subject. You can download it here [PDF].

When Citizen Engagement Saves Lives (and what we can learn from it)

When it comes to the relationship between participatory institutions and development outcomes, participatory budgeting stands out as one of the best examples out there. For instance, in a paper recently published in World Development, Sonia Gonçalves finds that municipalities that adopted participatory budgeting in Brazil “favoured an allocation of public expenditures that closely matched the popular preferences and channeled a larger fraction of their total budget to key investments in sanitation and health services.” As a consequence, the author also finds that this change in the allocation of public expenditures “is associated with a pronounced reduction in the infant mortality rates for municipalities which adopted participatory budgeting.”

Evolution of the share of expenditures in health and sanitation compared between adopters and non-adopters of participatory budgeting (Gonçalves 2013).

Now, in an excellent new article published in Comparative Political Studies, the authors Michael Touchton and Brian Wampler come up with similar findings (abstract):

We evaluate the role of a new type of democratic institution, participatory budgeting (PB), for improving citizens’ well-being. Participatory institutions are said to enhance governance, citizens’ empowerment, and the quality of democracy, creating a virtuous cycle to improve the poor’s well-being. Drawing from an original database of Brazil’s largest cities over the last 20 years, we assess whether adopting PB programs influences several indicators of well-being inputs, processes, and outcomes. We find PB programs are strongly associated with increases in health care spending, increases in civil society organizations, and decreases in infant mortality rates. This connection strengthens dramatically as PB programs remain in place over longer time frames. Furthermore, PB’s connection to well-being strengthens in the hands of mayors from the nationally powerful, ideologically and electorally motivated Workers’ Party. Our argument directly addresses debates on democracy and well-being and has powerful implications for participation, governance, and economic development.

When put together, these findings provide a compelling response to those who – often unfamiliar with the literature – question the effectiveness of participatory governance institutions. Surely, more research is needed, and different citizen engagement initiatives (and contexts) may lead to different results.

But these articles also bring another important takeaway for those working with development and public sector reform: the need to consider that participatory institutions (like most institutional reforms) may take time to produce desirable/noticeable effects. As noted by Touchton and Wampler:

The relationships we describe between PB and health and sanitation spending, PB and CSOs, and PB and health care outcomes in this section are greater in magnitude and stronger in statistical significance for municipalities that have used PB for a longer period of time. Municipalities using PB for less than 4 years do exhibit lower infant mortality rates than municipalities that never adopted PB. However, there is no statistically significant difference in spending on health care and sanitation between municipalities using PB for less than 4 years and municipalities that never adopted the program. This demonstrates the benefits from adopting PB are not related to low-hanging fruit, but built over a great number of years. Our results imply PB is associated with long-term institutional and political change—not just short-term shifts in funding priorities.
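Purely as an illustration of the kind of comparison described above – and not the authors’ actual specification – a fixed-effects sketch along these lines could look as follows. The municipality-year panel, file name, and variable names are all hypothetical.

```python
# Illustrative (not the authors') fixed-effects regression: infant mortality as a
# function of how long PB has been in place, absorbing municipality and year
# effects, with standard errors clustered by municipality.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("pb_municipal_panel.csv")  # hypothetical municipality-year panel

model = smf.ols(
    "infant_mortality ~ years_with_pb + C(municipality) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["municipality"]})

print(model.summary().tables[1])
```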

While over the years participatory budgeting has produced evidence of its effectiveness on a number of fronts (e.g. pro-poor spending), it is only 25 years after its first implementation in Brazil that we start to see systematic evidence of sound development outcomes such as a reduction in infant mortality. In other words, rushing to draw conclusions at early stages of participatory governance interventions may result in misleading assessments. Even worse, it may lead to discontinuing efforts that are yet to bear fruit in the medium and longer terms.

10 Most Read Posts in 2013

Below is a selection of the 10 most read posts at DemocracySpot in 2013. Thanks to all of those who stopped by throughout the year, and happy 2014.

1. Does transparency lead to trust? Some evidence on the subject.

2. The Foundations of Motivation for Citizen Engagement

3. Open Government, Feedback Loops, and Semantic Extravaganza

4. Open Government and Democracy

5. What’s Wrong with e-Petitions and How to Fix them

6. Lawrence Lessig on Sortition and Citizen Participation

7. Unequal Participation: Open Government’s Unresolved Dilemma

8. The Effect of SMS on Participation: Evidence from Uganda

9. The Uncertain Relationship Between Open Data and Accountability

10. Lisbon Revisited: Notes on Participation

Open Budgets in Africa: Tokenistic?

Matt Andrews recently posted an interesting analysis on his blog. Measuring the difference in transparency between budget formulation and budget execution, Matt finds that “Most countries have a gap between the scores they get in transparency of budget preparation and transparency of budget execution. Indeed, 63% of the countries have more transparency in budget formulation than in budget execution.” And he concludes that “countries with higher OBI scores tend to have relatively bigger gaps than the others—so that I am led to believe that countries generally focus on improving transparency in formulation to get better scores (with efforts to make execution getting less attention).” He has also written a second post about it, and the IBP folks have replied to him here.
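For readers who want a feel for the exercise, here is a minimal sketch of the gap calculation being described; the Open Budget Survey extract and its column names are assumptions for illustration, not the actual dataset layout or Matt’s code.

```python
# Per-country gap between formulation-stage and execution-stage transparency
# scores, the share of countries with a positive gap, and the gap's
# correlation with the overall OBI score. Column names are assumed.
import pandas as pd

obi = pd.read_csv("open_budget_survey.csv")  # hypothetical extract

obi["gap"] = obi["formulation_score"] - obi["execution_score"]

share_positive = (obi["gap"] > 0).mean()
print(f"Countries more transparent in formulation than in execution: {share_positive:.0%}")

print("Correlation between overall OBI score and gap:",
      round(obi["overall_obi_score"].corr(obi["gap"]), 2))
```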


Also read

Open Government and Democracy 

The Uncertain Relationship Between Open Data and Accountability