Could Corruption Be Good For Your Health? (or Side Effects of Anti-Corruption Efforts)

The literature on corruption is disputed territory, and one that is full of surprises. On one side, a number of scholars and development practitioners follow the traditional understanding, arguing that corruption is an evil to be eradicated at any cost. On the other, some scholars and practitioners see corruption as an “informal tax” that mediates access to goods and services in contexts of poor institutions and policies, commonly found in the early stages of development. In other words, corruption is a symptom, rather than a problem, with some even arguing that corruption may generate efficiencies in certain contexts: the so-called “greasing-the-wheels hypothesis.”

If these differences in perspective were not enough, a new paper adds more nuance to the debate and challenges conventional wisdom. It looks at Brazil’s anti-corruption program, launched in 2003, which consists of a series of random audits by the federal government to assess whether municipalities effectively spend earmarked federal transfers according to pre-established guidelines. The results of the audits are then disseminated to the public, with auditors engaging with local councils and civil society to encourage them to monitor tax revenues. The program became famous in development and anti-corruption circles, in great part thanks to an earlier paper by Ferraz and Finan (2008), which found that “the release of the audit outcomes had a significant impact on incumbents’ electoral performance, and that these effects were more pronounced in municipalities where local radio was present to divulge the information.”

But if citizens, when they know about corruption, are more likely to vote corrupt politicians out of office, what is the effect of these audits on the quality of service delivery? This is the question that Guilherme Lichand, Marcos Lopes and Marcelo Medeiros (2016) try to answer in a new paper entitled “Is Corruption Good For Your Health?”. Below is the abstract of the paper (highlights are mine):

While corruption crackdowns have been shown to effectively reduce missing government expenditures, their effects on public service delivery have not been credibly documented. This matters because, if corruption generates incentives for bureaucrats to deliver those services, then deterring it might actually hurt downstream outcomes. This paper exploits variation from an anti-corruption program in Brazil, designed by the federal government to enforce guidelines on earmarked transfers to municipalities, to study this question. Combining random audits with a differences-in-differences strategy, we find that the anti-corruption program greatly reduced occurrences of over-invoicing and off-the-record payments, and of procurement manipulation within health transfers. However, health indicators, such as hospital beds and immunization coverage, became worse as a result. Evidence from audited amounts suggests that lower corruption came at a high cost: after the program, public spending fell by so much that corruption per dollar spent actually increased. These findings are consistent with those responsible for procurement dramatically reducing purchases after the program, either because they no longer can capture rents, or because they are afraid of being punished for procurement mistakes.
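For readers less familiar with the empirical strategy mentioned in the abstract, here is a minimal sketch of what a generic difference-in-differences estimation of this kind might look like. It is only an illustration under assumed variable and file names (a hypothetical municipality-year panel with an audit indicator and a post-period dummy), not the authors’ actual specification or data.

```python
# A minimal, purely illustrative difference-in-differences sketch.
# All column and file names below are hypothetical; this is NOT the
# authors' actual specification or data.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical municipality-year panel with columns:
#   outcome         - e.g. immunization coverage
#   audited         - 1 if the municipality was randomly audited, 0 otherwise
#   post            - 1 for years after the audit wave, 0 before
#   municipality_id - identifier used to cluster standard errors
df = pd.read_csv("municipality_panel.csv")

# The coefficient on audited:post is the difference-in-differences estimate:
# the change in the outcome for audited municipalities relative to the change
# for non-audited ones over the same period.
model = smf.ols("outcome ~ audited * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["municipality_id"]}
)
print(model.summary())
```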

The paper’s final discussion is no less provocative. An excerpt below:

(…)  While the Brazilian anti-corruption program represents a major improvement in monitoring and transparency, the focus of administrative penalties and of public opinion on corruption, instead of on the quality of public services, all seem to have thrown the baby out with the bathwater. These findings suggest that policies that expand the scope of desirable outcomes beyond formal procedures, that differentiate between active and passive waste, and that support local procurement staff in complying with complex guidelines might be important steps towards balancing incentives between procuring and making proper use of public funds.

Given that many other governance/accountability interventions traditionally focus on corruption rather than on the performance of services delivered, practitioners should take note of these findings. In the meantime, the debate on corruption and development gets some good extra fuel.

You can download the paper here [PDF].

Additional resources:

Ferraz, C., & Finan, F. (2008). Exposing Corrupt Politicians: The Effects of Brazil’s Publicly Released Audits on Electoral Outcomes. Quarterly Journal of Economics, 123(2), 703-745. (ungated version)

Avis, E., Ferraz, C., & Finan, F. (2016). Do Government Audits Reduce Corruption? Estimating the Impacts of Exposing Corrupt Politicians (No. w22443). National Bureau of Economic Research.

Dreher, A., & Gassebner, M. (2013). Greasing the wheels? The impact of regulations and corruption on firm entry. Public Choice, 155(3-4), 413-432.

Aidt, T. S. (2009). Corruption, institutions, and economic development. Oxford Review of Economic Policy, 25(2), 271-291.

Méon, P. G., & Weill, L. (2010). Is corruption an efficient grease? World Development, 38(3), 244-259.

Development Drums Podcast: Daniel Kaufmann and Mushtaq Khan debate the role and importance of tackling corruption as part of a development strategy.

New IDS Journal – 9 Papers in Open Government


The new IDS Bulletin is out. Edited by Rosemary McGee and Duncan Edwards, this is the first open access version of the well-known journal by the Institute of Development Studies. It brings eight new studies looking at a variety of open government issues, ranging from the uptake of digital platforms to government responsiveness in civic tech initiatives. Below is a brief presentation of this issue:

Open government and open data are new areas of research, advocacy and activism that have entered the governance field alongside the more established areas of transparency and accountability. In this IDS Bulletin, articles review recent scholarship to pinpoint contributions to more open, transparent, accountable and responsive governance via improved practice, projects and programmes in the context of the ideas, relationships, processes, behaviours, policy frameworks and aid funding practices of the last five years. They also discuss questions and weaknesses that limit the effectiveness and impact of this work, offer a series of definitions to help overcome conceptual ambiguities, and identify hype and euphemism. The contributions – by researchers and practitioners – approach contemporary challenges of achieving transparency, accountability and openness from a wide range of subject positions and professional and disciplinary angles. Together these articles give a sense of what has changed in this fast-moving field, and what has not – this IDS Bulletin is an invitation to all stakeholders to take stock and reflect.

The ambiguity around the ‘open’ in governance today might be helpful in that its very breadth brings in actors who would otherwise be unlikely adherents. But if the fuzzier idea of ‘open government’ or the allure of ‘open data’ displace the task of clear transparency, hard accountability and fairer distribution of power as what this is all about, then what started as an inspired movement of governance visionaries may end up merely putting a more open face on an unjust and unaccountable status quo.

Among others, the journal presents an abridged version of a paper by Jonathan Fox and myself on digital technologies and government responsiveness (for the full version, download here).

Below is a list of all the papers:

Rosie McGee, Duncan Edwards
Tiago Peixoto, Jonathan Fox
Katharina Welle, Jennifer Williams, Joseph Pearce
Miguel Loureiro, Aalia Cassim, Terence Darko, Lucas Katera, Nyambura Salome
Elizabeth Mills
Laura Neuman
David Calleb Otieno, Nathaniel Kabala, Patta Scott-Villiers, Gacheke Gachihi, Diana Muthoni Ndung’u
Christopher Wilson, Indra de Lanerolle
Emiliano Treré


World Development Report 2016: Digital Dividends


The World Development Report 2016, the main annual publication of the World Bank, is out. This year’s theme is Digital Dividends, examining the role of digital technologies in the promotion of development outcomes. The findings of the WDR are simultaneously encouraging and sobering. Those skeptical of the role of digital technologies in development might be surprised by some of the results presented in the report. Technology advocates from across the spectrum (civic tech, open data, ICT4D) will inevitably come across some facts that should temper their enthusiasm.

While some may disagree with the findings, this Report is an impressive piece of work, spread across six chapters covering different aspects of digital technologies in development: 1) accelerating growth, 2) expanding opportunities, 3) delivering services, 4) sectoral policies, 5) national priorities, 6) global cooperation. My opinion may be biased, as somebody who made some modest contributions to the Report, but I believe that, to date, this is the most thorough effort to examine the effects of digital technologies on development outcomes. The full report can be downloaded here.

The report draws, among other things, from 14 background papers that were prepared by international experts and World Bank staff. These background papers serve as additional reading for those who would like to examine certain issues more closely, such as social media, net neutrality, and the cybersecurity agenda.

For those interested in citizen participation and civic tech, one of the papers written by Prof. Jonathan Fox and myself – When Does ICT-Enabled Citizen Voice Lead to Government Responsiveness? – might be of particular interest. Below is the abstract:

This paper reviews evidence on the use of 23 information and communication technology (ICT) platforms to project citizen voice to improve public service delivery. This meta-analysis focuses on empirical studies of initiatives in the global South, highlighting both citizen uptake (‘yelp’) and the degree to which public service providers respond to expressions of citizen voice (‘teeth’). The conceptual framework further distinguishes between two trajectories for ICT-enabled citizen voice: Upwards accountability occurs when users provide feedback directly to decision-makers in real time, allowing policy-makers and program managers to identify and address service delivery problems – but at their discretion. Downwards accountability, in contrast, occurs either through real time user feedback or less immediate forms of collective civic action that publicly call on service providers to become more accountable and depends less exclusively on decision-makers’ discretion about whether or not to act on the information provided. This distinction between the ways in which ICT platforms mediate the relationship between citizens and service providers allows for a precise analytical focus on how different dimensions of such platforms contribute to public sector responsiveness. These cases suggest that while ICT platforms have been relevant in increasing policymakers’ and senior managers’ capacity to respond, most of them have yet to influence their willingness to do so.

You can download the paper here.

Any feedback on our paper or the models proposed (see the figure below, for instance) would be extremely welcome.


[Figure: Unpacking user feedback and civic action – difference and overlap]

I also list below the links to all the background papers and their titles.

Enjoy the reading.

Praising and Shaming in Civic Tech (or Reversed Nudging for Government Responsiveness) 

The other day, during a talk with researcher Tanya Lokot, I heard an interesting story from Russia. Disgusted with the state of their streets, activists started painting caricatures of government officials over potholes.


In the case of a central street in Saratov, the immediate response to one of these caricatures was this:


Later on, following increased media attention – and some unexpected turnarounds – the pothole got fixed.

That reminded me of a recurrent theme in some of the conversations I have: whether praising and shaming matters in civic tech and, if so, to what extent. To stay with two classic examples, think of solutions such as FixMyStreet and SeeClickFix, through which citizens publicly report problems to the authorities.

Assuming governments do take action, what prompts them to do so? At a very basic level, three hypotheses are possible:

1) Governments take action based on their access to distributed information about problems (of which they are supposedly unaware)

2) Governments take action due to the “naming and shaming” effect, seeking to avoid being publicly perceived as unresponsive (and to gain praise for their actions)

3) Governments take action for both of the reasons above

Some could argue that hypothesis 3 is the most likely to be true, with some governments leaning more towards one reason to respond than the other. Yet the problem is that we know very little about these hypotheses, if anything. In other words – to my knowledge – we do not know whether making the reports submitted through these platforms public makes any difference whatsoever to governments’ responsiveness. Some might consider this a useless academic exercise: as long as these tools work, who cares? But I would argue that the answer to that question matters a lot for the design of civic tech initiatives that aim to prompt governments to action.


Let’s suppose we find that, all else equal, governments are significantly more responsive to citizen reports when these are publicly displayed. This would matter both for process and for technological design. In terms of process, for instance, civic tech initiatives would probably be more successful if they devoted part of their resources to amplifying the visibility of government action and inaction (e.g. through local media). From a technological standpoint, designers should devote substantially more effort to interfaces that maximize the praising and shaming of governments based on their performance (e.g. rankings, highlighting pending reports). Conversely, we might find that publicizing reports has very little effect on responsiveness. In that case, more work would be needed to figure out which other factors – beyond will and capacity – play a role in government responsiveness (e.g. the quality of reports).

Most likely, the effects of praising and shaming would depend on a number of factors, such as political competition, bureaucratic autonomy, and internal performance routines. But a finer understanding of this would have an impact not only on the civic tech field but across the whole accountability landscape. To date, we know very little about it. Yet one of the untapped potentials of civic technology is precisely the possibility of conducting experiments at lower cost. For instance, running randomized controlled trials on the effects of publicizing government responsiveness should not be so complicated (e.g. the effects of rankings, or of amplifying the visibility of unfixed problems). Add to that the analysis of existing data from civic tech platforms, plus some good qualitative work, and we might get a lot closer to figuring out what makes politicians and civil servants “tick”.
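To make the idea concrete, below is a minimal sketch of what such a trial could look like: incoming reports are randomly assigned to be publicly displayed or privately submitted, and resolution rates are compared after a follow-up period. Everything in it – report counts, resolution rates, and the assumed treatment effect – is simulated for illustration and is not drawn from any real platform.

```python
# Purely illustrative sketch of a simple randomized trial on report publicity.
# All data is simulated; nothing here is tied to a real civic tech platform.
import random
from statsmodels.stats.proportion import proportions_ztest

random.seed(42)

n_reports = 1000  # hypothetical incoming citizen reports
# Random assignment: roughly half the reports are publicly displayed (treatment).
public = [random.random() < 0.5 for _ in range(n_reports)]

# Simulated outcomes for illustration only: in a real trial, "resolved" would be
# observed from the platform's records after a follow-up period.
resolved = [random.random() < (0.35 if is_public else 0.25) for is_public in public]

fixed_public = sum(r for r, p in zip(resolved, public) if p)
fixed_private = sum(r for r, p in zip(resolved, public) if not p)
n_public = sum(public)
n_private = n_reports - n_public

# Compare resolution rates between the two groups.
stat, pvalue = proportions_ztest([fixed_public, fixed_private], [n_public, n_private])
print(f"Resolution rate (public):  {fixed_public / n_public:.2f}")
print(f"Resolution rate (private): {fixed_private / n_private:.2f}")
print(f"Two-proportion z-test p-value: {pvalue:.3f}")
```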

Until now, behavioral economics in public policy has been mainly about nudging citizens toward preferred choices. Yet it may be time to start also working in the opposite direction, nudging governments to be more responsive to citizens. Understanding whether praising and shaming works (and if so, how and to what extent) would be an important step in that direction.


Also re-posted on Civicist.

New Book on 25 Years of Participatory Budgeting


A little while ago I mentioned the launch of the Portuguese version of the book organized by Nelson Dias, “Hope for Democracy: 25 Years of Participatory Budgeting Worldwide”.

The good news is that the English version is finally out. Here’s an excerpt from the introduction:

This book represents the effort of more than forty authors and many other direct and indirect contributions that spread across different continents seek to provide an overview on the Participatory Budgeting (PB) in the World. They do so from different backgrounds. Some are researchers, others are consultants, and others are activists connected to several groups and social movements. The texts reflect this diversity of approaches and perspectives well, and we do not try to influence that.


The pages that follow are an invitation to a fascinating journey on the path of democratic innovation in very diverse cultural, political, social and administrative settings. From North America to Asia, Oceania to Europe, from Latin America to Africa, the reader will find many reasons to closely follow the proposals of the different authors.

The book can be downloaded here [PDF]. I had the pleasure of being one of the book’s contributors, co-authoring an article with Rafael Sampaio on the use of ICT in PB processes: “Electronic Participatory Budgeting: False Dilemmas and True Complexities” [PDF].

While my perception may be biased, I believe this book will be a major contribution for researchers and practitioners in the field of participatory budgeting and citizen engagement in general. Congratulations to Nelson Dias and all the others who contributed their time and energy.

Social Accountability: What Does the Evidence Really Say?

So what does the evidence about citizen engagement say? Particularly in the development world, it is common to say that the evidence is “mixed”. It is the type of answer that, even if correct in extremely general terms, does not really help those who are actually designing and implementing citizen engagement reforms.

This is why a new (GPSA-funded) work by Jonathan Fox, “Social Accountability: What Does the Evidence Really Say?”, is a welcome contribution for those working with open government in general and citizen engagement in particular. Rather than a paper, this work is intended as a presentation that summarizes (and disentangles) some of the issues related to citizen engagement.

Before briefly discussing it, some definitional clarification: I am equating “social accountability” with the idea of citizen engagement, given Jonathan’s own definition of social accountability:

“Social accountability strategies try to improve public sector performance by bolstering both citizen engagement and government responsiveness”

In short, according to this definition, social accountability is, broadly, “citizen participation” followed by government responsiveness, and it encompasses practices as distinct as FOI law campaigns, participatory budgeting and referenda.

But what is new about Jonathan’s work? A lot, but here are three points that I find particularly important, based on a very personal interpretation of his work.

First, Jonathan makes an important distinction between what he defines as “tactical” and “strategic” social accountability interventions. The first type, which could also be called “naïve” interventions, includes, for instance, those that are bounded in their approach (based on a single tool) and those that assume that mere access to information (or data) is enough. Conversely, strategic approaches aim to deploy multiple tools and to articulate society-side efforts with governmental reforms that promote responsiveness.

This distinction is important because, when examining the impact evaluation evidence, one finds that while the evidence is indeed mixed for tactical approaches, it is much more promising for strategic approaches. A blunt lesson to take from this is that when looking at the evidence, one should avoid comparing lousy initiatives with more substantive reform processes. Otherwise, it is no wonder that “the evidence is mixed.”

Second, this work offers an important re-reading of some of the literature that has found “mixed effects”, reminding us that when it comes to citizen engagement, the devil is in the details. For instance, in a number of studies that seem to say that participation does not work, a closer look leaves you unsurprised that the interventions fail: many times the problem is precisely that there is no real participation whatsoever. False negatives, as Jonathan eloquently puts it.

Third, Jonathan highlights the need to bring together the “demand” (society) and “supply” (government) sides of governance. Many accountability interventions seem to assume that it is enough to work on one side or the other, and that an invisible hand will bring them together. Unfortunately, when it comes to social accountability it seems that some degree of “interventionism” is necessary in order to bridge that gap.

Of course, there is much more to Jonathan’s work than that, and it is a must-read for those interested in the subject. You can download it here [PDF].