Catching up (again!) on DemocracySpot

It’s been a while since the last post here. To make up for it, it has not been a bad year in terms of getting research out there. First, we finally managed to publish “Civic Tech in the Global South: Assessing Technology for the Public Good.” With a foreword by Beth Noveck, the book is edited by Micah Sifry and myself, with contributions by Evangelia Berdou, Martin Belcher, Jonathan Fox, Matt Haikin, Claudia Lopes, Jonathan Mellon and Fredrik Sjoberg.

The book comprises one study and three field evaluations of civic tech initiatives in developing countries. The study reviews the evidence on twenty-three information and communication technology (ICT) platforms designed to amplify citizen voices to improve service delivery. Focusing on empirical studies of initiatives in the global south, the authors highlight both citizen uptake (yelp) and the degree to which public service providers respond to expressions of citizen voice (teeth). The first evaluation looks at U-Report in Uganda, a mobile platform that runs weekly large-scale polls with young Ugandans on a number of issues, ranging from access to education to early childhood development. The second evaluation takes a closer look at MajiVoice, an initiative that allows Kenyan citizens to report, through multiple channels, complaints regarding water services. The third evaluation examines the case of Rio Grande do Sul’s participatory budgeting – the world’s largest participatory budgeting system – which allows citizens to participate either online or offline in defining the state’s yearly spending priorities. While the comparative study has a clear focus on the dimension of government responsiveness, the evaluations examine civic technology initiatives through five distinct dimensions, or lenses. The choice of these lenses is the result of an effort that brought together researchers and practitioners to develop an evaluation framework suited to civic technology initiatives.

The book was a joint publication by The World Bank and Personal Democracy Press. You can download the book for free here.

Women create fewer online petitions than men — but they’re more successful

Another recent publication was a collaboration between Hollie R. Gilman, Jonathan Mellon, Fredrik Sjoberg and myself. By examining a dataset covering Change.org online petitions from 132 countries, we assess whether online petitions may help close the gap in participation and representation between women and men. Tony Saich, director of Harvard’s Ash Center for Democratic Innovation (publisher of the study), puts our research into context nicely:

The growing access to digital technologies has been considered by democratic scholars and practitioners as a unique opportunity to promote participatory governance. Yet, if the last two decades are the period in which connectivity has increased exponentially, it is also the moment in recent history that democratic growth has stalled and civic spaces have shrunk. While the full potential of “civic technologies” remains largely unfulfilled, understanding the extent to which they may further democratic goals is more pressing than ever. This is precisely the task undertaken in this original and methodologically innovative research. The authors examine online petitions which, albeit understudied, are one of the fastest growing types of political participation across the globe. Drawing from an impressive dataset of 3.9 million signers of online petitions from 132 countries, the authors assess the extent to which online participation replicates or changes the gaps commonly found in offline participation, not only with regard to who participates (and how), but also with regard to which petitions are more likely to be successful. The findings, at times counter-intuitive, provide several insights for democracy scholars and practitioners alike. The authors hope this research will contribute to the larger conversation on the need for citizen participation beyond electoral cycles, and the role that technology can play in addressing both new and persisting challenges to democratic inclusiveness.

But what do we find? Among other interesting things, we find that while women create fewer online petitions than men, they’re more successful at it! This article in the Washington Post summarizes some of our findings, and you can download the full study here.

Other studies that were recently published include:

The Effect of Bureaucratic Responsiveness on Citizen Participation (Public Administration Review)

Abstract:

What effect does bureaucratic responsiveness have on citizen participation? Since the 1940s, attitudinal measures of perceived efficacy have been used to explain participation. The authors develop a “calculus of participation” that incorporates objective efficacy—the extent to which an individual’s participation actually has an impact—and test the model against behavioral data from the online application FixMyStreet (n = 399,364). A successful first experience using FixMyStreet is associated with a 57 percent increase in the probability of an individual submitting a second report, and the experience of bureaucratic responsiveness to the first report submitted has predictive power over all future report submissions. The findings highlight the importance of responsiveness for fostering an active citizenry while demonstrating the value of incidentally collected data to examine participatory behavior at the individual level.

Does online voting change the outcome? Evidence from a multi-mode public policy referendum (Electoral Studies)

Abstract:

Do online and offline voters differ in terms of policy preferences? The growth of Internet voting in recent years has opened up new channels of participation. Whether or not political outcomes change as a consequence of new modes of voting is an open question. Here we analyze all the votes cast both offline (n = 5.7 million) and online (n = 1.3 million) and compare the actual vote choices in a public policy referendum, the world’s largest participatory budgeting process, in Rio Grande do Sul in June 2014. In addition to examining aggregate outcomes, we also conducted two surveys to better understand the demographic profiles of who chooses to vote online and offline. We find that policy preferences of online and offline voters are no different, even though our data suggest important demographic differences between offline and online voters.

We still plan to publish a few more studies this year, one looking at digitally-enabled get-out-the-vote (GOTV) efforts, and two others examining the effects of participatory governance on citizens’ willingness to pay taxes (including a fun experiment in 50 countries across all continents).

In the meantime, if you are interested in a quick summary of some of our recent research findings, this 30-minute video of my keynote at the last TicTEC Conference in Florence should be helpful.

New Evidence that Citizen Engagement Increases Tax Revenues

Quite a while ago, drawing mainly on the literature on tax morale, I posted about the evidence on the relationship between citizen engagement and tax revenues, which suggests that participatory processes lead to increased tax compliance (as a side note, I am still surprised by how many of those working on citizen engagement are unaware of this evidence).

Until very recently this evidence was based on observational studies, both qualitative and quantitative. Now we have – to my knowledge – the first experimental evidence linking citizen participation and tax compliance. A new working paper by Diether Beuermann and Maria Amelina presents the results of a randomized experiment in Russia, described in the abstract below:

This paper provides the first experimental evaluation of the participatory budgeting model showing that it increased public participation in the process of public decision making, increased local tax revenues collection, channeled larger fractions of public budgets to services stated as top priorities by citizens, and increased satisfaction levels with public services. These effects, however, were found only when the model was implemented in already-mature administratively and politically decentralized local governments. The findings highlight the importance of initial conditions with respect to the decentralization context for the success of participatory governance.

In my opinion, this paper is important for a number of reasons, some of which are worth highlighting here. First, it adds substantive support to the evidence of a positive relationship between citizen engagement and tax revenues. Second, in contrast to studies suggesting that participatory innovations are most likely to work when they are “organic” or “bottom-up”, this paper shows how external actors can induce the implementation of successful participatory experiences. Third, I could not help but notice that two commonplace explanations for the success of citizen engagement initiatives, “strong civil society” and “political will”, do not feature in the study as prominent success factors. Last, but not least, the paper draws attention to how institutional settings matter (i.e. decentralization). Here, the jack-of-all-trades (yet not very useful) “context matters” could easily be replaced by “institutions matter”.

You can read the full paper here [PDF].

Now the paper: Evidence of Social Accountability Initiatives

A little while ago I wrote about Jonathan Fox’s work on the evidence of social accountability initiatives. Initially in the format of a PDF slide presentation, it has now been turned into a magnificent paper, the first of the GPSA working paper series. Below is the abstract:

Policy discussion of social accountability initiatives has increasingly focused on questions about their tangible development impacts. The empirical evidence is mixed. This meta-analysis rethinks some of the most influential evaluations through a new lens: the distinction between tactical and strategic approaches to the promotion of citizen voice to contribute to improved public sector performance. Field experiments tend to study bounded, tactical interventions that rely on optimistic assumptions about the power of information alone both to motivate collective action and to influence public sector performance. More promising results emerge from studies of multi-pronged strategies that encourage enabling environments for collective action and bolster state capacity to actually respond to citizen voice. This reinterpretation of the empirical evidence leads to a proposed new series of grounded propositions that focus on state-society synergy and sandwich strategies through which ‘voice’ and ‘teeth’ can become mutually empowering.

You can download the paper here: Social Accountability: What does the Evidence Really Say [PDF]. You can also read my take on the main lessons from Jonathan’s work here. Enjoy the reading.

***

PS: I have been away for a while doing field work, but hope to start posting (more or less) regularly soon.

Social Accountability: What Does the Evidence Really Say?

So what does the evidence about citizen engagement say? Particularly in the development world, it is common to say that the evidence is “mixed”. It is the type of answer that, even if correct in extremely general terms, does not really help those who are actually designing and implementing citizen engagement reforms.

This is why a new (GPSA-funded) work by Jonathan Fox, “Social Accountability: What does the Evidence Really Say” is a welcome contribution for those working with open government in general and citizen engagement in particular. Rather than a paper, this work is intended as a presentation that summarizes (and disentangles) some of the issues related to citizen engagement.

Before briefly discussing it, a definitional clarification: I am equating “social accountability” with the idea of citizen engagement, given Jonathan’s own definition of social accountability:

“Social accountability strategies try to improve public sector performance by bolstering both citizen engagement and government responsiveness”

In short, according to this definition, social accountability is defined, broadly, as “citizen participation” followed by government responsiveness, which encompasses practices as distinct as FOI law campaigns, participatory budgeting and referenda.

But what is new about Jonathan’s work? A lot, but here are three points that I find particularly important, based on a very personal interpretation of his work.

First, Jonathan makes an important distinction between what he defines as “tactical” and “strategic” social accountability interventions. The first type, which could also be called “naïve” interventions, includes for instance those bounded in their approach (e.g. based on a single tool) and those that assume that mere access to information (or data) is enough. Conversely, strategic approaches aim to deploy multiple tools and to articulate society-side efforts with governmental reforms that promote responsiveness.

This distinction is important because, when examining the impact evaluation evidence, one finds that while the evidence is indeed mixed for tactical approaches, it is much more promising for strategic approaches. A blunt lesson to take from this is that when looking at the evidence, one should avoid comparing lousy initiatives with more substantive reform processes. Otherwise, it is no wonder that “the evidence is mixed.”

Second, this work offers an important re-reading of some of the literature that has found “mixed effects”, reminding us that when it comes to citizen engagement, the devil is in the details. For instance, in a number of studies that seem to show that participation does not work, a closer look makes it unsurprising that it did not: many times the problem is precisely that there was no participation whatsoever. False negatives, as Jonathan eloquently puts it.

Third, Jonathan highlights the need to bring together the “demand” (society) and “supply” (government) sides of governance. Many accountability interventions seem to assume that it is enough to work on one side or the other, and that an invisible hand will bring them together. Unfortunately, when it comes to social accountability it seems that some degree of “interventionism” is necessary in order to bridge that gap.

Of course, there is much more in Jonathan’s work than that, and it is a must read for those interested in the subject. You can download it here [PDF].

When Citizen Engagement Saves Lives (and what we can learn from it)

When it comes to the relationship between participatory institutions and development outcomes, participatory budgeting stands out as one of the best examples out there. For instance, in a paper recently published in World Development, Sonia Gonçalves finds that municipalities that adopted participatory budgeting in Brazil “favoured an allocation of public expenditures that closely matched the popular preferences and channeled a larger fraction of their total budget to key investments in sanitation and health services.” As a consequence, the author also finds that this change in the allocation of public expenditures “is associated with a pronounced reduction in the infant mortality rates for municipalities which adopted participatory budgeting.”

Evolution of the share of expenditures in health and sanitation compared between adopters and non-adopters of participatory budgeting (Gonçalves 2013).

Now, in an excellent new article published in Comparative Political Studies, Michael Touchton and Brian Wampler reach similar findings (abstract below):

We evaluate the role of a new type of democratic institution, participatory budgeting (PB), for improving citizens’ well-being. Participatory institutions are said to enhance governance, citizens’ empowerment, and the quality of democracy, creating a virtuous cycle to improve the poor’s well-being. Drawing from an original database of Brazil’s largest cities over the last 20 years, we assess whether adopting PB programs influences several indicators of well-being inputs, processes, and outcomes. We find PB programs are strongly associated with increases in health care spending, increases in civil society organizations, and decreases in infant mortality rates. This connection strengthens dramatically as PB programs remain in place over longer time frames. Furthermore, PB’s connection to well-being strengthens in the hands of mayors from the nationally powerful, ideologically and electorally motivated Workers’ Party. Our argument directly addresses debates on democracy and well-being and has powerful implications for participation, governance, and economic development.

When put together, these findings provide compelling evidence for those who – often unfamiliar with the literature – question the effectiveness of participatory governance institutions. Surely, more research is needed, and different citizen engagement initiatives (and contexts) may lead to different results.

But these articles also bring another important takeaway for those working on development and public sector reform: the need to consider that participatory institutions (like most institutional reforms) may take time to produce desirable and noticeable effects. As noted by Touchton and Wampler:

The relationships we describe between PB and health and sanitation spending, PB and CSOs, and PB and health care outcomes in this section are greater in magnitude and stronger in statistical significance for municipalities that have used PB for a longer period of time. Municipalities using PB for less than 4 years do exhibit lower infant mortality rates than municipalities that never adopted PB. However, there is no statistically significant difference in spending on health care and sanitation between municipalities using PB for less than 4 years and municipalities that never adopted the program. This demonstrates the benefits from adopting PB are not related to low-hanging fruit, but built over a great number of years. Our results imply PB is associated with long-term institutional and political change—not just short-term shifts in funding priorities.

If over the years participatory budgeting has produced evidence of its effectiveness on a number of fronts (e.g. pro-poor spending), it is only 25 years after its first implementation in Brazil that we start to see systematic evidence of sound development outcomes such as reduced infant mortality. In other words, rushing to draw conclusions at early stages of participatory governance interventions may result in misleading assessments. Even worse, it may lead to discontinuing efforts that are yet to bear fruit in the medium and longer terms.