New Book on 25 Years of Participatory Budgeting


A little while ago I mentioned the launch of the Portuguese version of the book edited by Nelson Dias, “Hope for Democracy: 25 Years of Participatory Budgeting Worldwide”.

The good news is that the English version is finally out. Here’s an excerpt from the introduction:

This book represents the effort of more than forty authors and many other direct and indirect contributions that, spread across different continents, seek to provide an overview of Participatory Budgeting (PB) in the world. They do so from different backgrounds. Some are researchers, others are consultants, and others are activists connected to several groups and social movements. The texts reflect this diversity of approaches and perspectives well, and we do not try to influence that.

(….)

The pages that follow are an invitation to a fascinating journey on the path of democratic innovation in very diverse cultural, political, social and administrative settings. From North America to Asia, from Oceania to Europe, and from Latin America to Africa, the reader will find many reasons to closely follow the proposals of the different authors.

The book can be downloaded here [PDF]. I had the pleasure of being one of the book’s contributors, co-authoring an article with Rafael Sampaio on the use of ICT in PB processes: “Electronic Participatory Budgeting: False Dilemmas and True Complexities” [PDF].

While my perception may be biased, I believe this book will be a major resource for researchers and practitioners in the field of participatory budgeting and citizen engagement in general. Congratulations to Nelson Dias and all the others who contributed their time and energy.

Social Accountability: What Does the Evidence Really Say?

So what does the evidence about citizen engagement say? Particularly in the development world it is common to say that the evidence is “mixed”. It is the type of answer that, even if correct in extremely general terms, does not really help those who are actually designing and implementing citizen engagement reforms.

This is why a new (GPSA-funded) work by Jonathan Fox, “Social Accountability: What Does the Evidence Really Say?”, is a welcome contribution for those working with open government in general and citizen engagement in particular. Rather than a paper, this work is intended as a presentation that summarizes (and disentangles) some of the issues related to citizen engagement.

Before briefly discussing it, a definitional clarification: I am equating “social accountability” with the idea of citizen engagement, given Jonathan’s own definition of social accountability:

“Social accountability strategies try to improve public sector performance by bolstering both citizen engagement and government responsiveness”

In short, social accountability is defined broadly as “citizen participation” followed by government responsiveness, encompassing practices as distinct as FOI law campaigns, participatory budgeting and referenda.

But what is new about Jonathan’s work? A lot, but here are three points that I find particularly important, based on a very personal interpretation of his work.

First, Jonathan makes an important distinction between what he defines as “tactical” and “strategic” social accountability interventions. The former, which could also be called “naïve” interventions, are those bounded in their approach (e.g. based on a single tool) and those that assume that mere access to information (or data) is enough. Conversely, strategic approaches deploy multiple tools and articulate society-side efforts with governmental reforms that promote responsiveness.

This distinction is important because, when examining the impact evaluation evidence, one finds that while the evidence is indeed mixed for tactical approaches, it is much more promising for strategic approaches. A blunt lesson to take from this is that when looking at the evidence, one should avoid comparing lousy initiatives with more substantive reform processes. Otherwise, it is no wonder that “the evidence is mixed.”

Second, this work offers an important re-reading of some of the literature that has found “mixed effects”, reminding us that when it comes to citizen engagement, the devil is in the details. For instance, in a number of studies that seem to show that participation does not work, a closer look makes the null result unsurprising: many times the problem is precisely that there is no real participation whatsoever. False negatives, as Jonathan eloquently puts it.

Third, Jonathan highlights the need to bring together the “demand” (society) and “supply” (government) sides of governance. Many accountability interventions seem to assume that it is enough to work on one side or the other, and that an invisible hand will bring them together. Unfortunately, when it comes to social accountability it seems that some degree of “interventionism” is necessary in order to bridge that gap.

Of course, there is much more to Jonathan’s work than that, and it is a must-read for those interested in the subject. You can download it here [PDF].

References on Evaluation of Citizen Engagement Initiatives


I have been doing some research on works related to the evaluation of citizen engagement initiatives (technology-mediated or not). This is far from exhaustive, but I thought it would be worth sharing with those who stop by here. Also, any help with identifying other relevant sources that I may be missing would be greatly appreciated.

Who Participates in Africa? Dispelling the Myth


Whenever participation in Africa (as well as in other developing contexts) is discussed, the issue of “who participates” often emerges. A constant in these conversations is the assumption that participation on the continent is strongly driven by material and immaterial resources (e.g. money, time). In other words, there seems to be a widespread belief, particularly among development practitioners, that the most privileged sectors of society are disproportionately more likely to participate than the least well-off.

In part, such an assumption is empirically grounded. Since the early 1970s, studies have shown inequality in political participation, with the most advantaged groups being disproportionately more likely to participate. Considering that policy preferences differ between groups, unequal participation raises the troubling possibility that public policy is skewed towards favoring the better off, thus further deepening societal differences and unequal access to public goods and services.

However, it is often ignored that most of these studies refer to participation in traditional Western democracies, notably the United States and European countries. But do these results hold true when it comes to participation in Africa? This is the question that Ann-Sofie Isaksson (University of Gothenburg) tackles in a paper published in Electoral Studies, “Political participation in Africa: The role of individual resources”.

Drawing on an Afrobarometer dataset of 27,000 respondents across 20 African countries, Isaksson challenges the standard thinking on the relationship between resources and participation:

(…) it seems the resource perspective does a poor job at explaining African political participation. If a resource is relevant for meeting the costs of participating, more of that resource should mean more participation. If anything, however, the estimations suggest that having little time (i.e. working full-time) and little money (i.e. being poorer) is associated with more participation.

Isaksson’s analysis is not confined to electoral participation; it also examines non-electoral participation, namely attendance at community meetings. With regard to the latter only, there are modest effects associated with exposure to information (e.g. radio, TV, newspapers) and education. Yet, as the author notes, “the result that community attendance is higher among the poor remains”.

To conclude, as Isaksson underlines in her paper, her findings are not isolated: other studies looking at Asia and Latin America point in a similar direction, slowly dispelling the myth about the role of resources in participation in developing countries. Development practitioners should take note of these encouraging findings.

***

P.S.: An earlier, ungated version of the paper can be found here [PDF].

A Review of the Evidence on Open Budgeting


“A Review of the Evidence on Open Budgeting” is a recent report by the World Bank Institute’s Capacity Development and Results team: “It explores key questions and existing evidence around the impact of open budgeting. Despite the growing body of literature, there remains limited substantiation for whether and how open budgeting contributes to reductions in poverty and improvements in the lives of the poor. This report pieces together the results chain, presenting evidence for and against from the literature. It explores links between open budgeting and indicators of impact such as human development and public service delivery. The findings highlight the importance of measuring budget transparency, accountability, and participation. The findings show that the impact of institutional changes differs under varying conditions in specific contexts. The conclusions of the report point to the need for further investigation into impact and establishing effective measurement practices for monitoring related institutional change under varying conditions and different contexts.”

You can download the report here [PDF].

Participation, Transparency and Accountability: Innovations in South Korea, Brazil, and the Philippines

A report by Brian Wampler for the Global Initiative for Fiscal Transparency (GIFT):

Citizen participation in budgetary and other fiscal processes has been expanding at international, national, and local levels over the past 15 years. The direct participation of citizens, it is hoped, will improve governance, limit misuse of public funds, and produce more informed, engaged citizens. At the national level, reformist governments now encourage the direct engagement of citizens during multiple moments of the policy cycle—from initial policy formulation to the oversight of policy implementation. Reformist governments hope to take advantage of increased citizen participation to increase their legitimacy, thus allowing them to change spending and policy priorities, increase state effectiveness by making public bureaucrats more responsive to citizens and elected officials, and, finally, ensure that the quality of public services improves. During the 1980s and 1990s, many subnational governments took advantage of policy decentralization to experiment with new institutional types. Direct citizen participation has been most robust at subnational levels due to the decreased costs and the greater direct impact of citizens on policymaking.

(….)

The main purpose of this report is to examine how three countries, South Korea, Brazil, and the Philippines, have made extensive efforts to create new institutions and policies that encourage the participation of citizens and CSOs in complex policy processes. South Korea developed an institutional arrangement based on policy experts, CSOs, and the Korean Development Institute. Brazil uses a model that relies extensively on the participation of citizens at multiple tiers of government. Finally, the Philippines uses a mixed model that incorporates citizens and CSOs at national and subnational levels.

(….)

Political reformers seeking to incorporate greater numbers of people into policymaking venues face a series of challenges. These include: (1) asymmetrical access to information, as well as differing skill bases to interpret information; (2) the difficulty of decision-making as groups grow in size; (3) a reduction in the importance of any single participant due to the greater number of participants; (4) political contestation over who has the right to participate; (5) disputes over who the legitimate representatives of different groups are; and (6) higher organizational costs (time, money, personnel). This report maps out how new participatory institutions and programs are designed to help governments and their civil society allies draw citizens directly into decision-making processes. To explain the variation in the type of participatory experiences now used by different countries, we identify four factors that most strongly affect the types of participation-oriented reforms as well as their results. These four factors are: (a) presidential-level support for reform, (b) the configuration of civil society, (c) state capacity, and (d) the geo-political direction of reform (top-down/center-periphery vs. bottom-up/periphery-center). It is the combination of these four factors that most strongly explains the type of institutions adopted in each of these countries.

Read the full report here [PDF]. 

When Citizen Engagement Saves Lives (and what we can learn from it)

When it comes to the relationship between participatory institutions and development outcomes, participatory budgeting stands out as one of the best examples out there. For instance, in a paper recently published in World Development, Sonia Gonçalves finds that municipalities that adopted participatory budgeting in Brazil “favoured an allocation of public expenditures that closely matched the popular preferences and channeled a larger fraction of their total budget to key investments in sanitation and health services.” As a consequence, the author also finds that this change in the allocation of public expenditures “is associated with a pronounced reduction in the infant mortality rates for municipalities which adopted participatory budgeting.”

Evolution of the share of expenditures in health and sanitation, compared between adopters and non-adopters of participatory budgeting (Gonçalves 2013).

Now, in an excellent new article published in Comparative Political Studies, Michael Touchton and Brian Wampler reach similar findings (abstract):

We evaluate the role of a new type of democratic institution, participatory budgeting (PB), for improving citizens’ well-being. Participatory institutions are said to enhance governance, citizens’ empowerment, and the quality of democracy, creating a virtuous cycle to improve the poor’s well-being. Drawing from an original database of Brazil’s largest cities over the last 20 years, we assess whether adopting PB programs influences several indicators of well-being inputs, processes, and outcomes. We find PB programs are strongly associated with increases in health care spending, increases in civil society organizations, and decreases in infant mortality rates. This connection strengthens dramatically as PB programs remain in place over longer time frames. Furthermore, PB’s connection to well-being strengthens in the hands of mayors from the nationally powerful, ideologically and electorally motivated Workers’ Party. Our argument directly addresses debates on democracy and well-being and has powerful implications for participation, governance, and economic development.

When put together, these findings provide a compelling response to those who – often unfamiliar with the literature – question the effectiveness of participatory governance institutions. Surely, more research is needed, and different citizen engagement initiatives (and contexts) may lead to different results.

But these articles also bring another important takeaway for those working with development and public sector reform: the need to consider that participatory institutions (like most institutional reforms) may take time to produce desirable and noticeable effects. As noted by Touchton and Wampler:

The relationships we describe between PB and health and sanitation spending, PB and CSOs, and PB and health care outcomes in this section are greater in magnitude and stronger in statistical significance for municipalities that have used PB for a longer period of time. Municipalities using PB for less than 4 years do exhibit lower infant mortality rates than municipalities that never adopted PB. However, there is no statistically significant difference in spending on health care and sanitation between municipalities using PB for less than 4 years and municipalities that never adopted the program. This demonstrates the benefits from adopting PB are not related to low-hanging fruit, but built over a great number of years. Our results imply PB is associated with long-term institutional and political change—not just short-term shifts in funding priorities.

While over the years participatory budgeting has produced evidence of its effectiveness on a number of fronts (e.g. pro-poor spending), it is only 25 years after its first implementation in Brazil that we are starting to see systematic evidence of sound development outcomes, such as the reduction in infant mortality. In other words, rushing to draw conclusions at the early stages of participatory governance interventions may result in misleading assessments. Even worse, it may lead to discontinuing efforts that have yet to bear fruit in the medium and longer terms.

10 Most Read Posts in 2013

Below is a selection of the 10 most read posts at DemocracySpot in 2013. Thanks to all of those who stopped by throughout the year, and happy 2014.

1. Does transparency lead to trust? Some evidence on the subject.

2. The Foundations of Motivation for Citizen Engagement

3. Open Government, Feedback Loops, and Semantic Extravaganza

4. Open Government and Democracy

5. What’s Wrong with e-Petitions and How to Fix them

6. Lawrence Lessig on Sortition and Citizen Participation

7. Unequal Participation: Open Government’s Unresolved Dilemma

8. The Effect of SMS on Participation: Evidence from Uganda

9. The Uncertain Relationship Between Open Data and Accountability

10. Lisbon Revisited: Notes on Participation

Rethinking Why People Participate


Having a refined understanding of what leads people to participate is one of the main concerns of those working with citizen engagement. But particularly when it comes to participatory democracy, that understanding is only partial and, most often, the cliché “more research is needed” is definitely applicable. This is so for a number of reasons, four of which are worth noting here.

  1. The “participatory” label is applied to greatly varied initiatives, raising obvious methodological challenges for comparative research and cumulative learning. For instance, while both participatory budgeting and online petitions can be roughly categorized as “participatory” processes, they are entirely different in terms of fundamental aspects such as their goals, institutional design and expected impact on decision-making.
  2. The fact that many participatory initiatives are conceived as “pilots” or one-off events gives researchers little time to understand the phenomenon, come up with sound research questions, and test different hypotheses over time. The “pilotitis” syndrome in the tech4accountability space is a good example of this.
  3. When designing and implementing participatory processes, in the face of budget constraints the first victims are documentation, evaluation and research. Apart from a few exceptions, this leads to a scarcity of data and basic information that undermines even the most heroic “archaeological” efforts of retrospective research and evaluation (a far from ideal approach).
  4. The semantic extravaganza that currently plagues the field of citizen engagement, technology and open government makes cumulative learning all the more difficult.

Precisely for the opposite reasons, our knowledge of electoral participation is in better shape. First, despite the differences between elections, comparative work is relatively easy, as attested by the high number of cross-country studies in the field. Second, the fact that elections (for the most part) are repeated regularly and follow a similar design enables the refinement of hypotheses and research questions over time, as well as specific time-related analysis (see an example here [PDF]). Third, when compared to the funds allocated to research on participatory initiatives, the relative amount of resources channeled into electoral studies and voting behavior is significantly higher. Here I am not referring to academic work only, but also to the substantial resources invested by the private sector and parties towards a better understanding of elections and voting behavior. This includes a growing body of knowledge generated by get-out-the-vote (GOTV) research, with fascinating experimental evidence from interventions that seek to increase participation in elections (e.g. door-to-door campaigns, telemarketing, e-mail). Add to that the wealth of electoral data that is available worldwide (in machine-readable formats) and you have some pretty good knowledge to tap into. Finally, both conceptually and terminologically, the field of electoral studies is much more consistent than the field of citizen engagement, and in the long run this consistency drastically affects how knowledge of a subject evolves.

These reasons should be sufficient to capture the interest of those who work with citizen engagement. While the extent to which the knowledge from the field of electoral participation can be transferred to non-electoral participation remains an open question, it should at least provide citizen engagement researchers with cues and insights that are very much worth considering.

This is why I was particularly interested in an article from a recently published book, The Behavioral Foundations of Public Policy (Princeton). Entitled “Rethinking Why People Vote: Voting as Dynamic Social Expression”, the article is written by Todd Rogers, Craig Fox and Alan Berger. Taking a behavioralist stance, the authors start by questioning the usefulness of the rationalist models in explaining voting behavior:

“In these [rationalist] models citizens are seen as weighing the anticipated trouble they must go through in order to cast their votes, against the likelihood that their vote will improve the outcome of an election times the magnitude of that improvement. Of course, these models are problematic because the likelihood of casting the deciding vote is often hopelessly small. In a typical state or national election, a person faces a higher probability of being struck by a car on the way to his or her polling location than of casting the deciding vote.”
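
For readers who prefer to see the calculus spelled out, the rationalist model the authors criticize is usually written, in the Downs/Riker-Ordeshook tradition, along the following lines. The excerpt above describes it only verbally, so the notation here is a standard reconstruction rather than the authors’ own:

```latex
% Standard "calculus of voting" reconstruction of the rationalist model above.
% (Uses amsmath's \text{}.)
% R : net reward from turning out to vote
% p : probability that one's vote is decisive
% B : benefit derived if the preferred candidate wins
% C : cost of voting (time, effort, information)
% D : civic-duty / expressive term added in later variants
\[
  R = pB - C \qquad \text{(or } R = pB - C + D \text{)}
\]
% With p vanishingly small in large electorates, pB - C is negative for almost
% every voter, which is why purely rationalist models struggle to explain turnout.
```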

(BTW, if you are a voter in certain US states, the odds of being hit by a meteorite are greater than those of casting the deciding vote).

Following on from the fact that traditional models cannot fully explain why and under which conditions citizens vote, the authors develop a framework that considers voting as a “self-expressive voting behavior that is influenced by events occurring before and after the actual moment of casting a vote.” To support their claims, throughout the article the authors build upon existing evidence from GOTV campaigns and other behavioral research. Besides providing a solid overview of the literature in the field, they present compelling arguments about what works for mobilizing electoral participation. Below are a few excerpts from the article with some of the main takeaways:

  • Mode of contact: the more personal it is, the more effective it is

“Initial experimental research found that a nonpartisan face-to-face canvassing effort had a 5-8 percentage point mobilizing effect in an uncontested midterm election in 1998 (Gerber and Green 2000), compared to less than a 1 percentage point mobilizing effect for live phone calls and mailings. More than three dozen subsequent experiments have overwhelmingly supported the original finding (…)”

“Dozens of experiments have examined the effectiveness of GOTV messages delivered by the telephone. Several general findings emerge, all of which are consistent with the broad conclusion that the more personal a GOTV strategy, the more effective. (…) the most effective calls are conducted in an unhurried, ‘chatty’ manner.”

“The least personal and the least effective GOTV communication channels entail one-way communications. (…) written pieces encouraging people to vote that are mailed directly to households have consistently been shown to produce a small, but positive, increase in turnout.”

  • Voting is affected by events before and after the decision

“One means to facilitate the performance of a socially desirable behavior is to ask people to predict whether they will perform the behavior in the future. In order to present oneself in a favorable light or because of wishful thinking or both, people are generally biased to answer in the affirmative. Moreover, a number of studies have found that people are more likely to follow through on a behavior after they predicted that they will do so (….) Emerging social-networking technologies provide new opportunities for citizens to commit to each other that they will turn out in a given election. These tools facilitate making one’s commitments public, and they also allow for subsequent accountability following an election (…) Asking people to form a specific if-then plan of action, or implementation intention, reduces the cognitive costs of having to remember to pursue an action that one intends to perform. Research shows that when people articulate the how, when and where of their plan to implement an intended behavior, they are more likely to follow through.”

(Not coincidentally, as noted by Sasha Issenberg in his book The Victory Lab, during the 2010 US midterm elections millions of Democrats received an email reminding them that they had “made a commitment to vote in this election” and that “the time has come to make good on that commitment. Think about when you’ll cast your vote and how you’ll get there.”)

“(…) holding a person publicly accountable for whether or not she voted may increase her tendency to do so. (…) Studies have found that when people are merely made aware that their behavior will be publicly known, they become more likely to behave in ways that are consistent with how they believe others think they should behave. (…) At least at one point, Italy exposed those who failed to vote by posting the names of nonvoters outside of local town halls.”

(On the accountability issue, also read this fascinating study [PDF] by Gerber, Green & Larimer)

  • Following the herd: affinitive and belonging needs

“People are strongly motivated to maintain feelings of belonging with others and to affiliate with others. (…) Other GOTV strategies that can increase turnout by serving social needs could involve encouraging people to go to their polling place in groups (i.e., a buddy system), hosting after-voting parties on election day, or encouraging people to talk about voting with their friends, to name a few.”

“(…) studies showed that the motivation to vote significantly increased when participants heard a message that emphasized high expected turnout as opposed to low expected turnout. For example, in the New Jersey study, 77% of the participants who heard the high-turnout script reported being “absolutely certain” that they would vote, compared to 71% of those who heard the low-turnout script. This research also found that moderate and infrequent voters were strongly affected by the turnout information.”

  • Voting as an expression of identity

“(….) citizens can derive value from voting through what the act displays about their identities. People are willing to go to great lengths, and pay great costs, to express that they are a particular kind of person. (….) Experimenters asked participants to complete a fifteen-minute survey that related to an election that was to occur the following week. After completing the survey, the experimenter reviewed the results and reported to participants what their responses indicated. Participants were, in fact, randomly assigned to one of two conditions. Participants in the first condition were labeled as being “above-average citizen[s] … who [are] very likely to vote,” whereas participants in the second condition were labeled as being “average citizen[s] … with an average likelihood of voting.” (….) These identity labels proved to have substantial impact on turnout, with 87% of “above average” participants voting versus 75% of “average” participants voting.”

For those working with participatory governance, the question that remains is the extent to which each of these lessons is applicable to non-electoral forms of participation. The differences between electoral and non-electoral forms of participation may cause these techniques to generate very different results. One difference relates to public awareness of participation opportunities. While it would be safe to say that during an important election the majority of citizens are aware of it, the opposite is true for most existing participatory events, where generally only a minority is aware of their existence. In this case, it is unclear whether the impact of mobilization campaigns would be more or less significant when awareness of an event is low. Furthermore, if the act of voting is often automatically linked to a sense of civic duty, would that still hold true for less typical forms of participation (e.g. signing an online petition, attending a community meeting)?

The answer to this “transferability” question is an empirical one, and one that is yet to be answered. The good news is that while experiments that generate this kind of knowledge are normally resource-intensive, the costs of experimentation are driven down when it comes to technology-mediated citizen participation. The use of A/B testing during the Obama campaign is a good example. Below is an excellent account by Dan Siroker on how they conducted online experiments during the presidential campaign.
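
To make the idea concrete, here is a minimal sketch of the statistics behind a simple A/B test of a participation page, comparing sign-up rates between two variants with a two-proportion z-test. The variant labels, visitor counts and conversion numbers are invented for illustration; this is not the Obama campaign’s actual data or tooling.

```python
# Minimal A/B test sketch for a hypothetical online participation page.
# All numbers below are illustrative, not real campaign data.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare sign-up rates of two page variants with a two-sided z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants perform equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical 50/50 traffic split: current page (A) vs. redesigned page (B)
p_a, p_b, z, p_value = two_proportion_z_test(
    conv_a=480, n_a=12_000,   # 4.0% sign-up rate
    conv_b=560, n_b=12_000,   # ~4.7% sign-up rate
)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p_value:.4f}")
```

Run on the hypothetical numbers above, the test reports a difference that is unlikely to be due to chance (p ≈ 0.01), which is exactly the kind of cheap, fast feedback that makes online experimentation attractive for digital participation platforms as well.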

Bringing similar experiments to other realms of digital participation is the next logical step for those working in the field. Some organizations have already started to take this seriously. The issue is whether others, including governments and donors, will do the same.

Why ‘I-Paid-A-Bribe’ Worked in India but Failed in China


Interesting paper by Yuen Yuen Ang, a political scientist at the University of Michigan:

Authoritarian states restrain online activism not only through repression and censorship, but also by indirectly weakening the ability of netizens to self-govern and constructively engage the state. I demonstrate this argument by comparing I-Paid-A-Bribe (IPAB) — a crowd-sourcing platform that collects anonymous reports of petty bribery — in India and China. Whereas IPAB originated and has thrived in India, a copycat effort in China fizzled out within months. Contrary to those who attribute China’s failed outcome to repression, I find that even before authorities shut down IPAB, the sites were already plagued by internal organizational problems that were comparatively absent in India. The study tempers expectations about the revolutionary effects of new media in mobilizing contention and checking corruption in the absence of a strong civil society.

And a brief video with Yuen Yuen:

Also read

I Paid a Bribe. So What? 

Open Government and Democracy