Catching up (again!) on DemocracySpot

It’s been a while since the last post here. To make up for it, it has not been a bad year for getting research out. First, we finally managed to publish “Civic Tech in the Global South: Assessing Technology for the Public Good.” With a foreword by Beth Noveck, the book is edited by Micah Sifry and myself, with contributions by Evangelia Berdou, Martin Belcher, Jonathan Fox, Matt Haikin, Claudia Lopes, Jonathan Mellon and Fredrik Sjoberg.

The book comprises one study and three field evaluations of civic tech initiatives in developing countries. The study reviews evidence on the use of twenty-three information and communication technology (ICT) platforms designed to amplify citizen voices to improve service delivery. Focusing on empirical studies of initiatives in the global South, the authors highlight both citizen uptake (‘yelp’) and the degree to which public service providers respond to expressions of citizen voice (‘teeth’). The first evaluation looks at U-Report in Uganda, a mobile platform that runs weekly large-scale polls with young Ugandans on a range of issues, from access to education to early childhood development. The second takes a closer look at MajiVoice, an initiative that allows Kenyan citizens to report complaints about water services through multiple channels. The third evaluation examines the case of Rio Grande do Sul’s participatory budgeting – the world’s largest participatory budgeting system – which allows citizens to participate either online or offline in defining the state’s yearly spending priorities. While the comparative study has a clear focus on the dimension of government responsiveness, the evaluations examine civic technology initiatives through five distinct dimensions, or lenses. The choice of these lenses is the result of an effort that brought together researchers and practitioners to develop an evaluation framework suited to civic technology initiatives.

The book was a joint publication by The World Bank and Personal Democracy Press. You can download the book for free here.

Women create fewer online petitions than men — but they’re more successful


Another recent publication was a collaboration between Hollie R. Gilman, Jonathan Mellon, Fredrik Sjoberg and myself. Examining a dataset of Change.org online petitions from 132 countries, we assess whether online petitions may help close the gap in participation and representation between women and men. Tony Saich, director of Harvard’s Ash Center for Democratic Governance and Innovation (publisher of the study), puts our research into context nicely:

The growing access to digital technologies has been considered by democratic scholars and practitioners as a unique opportunity to promote participatory governance. Yet, if the last two decades have been the period in which connectivity has increased exponentially, it is also the moment in recent history in which democratic growth has stalled and civic spaces have shrunk. While the full potential of “civic technologies” remains largely unfulfilled, understanding the extent to which they may further democratic goals is more pressing than ever. This is precisely the task undertaken in this original and methodologically innovative research. The authors examine online petitions which, albeit understudied, are one of the fastest growing types of political participation across the globe. Drawing from an impressive dataset of 3.9 million signers of online petitions from 132 countries, the authors assess the extent to which online participation replicates or changes the gaps commonly found in offline participation, not only with regard to who participates (and how), but also with regard to which petitions are more likely to be successful. The findings, at times counter-intuitive, provide several insights for democracy scholars and practitioners alike. The authors hope this research will contribute to the larger conversation on the need for citizen participation beyond electoral cycles, and the role that technology can play in addressing both new and persisting challenges to democratic inclusiveness.

So what do we find? Among other interesting things, that while women create fewer online petitions than men, they are more successful at it. This article in the Washington Post summarizes some of our findings, and you can download the full study here.

Other studies that were recently published include:

The Effect of Bureaucratic Responsiveness on Citizen Participation (Public Administration Review)

Abstract:

What effect does bureaucratic responsiveness have on citizen participation? Since the 1940s, attitudinal measures of perceived efficacy have been used to explain participation. The authors develop a “calculus of participation” that incorporates objective efficacy—the extent to which an individual’s participation actually has an impact—and test the model against behavioral data from the online application Fix My Street (n = 399,364). A successful first experience using Fix My Street is associated with a 57 percent increase in the probability of an individual submitting a second report, and the experience of bureaucratic responsiveness to the first report submitted has predictive power over all future report submissions. The findings highlight the importance of responsiveness for fostering an active citizenry while demonstrating the value of incidentally collected data to examine participatory behavior at the individual level.

Does online voting change the outcome? Evidence from a multi-mode public policy referendum (Electoral Studies)

Abstract:

Do online and offline voters differ in terms of policy preferences? The growth of Internet voting in recent years has opened up new channels of participation. Whether or not political outcomes change as a consequence of new modes of voting is an open question. Here we analyze all the votes cast both offline (n = 5.7 million) and online (n = 1.3 million) and compare the actual vote choices in a public policy referendum, the world’s largest participatory budgeting process, in Rio Grande do Sul in June 2014. In addition to examining aggregate outcomes, we also conducted two surveys to better understand the demographic profiles of who chooses to vote online and offline. We find that policy preferences of online and offline voters are no different, even though our data suggest important demographic differences between offline and online voters.

We still plan to publish a few more studies this year, one looking at digitally-enabled get-out-the-vote (GOTV) efforts, and two others examining the effects of participatory governance on citizens’ willingness to pay taxes (including a fun experiment in 50 countries across all continents).

In the meantime, if you are interested in a quick summary of some of our recent research findings, this 30-minute video of my keynote at the last TicTEC Conference in Florence should be helpful.

 

 

New Papers Published: FixMyStreet and the World’s Largest Participatory Budgeting


Voting in Rio Grande do Sul’s Participatory Budgeting (picture by Anderson Lopes)

Here are two newly published papers that my colleagues Jon Mellon and Fredrik Sjoberg and I have been working on.

The first, The Effect of Bureaucratic Responsiveness on Citizen Participation, published in Public Administration Review, is – to our knowledge – the first study to quantitatively assess at the individual level the often-assumed effect of government responsiveness on citizen engagement. It also describes an example of how the data provided through digital platforms may be leveraged to better understand participatory behavior. This is the fruit of a research collaboration with MySociety, to whom we are extremely thankful.

Below is the abstract:

What effect does bureaucratic responsiveness have on citizen participation? Since the 1940s, attitudinal measures of perceived efficacy have been used to explain participation. The authors develop a “calculus of participation” that incorporates objective efficacy—the extent to which an individual’s participation actually has an impact—and test the model against behavioral data from the online application Fix My Street (n = 399,364). A successful first experience using Fix My Street is associated with a 57 percent increase in the probability of an individual submitting a second report, and the experience of bureaucratic responsiveness to the first report submitted has predictive power over all future report submissions. The findings highlight the importance of responsiveness for fostering an active citizenry while demonstrating the value of incidentally collected data to examine participatory behavior at the individual level.

An earlier, ungated version of the paper can be found here.
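For readers curious about what this kind of individual-level analysis can look like in practice, below is a minimal sketch, not the authors’ actual code, of how one might test whether a response to a user’s first report predicts further participation. The file name and column names are illustrative assumptions.

```python
# Minimal sketch, assuming a hypothetical per-user extract of platform data.
# Not the paper's actual code or variable names.
import pandas as pd
import statsmodels.formula.api as smf

# One row per user: whether their first report received a response (was fixed),
# and whether they went on to submit a second report.
users = pd.read_csv("fixmystreet_users.csv")  # hypothetical file

# Logistic regression: probability of a second report as a function of
# bureaucratic responsiveness to the first report.
model = smf.logit("submitted_second_report ~ first_report_fixed", data=users).fit()
print(model.summary())

# Average marginal effect of responsiveness on the probability of reporting again.
print(model.get_margeff().summary())
```

The published analysis goes further, showing that responsiveness to the first report has predictive power over all future submissions, but the basic logic is the same.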

The second paper, Does Online Voting Change the Outcome? Evidence from a Multi-mode Public Policy Referendum, has just been published in Electoral Studies. In an earlier JITP paper (ungated here) looking at Rio Grande do Sul State’s Participatory Budgeting – the world’s largest – we show that, compared to offline voting, online voting tends to attract participants who are younger, male, of higher income and educational attainment, and more frequent social media users. Yet one question remained: does the inclusion of new participants with a different profile change the outcomes of the process (i.e., which projects are selected)? Below is the abstract of the paper.

Do online and offline voters differ in terms of policy preferences? The growth of Internet voting in recent years has opened up new channels of participation. Whether or not political outcomes change as a consequence of new modes of voting is an open question. Here we analyze all the votes cast both offline (n = 5.7 million) and online (n = 1.3 million) and compare the actual vote choices in a public policy referendum, the world’s largest participatory budgeting process, in Rio Grande do Sul in June 2014. In addition to examining aggregate outcomes, we also conducted two surveys to better understand the demographic profiles of who chooses to vote online and offline. We find that policy preferences of online and offline voters are no different, even though our data suggest important demographic differences between offline and online voters.

The extent to which these findings are transferable to other PB processes that combine online and offline voting remains an empirical question. Nonetheless, these findings suggest a more nuanced view of the potential effects of digital channels as a supplementary means of engagement in participatory processes. I hope to share an ungated version of the paper in the coming days.
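As an illustration of what comparing actual vote choices across modes involves, here is a minimal sketch under hypothetical data assumptions (file name and column names are mine, not the paper’s): compute each project’s share of the vote separately among online and offline ballots and check how closely the two sets of preferences agree.

```python
# Minimal sketch, assuming a hypothetical ballot-level extract.
# Columns assumed: project_id, mode ("online" or "offline").
import pandas as pd
from scipy.stats import spearmanr

votes = pd.read_csv("pb_votes.csv")  # hypothetical file

# Vote share of each project within each voting mode.
counts = votes.groupby(["mode", "project_id"]).size()
shares = counts / counts.groupby(level="mode").transform("sum")
wide = shares.unstack("mode").fillna(0)

# How similar are the policy preferences expressed through each mode?
rho, p = spearmanr(wide["online"], wide["offline"])
print(f"Rank correlation of project vote shares across modes: rho={rho:.2f} (p={p:.3f})")
```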

New IDS Journal – 9 Papers in Open Government


The new IDS Bulletin is out. Edited by Rosemary McGee and Duncan Edwards, this is the first open-access issue of the well-known journal published by the Institute of Development Studies. It brings together eight new studies looking at a variety of open government issues, ranging from the uptake of digital platforms to government responsiveness in civic tech initiatives. Below is a brief presentation of this issue:

Open government and open data are new areas of research, advocacy and activism that have entered the governance field alongside the more established areas of transparency and accountability. In this IDS Bulletin, articles review recent scholarship to pinpoint contributions to more open, transparent, accountable and responsive governance via improved practice, projects and programmes in the context of the ideas, relationships, processes, behaviours, policy frameworks and aid funding practices of the last five years. They also discuss questions and weaknesses that limit the effectiveness and impact of this work, offer a series of definitions to help overcome conceptual ambiguities, and identify hype and euphemism. The contributions – by researchers and practitioners – approach contemporary challenges of achieving transparency, accountability and openness from a wide range of subject positions and professional and disciplinary angles. Together these articles give a sense of what has changed in this fast-moving field, and what has not – this IDS Bulletin is an invitation to all stakeholders to take stock and reflect.

The ambiguity around the ‘open’ in governance today might be helpful in that its very breadth brings in actors who would otherwise be unlikely adherents. But if the fuzzier idea of ‘open government’ or the allure of ‘open data’ displace the task of clear transparency, hard accountability and fairer distribution of power as what this is all about, then what started as an inspired movement of governance visionaries may end up merely putting a more open face on an unjust and unaccountable status quo.

Among other contributions, the journal presents an abridged version of a paper by Jonathan Fox and myself on digital technologies and government responsiveness (the full version can be downloaded here).

Below is a list of all the papers:

Rosie McGee, Duncan Edwards
Tiago Peixoto, Jonathan Fox
Katharina Welle, Jennifer Williams, Joseph Pearce
Miguel Loureiro, Aalia Cassim, Terence Darko, Lucas Katera, Nyambura Salome
Elizabeth Mills
Laura Neuman
David Calleb Otieno, Nathaniel Kabala, Patta Scott-Villiers, Gacheke Gachihi, Diana Muthoni Ndung’u
Christopher Wilson, Indra de Lanerolle
Emiliano Treré

 

World Development Report 2016: Digital Dividends


The World Development Report 2016, the main annual publication of the World Bank, is out. This year’s theme is Digital Dividends, examining the role of digital technologies in the promotion of development outcomes. The findings of the WDR are simultaneously encouraging and sobering. Those skeptical of the role of digital technologies in development might be surprised by some of the results presented in the report. Technology advocates from across the spectrum (civic tech, open data, ICT4D) will inevitably come across some facts that should temper their enthusiasm.

While some may disagree with the findings, this Report is an impressive piece of work, spread across six chapters covering different aspects of digital technologies in development: 1) accelerating growth, 2) expanding opportunities, 3) delivering services, 4) sectoral policies, 5) national priorities, 6) global cooperation. My opinion may be biased, as somebody who made some modest contributions to the Report, but I believe that, to date, this is the most thorough effort to examine the effects of digital technologies on development outcomes. The full report can be downloaded here.

The report draws, among other things, on 14 background papers prepared by international experts and World Bank staff. These background papers serve as additional reading for those who would like to examine certain issues more closely, such as social media, net neutrality, and the cybersecurity agenda.

For those interested in citizen participation and civic tech, one of the papers written by Prof. Jonathan Fox and myself – When Does ICT-Enabled Citizen Voice Lead to Government Responsiveness? – might be of particular interest. Below is the abstract:

This paper reviews evidence on the use of 23 information and communication technology (ICT) platforms to project citizen voice to improve public service delivery. This meta-analysis focuses on empirical studies of initiatives in the global South, highlighting both citizen uptake (‘yelp’) and the degree to which public service providers respond to expressions of citizen voice (‘teeth’). The conceptual framework further distinguishes between two trajectories for ICT-enabled citizen voice: Upwards accountability occurs when users provide feedback directly to decision-makers in real time, allowing policy-makers and program managers to identify and address service delivery problems – but at their discretion. Downwards accountability, in contrast, occurs either through real time user feedback or less immediate forms of collective civic action that publicly call on service providers to become more accountable and depends less exclusively on decision-makers’ discretion about whether or not to act on the information provided. This distinction between the ways in which ICT platforms mediate the relationship between citizens and service providers allows for a precise analytical focus on how different dimensions of such platforms contribute to public sector responsiveness. These cases suggest that while ICT platforms have been relevant in increasing policymakers’ and senior managers’ capacity to respond, most of them have yet to influence their willingness to do so.

You can download the paper here.

Any feedback on our paper or the models proposed (see below, for instance) would be extremely welcome.


Unpacking user feedback and civic action: difference and overlap

I also list below the links to all the background papers and their titles.

Enjoy the reading.

Praising and Shaming in Civic Tech (or Reversed Nudging for Government Responsiveness) 

The other day, during a conversation with researcher Tanya Lokot, I heard an interesting story from Russia. Disgusted with the state of their streets, activists started painting caricatures of government officials over potholes.

 

In the case of a central street in Saratov, the immediate response to one of these graffiti was this:  

 

Later on, following increased media attention – and some unexpected turnarounds – the pothole got fixed.

That reminded me of a recurring theme in some of my conversations: whether praising and shaming matter in civic tech and, if so, to what extent. To stay with two classic examples, think of solutions such as FixMyStreet and SeeClickFix, through which citizens publicly report problems to the authorities.

When governments do take action, what prompts them to do so? At a very basic level, three hypotheses are possible:

1) Governments take action based on their access to distributed information about problems (of which they are supposedly unaware)

2) Governments take action due to the “naming and shaming” effect, avoiding being publicly perceived as unresponsive (and seeking praise for their actions)

3) Governments take action for both of the reasons above

Some could argue that hypothesis 3 is the most likely to be true, with some governments leaning more towards one reason to respond than the other. Yet the problem is that we know very little about these hypotheses, if anything. In other words – to my knowledge – we do not know whether making reports through these platforms public makes any difference to governments’ responsiveness. Some might consider this a useless academic exercise: as long as these tools work, who cares? But I would argue that the answer to that question matters a lot for the design of similar civic tech initiatives that aim to prompt governments into action.


Let’s suppose we find that, all else equal, governments are significantly more responsive to citizen reports when these are publicly displayed. This would matter both for process and for technological design. In terms of process, for instance, civic tech initiatives would probably be more successful if they devoted part of their resources to amplifying the visibility of government action and inaction (e.g. through local media). From a technological standpoint, designers should devote substantially more effort to interfaces that maximize praising and shaming of governments based on their performance (e.g. rankings, highlighting pending reports). Conversely, we might find that publicizing reports has very little effect on responsiveness. In that case, more work would be needed to figure out which other factors – beyond will and capacity – play a role in government responsiveness (e.g. the quality of reports).

Most likely, the effect of praising and shaming would depend on a number of factors, such as political competition, bureaucratic autonomy, and internal performance routines. But a finer understanding of this would have an impact not only on the civic tech field but across the whole accountability landscape. To date, we know very little about it. Yet one of the untapped potentials of civic technology is precisely the ability to conduct experiments at lower cost. For instance, conducting randomized controlled trials on the effects of publicizing government responsiveness should not be so complicated (e.g. the effects of rankings, or of amplifying the visibility of unfixed problems). Add to that analysis of existing systems data from civic tech platforms, and some good qualitative work, and we might get a lot closer to figuring out what makes politicians and civil servants “tick”.
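To make the idea concrete, here is a minimal sketch of such a trial, with entirely hypothetical numbers and function names: incoming reports are randomly assigned to be publicly displayed or kept private, and after the trial period the fix rates of the two arms are compared.

```python
# Minimal sketch of a randomized trial on publicizing reports.
# All numbers and names below are hypothetical, for illustration only.
import random
from statsmodels.stats.proportion import proportions_ztest

def assign_display_arm(report_id: int) -> str:
    """Randomly assign an incoming report to public or private display."""
    return random.choice(["public", "private"])

# After the trial period: how many reports were fixed in each arm
# (hypothetical counts out of 500 reports per arm).
fixed = [132, 97]
assigned = [500, 500]

z, pvalue = proportions_ztest(count=fixed, nobs=assigned)
print(f"Public vs private fix rates: z={z:.2f}, p={pvalue:.3f}")
```

The same logic extends to testing rankings or other “shaming” interface features against a default design.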

Until now, behavioral economics in public policy has been mainly about nudging citizens toward preferred choices. Yet it may be time to start also working in the opposite direction, nudging governments to be more responsive to citizens. Understanding whether praising and shaming works (and if so, how and to what extent) would be an important step in that direction.

***

Also re-posted on Civicist.

Over 40 Papers on Crowdsourcing for Politics & Policy

pic by James Cridland (flickr)

Today saw the beginning of the biennial conference on Internet, Politics and Policy, convened by the Oxford Internet Institute (University of Oxford) and the OII-edited academic journal Policy and Internet. This year’s conference theme is Crowdsourcing for Politics and Policy. Having skimmed some of the papers and abstracts, here are some of my first (and rather superficial) impressions:

  • Despite the focus of the conference, there are few papers looking at an essential issue in crowdsourcing, namely its potential epistemic attributes: when, why and how “the many are smarter than the few”, and the role that technology plays in this.
  • In methodological terms, it seems that very little of the research presented takes advantage of the potential offered by ICT-mediated processes when it comes to (i) quantitative work with “administrative” data and (ii) experimental research design.
  • On the issue of deliberation, it is good to see that more people are starting to look at design issues, slowly moving away from the traditional fixation on the Habermasian ideal (I’ve talked about this in a presentation here).
  • It seems that the majority of the papers focus on European experiences or those from other developed countries. At first glance, this is not surprising given the location of the conference and the resources that researchers from these countries have (e.g. travel budgets). Yet it may also suggest limited integration between Northern and Southern networks of researchers.

With regard to the last point, it appears that there is a bridge yet to be built between the community of researchers attending this conference and the emerging community in the tech4accountability space. There is much to be gained on both sides from engaging in a dialogue and, as importantly, from developing a common language. The “Internet & Politics” community would benefit from the tech4accountability community’s focus – albeit sometimes fuzzy – on development outcomes and on experiences that emerge from the “South”. Conversely, the tech4accountability community would benefit a great deal from connecting with the existing (and clearly more mature) body of knowledge at the intersection of ICT, politics and citizen engagement.

Needless to say, all of the above are initial impressions and broad generalizations and, as such, may be unfair. The OII biennial conference remains, without a doubt, one of the major conferences in its field. You can view the full program of the conference here. I have also listed below, in simplified form, the links to the available conference papers according to their respective tracks.

Track A: Harnessing the Crowd

Experiments on Crowdsourcing Policy Assessment

A Case Study in Modelling Government-Citizen Interaction in Facebook

The potential of Participedia as a crowdsourcing tool for comparative analysis of democratic innovations

Crowd Capital in Governance Contexts

Analyzing Crowd Discussion: Towards a more complete model to measure and explain online deliberation

Predicting Events Using Learning Algorithms on Micro Blog Data

A Crowdsourcing Approach to Identify Common Method Bias and Self-Representation

Hate Speech, Machine Classification and Statistical Modelling of Information Flows on Twitter

Internet-mediated cooperative norm setting in the university

Monopsony and the Crowd: Labor for Lemons?

Online labour markets – leveling the playing field for international service markets?

Track B: Policy and Government

The Neo-Humanitarians: Assessing the Credibility of Organized Volunteer Crisis Mappers

Let The Users Be The Filter? Crowdsourced Filtering To Avoid Online Intermediary Liability

Regulating Distributed Peer-Production Infrastructures

Population as Auditor of an Election Process in Honduras: VotoSocial

Crowd-sourcing corruption: some challenges, some possible futures

Vertical crowdsourcing: The discourses of activity and the governance of crowds in emergency situations

Track C: Engaging the Crowd

What does crowdsourcing legislation entail for the participants? The Finnish case of Avoin Ministeriö

Let the crowd decide? Crowdsourcing ideas as an emerging form of multistakeholder participation

The question of technologically mediated civic political participation reformulated

Discussing Germany’s Future: The Evaluation of Federal Online Citizen Participation

Reprogramming power through crowdsourcing: time, space and citizenship in crowdsourcing for law in Finland

Crowdsourcing as Reflective Political Practice: Building a Location-based Tool for Civic Learning and Engagement

Civic crowdfunding as a marketplace for participation in urban development

Voices in the Noise: Crowdsourcing Public Opinion using Urban Pervasive Technologies