At the intersection of participation and technology. By Tiago C. Peixoto. Opinions are my own and do not reflect those of any institutions with which I am or have been affiliated.
The other day, during a talk with researcher Tanya Lokot, I heard an interesting story from Russia. Disgusted with the state of their streets, activists started painting caricatures of government officials over potholes.
In the case of a central street in Saratov, the immediate response to one of these graffiti was this:
Later on, following increased media attention – and some unexpected turnarounds – the pothole got fixed.
That reminded me of a recurring theme in conversations I have been having: whether praising and shaming matter to civic tech and, if so, to what extent. To stay with two classic examples, think of solutions such as FixMyStreet and SeeClickFix, through which citizens publicly report problems to the authorities.
Assuming governments do take action, what prompts them to do so? At a very basic level, three hypotheses are possible:
1) Governments take action because they gain access to distributed information about problems (of which they are presumably not otherwise aware)
2) Governments take action due to the “naming and shaming” effect, avoiding being publicly perceived as unresponsive (and seeking praise for their actions)
3) Governments take action for both of the reasons above
Some could argue that hypothesis 3 is the most likely to be true, with some governments leaning more towards one reason than the other. Yet the problem is that we know very little about these hypotheses, if anything. In other words – to my knowledge – we do not know whether making reports through these platforms public makes any difference whatsoever to governments’ responsiveness. Some might consider this a useless academic exercise: as long as these tools work, who cares? But I would argue that the answer to that question matters a lot for the design of similar civic tech initiatives that aim to prompt governments to action.
Let’s suppose we find that, all else equal, governments are significantly more responsive to citizen reports when these are publicly displayed. This would matter both for process and for technological design. In terms of process, for instance, civic tech initiatives would probably be more successful if they devoted part of their resources to amplifying the visibility of government action and inaction (e.g. through local media). From a technological standpoint, designers should devote substantially more effort to interfaces that maximize praising and shaming of governments based on their performance (e.g. rankings, highlighting pending reports). Conversely, we might find that publicizing reports has very little effect on responsiveness. In that case, more work would be needed to figure out which other factors – beyond will and capacity – play a role in government responsiveness (e.g. the quality of reports).
Most likely, the effect of praising and shaming depends on a number of factors such as political competition, bureaucratic autonomy, and internal performance routines. But a finer understanding of this would have an impact not only on the civic tech field but across the whole accountability landscape. To date, we know very little about it. Yet part of the untapped potential of civic technology is precisely that of conducting experiments at lower cost. For instance, running randomized controlled trials on the effects of publicizing government responsiveness should not be so complicated (e.g. the effects of rankings, or of amplifying the visibility of unfixed problems). Add to that analysis of existing data from civic tech platforms, and some good qualitative work, and we might get a lot closer to figuring out what makes politicians and civil servants tick.
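To make the experimental logic concrete, here is a minimal sketch of such a trial: reports are randomly assigned to be publicly displayed or kept visible only to the authorities, and resolution rates are then compared. The column names (`made_public`, `resolved_within_30d`), the baseline rate and the “publicity” effect are all invented for illustration; this describes a possible design, not any existing platform.

```python
import random
from statistics import mean

# Hypothetical list of incoming citizen reports (e.g. potholes).
reports = [{"id": i} for i in range(1000)]

# Randomly assign each report to be publicly displayed (treatment)
# or kept visible only to the authorities (control).
for report in reports:
    report["made_public"] = random.random() < 0.5

# Time passes; the platform records whether each report was resolved
# within 30 days (simulated here with an assumed baseline and a
# purely hypothetical "naming and shaming" lift).
for report in reports:
    baseline = 0.30                                  # assumed baseline resolution rate
    lift = 0.10 if report["made_public"] else 0.0    # hypothetical publicity effect
    report["resolved_within_30d"] = random.random() < baseline + lift

treated = [r["resolved_within_30d"] for r in reports if r["made_public"]]
control = [r["resolved_within_30d"] for r in reports if not r["made_public"]]

# The estimated effect of publicity is the difference in resolution rates.
print(f"Public reports resolved:  {mean(treated):.2%}")
print(f"Private reports resolved: {mean(control):.2%}")
print(f"Estimated effect of publicity: {mean(treated) - mean(control):+.2%}")
```

In practice one would of course add a proper significance test and control for characteristics of the reports (type, neighbourhood, severity), but the core design is as simple as the randomization above.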
Until now, behavioral economics in public policy has been mainly about nudging citizens toward preferred choices. Yet it may be time to start also working in the opposite direction, nudging governments to be more responsive to citizens. Understanding whether praising and shaming works (and if so, how and to what extent) would be an important step in that direction.
Rio Grande do Sul Participatory Budgeting Voting System (2014)
Within the open government debate, there is growing interest in the role of technology in citizen engagement. However, as interest in the subject grows, so does the superficiality of the conversations that follow. While the number of citizen engagement and technology events is increasing, the opportunities for in-depth conversations on the subject do not seem to be increasing at the same rate.
This is why, a few weeks ago, I was pleased to visit the University of Westminster for a kick-off talk on “Technology and Participation: Friend or Foe?”, organized by Involve and the Centre for the Study of Democracy (Westminster). It was a pleasure to start a conversation with a group that was willing to engage in a longer and more detailed conversation on the subject.
My talk covered a number of issues that have been keeping me busy recently. Credit for the preliminary quantitative work that I presented should also go to the awesome team that I am working with, which includes Fredrik Sjoberg (NYU), Jonathan Mellon (Oxford) and Paolo Spada (UBC / Harvard). For those who would like a closer look at some of the graphs, I have also added here [PDF] the slides of my presentation.
I have skipped the video to the beginning of my talk, but the discussion that followed is what made the event interesting. In my opinion, the contributions of Maria Nyberg (Head of Open Policy Making at the Cabinet Office) and Catherine Howe (Public-i), as well as those of the participants, were a breath of fresh air in the current citizen engagement conversation. So please bear with me and watch until the end.
I would like to thank Simon Burral (Involve) and Graham Smith (Westminster) for their invitation. Simon leads the great work being done at Involve, one of the best organizations working on citizen engagement today. And to keep it short, Graham is the leading thinker when it comes to democratic innovations.
Below is also an excellent summary by Sonia Bussu (Involve), capturing some of the main points of my talk and the discussion that ensued (originally posted here).
***
“On technology and democracy
The title of yesterday’s event, organised by Involve and Westminster University’s Centre for the Study of Democracy, posed a big question, which inevitably led to several other big questions, as the discussion among a lively audience of practitioners, academics and policymakers unfolded (offline and online).
Tiago Peixoto, from the World Bank, kicked off the debate and immediately put the enthusiasm for new technologies into perspective. Back in 1795, the very first model of the telegraph, the Napoleonic semaphore, raised hopes for – and fears of – greater citizen engagement in government. Similarly the invention of the TV sparked debates on whether technology would strengthen or weaken democracy, increasing citizen awareness or creating more opportunities for market and government manipulation of public opinion.
Throughout history, technological developments have marked societal changes, but has technological innovation translated into better democracy? What makes us excited today about technology and participation is the idea that by lowering the transaction costs we can increase people’s incentives to participate. Tiago argued that this cost-benefit rationale doesn’t explain why people continue to vote, since the odds of their vote making a difference are infinitesimal (to be fair, voter turnout is decreasing across most advanced democracies – although this is more a consequence of people’s increasing cynicism towards political elites than of their understanding of mathematical probabilities).*
So do new technologies mobilise more people or simply normalise the participation of those that already participate? The findings on the matter are still conflicting. Tiago showed us some data on online voting in Rio Grande do Sul’s participatory budgeting process in Brazil, whereby e-voting would seem to bring in new voters (supporting the mobilisation hypothesis) but from the same social strata (e.g. higher income and education – as per the normalisation hypothesis).
In short, we’re still pretty much confused about the impact of technology on democracy and participation. Perhaps, as suggested by Tiago and Catherine Howe from Public-i, the problem is that we’re focusing too much on technology, tempted by the illusion it offers to simplify and make democracy easy. But the real issue lies elsewhere, in understanding people and policymakers’ incentives and the articulation (or lack thereof) between technologies and democratic institutions. As emphasised by Catherine, technology without democratic evolution is like “lipstick on a pig”.
The gap between institutions and technology is still a big obstacle. Catherine reminded us how participation often continues to translate into one-way communication in government’s engagement strategies, which constrains the potential of new technologies in facilitating greater interaction between citizens and institutions and coproduction of policies as a response to increasing complexity. As academics and practitioners pitch the benefits of meaningful participation to policy makers, Tiago asked whether a focus on instrumental incentives might help us move forward. Rather than always pointing to the normative argument of deepening democracy, we could start using data from cases of participatory budgeting to show how greater participation reduces tax evasion and corruption as well as infant mortality.
He also made a methodological point: we might need to start using more effectively the vast array of data on existing engagement platforms to understand incentives to participation and people’s motivation. We might get some surprises, as findings demystify old myths. Data from Fix My Street would seem to prove that government response to issues raised doesn’t increase the likelihood of future participation by as much as we would assume (28%).** But this is probably a more complicated story, and as pointed out by some people in the audience the nature and salience of both the issue and the response will make a crucial difference.
Catherine highlighted one key problem: when we talk about technology, we continue to get stuck on the application layer, but we really need to be looking at the architecture layer. A democratic push for government legislation over the architecture layer is crucial for preserving the Internet as a neutral space where deeper democracy can develop. Data is a big part of the architecture and there is little democratic control over it. An understanding of a virtual identity model that can help us protect and control our data is key for a genuinely democratic Internet.
Maria Nyberg, from the Cabinet Office, was very clear that technology is neither friend nor foe: like everything, it really depends on how we use it. Technology is all around us and can’t be peripheral to policy making. It offers great opportunities to civil servants as they can tap into data and resources they didn’t have access to before. There is a recognition from government that it doesn’t have the monopoly on solutions and doesn’t always know best. The call is for more open policy making, engaging in a more creative and collaborative manner. Technology can allow for better and faster engagement with people, but there is no silver bullet.
Some people in the audience felt that the drive for online democracy should be citizen-led, as the internet could become the equivalent of a “bloodless guillotine” for politicians. But without net neutrality and citizen control over our own data there might be little space for genuine participation.
*This point was edited on 12/07/2014 following a conversation with Tiago.
** This point was edited on 12/07/2014 following a conversation with Tiago.”
—————————
I am also thankful to the UK Political Studies Association (PSA), Involve and the University of Westminster for co-sponsoring my travel to the UK. I will write more later on about the Scaling and Innovation Conference organized by the PSA, where I was honored to be one of the keynote speakers along with MP Chi Onwurah (Shadow Cabinet Office Minister) and Professor Stephen Coleman (Leeds).
Having a refined understanding of what leads people to participate is one of the main concerns of those working with citizen engagement. But particularly when it comes to participatory democracy, that understanding is only partial and, most often, the cliché “more research is needed” is definitely applicable. This is so for a number of reasons, four of which are worth noting here.
1) The “participatory” label is applied to greatly varied initiatives, raising obvious methodological challenges for comparative research and cumulative learning. For instance, while both participatory budgeting and online petitions can be roughly categorized as “participatory” processes, they are entirely different in fundamental aspects such as their goals, institutional design and expected impact on decision-making.
2) The fact that many participatory initiatives are conceived as “pilots” or one-off events gives researchers little time to understand the phenomenon, come up with sound research questions, and test different hypotheses over time. The “pilotitis” syndrome in the tech4accountability space is a good example of this.
3) When participatory processes are designed and implemented under budget constraints, the first victims are documentation, evaluation and research. Apart from a few exceptions, this leads to a scarcity of data and basic information that undermines even the most heroic “archaeological” efforts of retrospective research and evaluation (a far from ideal approach).
4) The semantic extravaganza that currently plagues the field of citizen engagement, technology and open government makes cumulative learning all the more difficult.
Precisely for the opposite reasons, our knowledge of electoral participation is in better shape. First, despite the differences between elections, comparative work is relatively easy, as attested by the high number of cross-country studies in the field. Second, the fact that elections (for the most part) are repeated regularly and follow a similar design enables the refinement of hypotheses and research questions over time, as well as specific time-related analyses (see an example here [PDF]). Third, compared to the funds allocated to research on participatory initiatives, the relative amount of resources channeled into electoral studies and voting behavior is significantly higher. Here I am not referring only to academic work but also to the substantial resources invested by the private sector and political parties in better understanding elections and voting behavior. This includes a growing body of knowledge generated by get-out-the-vote (GOTV) research, with fascinating experimental evidence from interventions that seek to increase participation in elections (e.g. door-to-door campaigns, telemarketing, e-mail). Add to that the wealth of electoral data that is available worldwide (in machine-readable formats) and you have some pretty good knowledge to tap into. Finally, both conceptually and terminologically, the field of electoral studies is much more consistent than that of citizen engagement – something which, in the long run, drastically affects how knowledge of a subject evolves.
These reasons should be sufficient to capture the interest of those who work with citizen engagement. While the extent to which the knowledge from the field of electoral participation can be transferred to non-electoral participation remains an open question, it should at least provide citizen engagement researchers with cues and insights that are very much worth considering.
“In these [rationalist] models citizens are seen as weighing the anticipated trouble they must go through in order to cast their votes, against the likelihood that their vote will improve the outcome of an election times the magnitude of that improvement. Of course, these models are problematic because the likelihood of casting the deciding vote is often hopelessly small. In a typical state or national election, a person faces a higher probability of being struck by a car on the way to his or her polling location than of casting the deciding vote.”
(BTW, if you are a voter in certain US states, the odds of being hit by a meteorite are greater than those of casting the deciding vote).
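The “weighing” the authors describe is usually formalized – in the classic Riker–Ordeshook calculus of voting, not in the article’s own notation – roughly as:

R = pB − C + D

where p is the probability that one’s vote is decisive, B the benefit of one’s preferred outcome winning, C the cost of voting, and D the expressive or “civic duty” payoff. Since p is vanishingly small, the pB term is effectively zero, which is precisely why purely instrumental accounts struggle and why the self-expressive D term ends up doing most of the explanatory work in the framework discussed below.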
Following on from the fact that traditional models cannot fully explain why and under what conditions citizens vote, the authors develop a framework that considers voting as a “self-expressive voting behavior that is influenced by events occurring before and after the actual moment of casting a vote.” To support their claims, throughout the article the authors build upon existing evidence from GOTV campaigns and other behavioral research. Besides providing a solid overview of the literature in the field, the authors present compelling arguments for mobilizing electoral participation. Below are a few excerpts from the article with some of the main takeaways:
Mode of contact: the more personal it is, the more effective it is
“Initial experimental research found that a nonpartisan face-to-face canvassing effort had a 5-8 percentage point mobilizing effect in an uncontested midterm election in 1998 (Gerber and Green 2000) compared to less than a 1 percentage point mobilizing effect for live phone calls and mailings. More than three dozen subsequent experiments have overwhelmingly supported the original finding (…)”
“Dozens of experiments have examined the effectiveness of GOTV messages delivered by the telephone. Several general findings emerge, all of which are consistent with the broad conclusion that the more personal a GOTV strategy, the more effective. (…) the most effective calls are conducted in an unhurried, “chatty manner.”
“The least personal and the least effective GOTV communication channels entail one-way communications. (…) written pieces encouraging people to vote that are mailed directly to households have consistently been shown to produce a small, but positive, increase in turnout.”
Voting is affected by events before and after the decision
“One means to facilitate the performance of a socially desirable behavior is to ask people to predict whether they will perform the behavior in the future. In order to present oneself in a favorable light or because of wishful thinking or both, people are generally biased to answer in the affirmative. Moreover, a number of studies have found that people are more likely to follow through on a behavior after they predicted that they will do so (…) Emerging social-networking technologies provide new opportunities for citizens to commit to each other that they will turn out in a given election. These tools facilitate making one’s commitments public, and they also allow for subsequent accountability following an election (…) Asking people to form a specific if-then plan of action, or implementation intention, reduces the cognitive costs of having to remember to pursue an action that one intends to perform. Research shows that when people articulate the how, when and where of their plan to implement an intended behavior, they are more likely to follow through.”
(Not coincidentally, as noted by Sasha Issenberg in his book The Victory Lab, during the 2010 US midterm elections millions of Democrats received an email reminding them that they had “made a commitment to vote in this election” and that “the time has come to make good on that commitment. Think about when you’ll cast your vote and how you’ll get there.”)
“(…) holding a person publicly accountable for whether or not she voted may increase her tendency to do so. (…) Studies have found that when people are merely made aware that their behavior will be publicly known, they become more likely to behave in ways that are consistent with how they believe others think they should behave. (…) At one point, at least, Italy exposed those who failed to vote by posting the names of nonvoters outside of local town halls.”
(On the accountability issue, also read this fascinating study [PDF] by Gerber, Green & Larimer)
Following the herd: affiliative and belonging needs
“People are strongly motivated to maintain feelings of belonging with others and to affiliate with others. (…) Other GOTV strategies that can increase turnout by serving social needs could involve encouraging people to go to their polling place in groups (i.e., a buddy system), hosting after-voting parties on election day, or encouraging people to talk about voting with their friends, to name a few.”
“(…) studies showed that the motivation to vote significantly increased when participants heard a message that emphasized high expected turnout as opposed to low expected turnout. For example, in the New Jersey study, 77% of the participants who heard the high-turnout script reported being “absolutely certain” that they would vote, compared to 71% of those who heard the low-turnout script. This research also found that moderate and infrequent voters were strongly affected by the turnout information.”
Voting as an expression of identity
“(…) citizens can derive value from voting through what the act displays about their identities. People are willing to go to great lengths, and pay great costs, to express that they are a particular kind of person. (…) Experimenters asked participants to complete a fifteen-minute survey that related to an election that was to occur the following week. After completing the survey, the experimenter reviewed the results and reported to participants what their responses indicated. Participants were, in fact, randomly assigned to one of two conditions. Participants in the first condition were labeled as being “above-average citizen[s] … who [are] very likely to vote,” whereas participants in the second condition were labeled as being “average citizen[s] … with an average likelihood of voting.” (…) These identity labels proved to have substantial impact on turnout, with 87% of “above average” participants voting versus 75% of “average” participants voting.”
For those working with participatory governance, the question that remains is the extent to which each of these lessons is applicable to non-electoral forms of participation. The differences between electoral and non-electoral forms of participation may cause these techniques to generate very different results. One difference relates to public awareness about participation opportunities. While it would be safe to say that during an important election the majority of citizens are aware of it, the opposite is true for most existing participatory events, where generally only a minority is aware of their existence. In this case, it is unclear whether the impact of mobilization campaigns would be more or less significant when awareness about an event is low. Furthermore, if the act of voting may be automatically linked to a sense of civic duty, would that still hold true for less typical forms of participation (e.g. signing an online petition, attending a community meeting)?
The answer to this “transferability” question is an empirical one, and one that is yet to be answered. The good news is that while experiments that generate this kind of knowledge are normally resource intensive, the costs of experimentation are driven down when it comes to technology-mediated citizen participation. The use of A/B testing during the Obama campaign is a good example. Below is an excellent account by Dan Siroker on how they conducted online experiments during the presidential campaign.
Bringing similar experiments to other realms of digital participation is the next logical step for those working in the field. Some organizations have already started to take this seriously. The issue is whether others, including governments and donors, will do the same.
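For readers unfamiliar with the mechanics, here is a minimal sketch of the kind of A/B comparison such online experiments rely on, using a standard two-proportion z-test. The scenario and the sign-up numbers are invented for illustration and do not come from the Obama campaign or any real platform.

```python
from math import sqrt

# Hypothetical results of an A/B test on a participation sign-up page:
# each visitor is randomly shown variant A or variant B, and we record
# how many of them go on to register.
visitors_a, signups_a = 10_000, 820   # variant A: existing page
visitors_b, signups_b = 10_000, 910   # variant B: new call-to-action

rate_a = signups_a / visitors_a
rate_b = signups_b / visitors_b

# Two-proportion z-test: is the difference larger than chance would suggest?
pooled = (signups_a + signups_b) / (visitors_a + visitors_b)
se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
z = (rate_b - rate_a) / se

print(f"Variant A sign-up rate: {rate_a:.2%}")
print(f"Variant B sign-up rate: {rate_b:.2%}")
print(f"z-statistic: {z:.2f}  (roughly, |z| > 1.96 ~ significant at the 5% level)")
```

The same logic applies whether the outcome being tested is a campaign sign-up, a petition signature, or a report submitted to a local government.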
Discussions about incentives to participate are increasingly common, but they are as shallow as most conversations nowadays about the subject of “feedback loops”. And very little reflection is actually dedicated to questions such as why, when and how people participate.
This is why this talk by Judd Antin, User Experience Researcher at Facebook, is one of the best I’ve heard lately. He goes a step further than making commonsensical assumptions and examines the issue of motivations to participate in a more critical and systematic manner. When it comes to technology-mediated processes, Judd is actually one of the few people looking seriously at the issue of incentives and motivations to participate.
In the talk Judd begins by arguing that “(…) the foundations of motivation in the age of social media, they are kind of the same as the foundations of motivation before the age of social media.” I cannot help but agree and sympathize with the statement. It is particularly annoying to hear on a daily basis claims suggesting that individual and social processes are fundamentally altered by technologies, and “how new” this field is. “I don’t fool myself into thinking that this is a brand new world”, remarks Judd. Too bad so many are fooling themselves these days.
Judd’s take on incentives to participate is particularly sobering for some cheerleaders of gamification, highlighting the limits of instrumental rewards and the need to focus on issues such as group identification, efficacy and – importantly – simplicity.
Finally, and on a more anecdotal note, it is interesting to see how some issues are similar across different spaces. At some point Judd points out that the “dislike” button is one of the features most requested by Facebook users. In a similar vein, one of the most requested features for e-Petitions platforms is the possibility to sign “against” a petition.
If I can vote for an e-petition, why can't I vote against it? Net support more significant than gross support?
In both cases, these requests have been largely ignored. My feeling is that the implications of these design choices for collective action are far from neutral, and these are issues we should be looking at more closely.
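A toy example (with invented numbers) shows why the choice is not neutral: ranking petitions by gross rather than net support can reverse their order.

```python
# Invented petition counts, for illustration only.
petitions = [
    {"title": "Build a new cycle lane", "for": 12_000, "against": 11_000},
    {"title": "Keep the local library open", "for": 5_000, "against": 200},
]

for p in petitions:
    p["gross"] = p["for"]                  # signatures in favour only
    p["net"] = p["for"] - p["against"]     # support minus opposition

by_gross = sorted(petitions, key=lambda p: p["gross"], reverse=True)
by_net = sorted(petitions, key=lambda p: p["net"], reverse=True)

print("Ranked by gross support:", [p["title"] for p in by_gross])
print("Ranked by net support:  ", [p["title"] for p in by_net])
# The cycle lane tops the gross ranking (12,000 signatures in favour),
# but the library petition tops the net ranking (+4,800 vs +1,000).
```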
In any case, Judd’s talk is great, and so are his articles: you can find a list of his most recent ones below.