Below is a selection of the 10 most read posts at DemocracySpot in 2013. Thanks to all of those who stopped by throughout the year, and happy 2014.
Few movies have captured the imagination of scholars as much as 12 Angry Men, in which a jury of 12 men must deliberate on the fate of a Puerto Rican man accused of murder. When I researched the literature on the movie a few years back, for instance, I found that an entire issue of the Chicago-Kent Law Review had been dedicated to it on its 50th anniversary. In its opening article, law professor Nancy Marder explains why:
“The movie was, and remains, an anomaly in the annals of jury movies. Whereas most movies with a jury show the jury as a silent, brooding presence whose main job is to observe (…). The jurors in 12 Angry Men are the focus of the movie, and they are a loud, active bunch of men whose deliberations are fraught with conflict. Indeed, the dynamic of this group deliberation constitutes the drama of this movie.”
I couldn’t agree more with Professor Marder. But it is not just the jury, as a trial institution, that has led the movie to captivate so many scholars. Academics interested in group dynamics, deliberation and collective intelligence often use the movie as a reference when illustrating the peculiarities of deliberative processes. Cass Sunstein, for instance, wrote an article [PDF] on group polarization, arguing that – on his reading of the issue – the movie seems to defy the logic of deliberation. Conversely, Hélène Landemore [PDF], building on previous work by Scott Page, uses 12 Angry Men to highlight how diversity enables groups to reach better decisions.
But I will not go into too much detail: if you haven’t watched the movie yet (starring Henry Fonda as Juror #8), it is a must-see.
A paper by Tanja Aitamurto (Tampere) and Hélène Landemore (Stanford) on an interesting crowdsourcing exercise in Finland.
This paper reports on a pioneering case study of a legislative process open to the direct online participation of the public. The empirical context of the study is a crowdsourced off-road traffic law in Finland. On the basis of our analysis of the user content generated to date and a series of interviews with key participants, we argue that the process qualifies as a promising case of mass-scale deliberation. This case study will make an important contribution to the understanding of online methods for participatory and deliberative democracy. First, the preliminary findings indicate that there is deliberation in the crowdsourcing process, which occurs organically (to a certain degree) among the participants, despite the lack of incentives for it. Second, the findings strongly indicate that there is a strong educative element in the crowdsourced lawmaking process, as the participants share information and learn from each other. The peer-learning aspect could be made even stronger through the addition of design elements in the process and in the crowdsourcing software.
The first two things that come to mind when reading this are:
- If there is a “strong educative element” in the crowdsourcing process, we have an argument for large-scale citizen participation. The more citizens take part in a process, the more citizens benefit from the educative element.
- If we consider point 1 to be true, there is still a major technical challenge in having appropriate platforms to enable large-scale deliberative processes. For instance, I have some reservations about crowdsourcing efforts that use ideation systems like Ideascale (as is the case in this experiment). In my opinion, such systems are prone to information cascades and a series of other biases that compromise an exercise in terms of a) deliberative quality and b) final outcomes (i.e. the quality of ideas).
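To illustrate the cascade concern, here is a toy simulation (entirely my own construction, not a model of Ideascale itself): sequential voters rate an idea, and each voter either follows a noisy private signal about the idea's quality or herds on the visible running score. With enough herding, the final score reflects the first few votes more than the idea itself.

```python
import random

# Toy illustration of an information cascade in an up-vote system.
# Each voter sees the running average score before voting; with
# probability `herd_prob` they copy the visible majority instead of
# acting on their private signal about the idea's true quality.

def simulate_votes(n_voters, herd_prob, quality=0.5, seed=1):
    rng = random.Random(seed)
    votes = []
    for _ in range(n_voters):
        private = 1 if rng.random() < quality else 0
        if votes and rng.random() < herd_prob:
            vote = 1 if sum(votes) / len(votes) >= 0.5 else 0  # herd on the majority
        else:
            vote = private  # vote independently
        votes.append(vote)
    return sum(votes) / len(votes)

independent = simulate_votes(1000, herd_prob=0.0)  # tracks the true quality
herding = simulate_votes(1000, herd_prob=0.8)      # amplifies early votes
```

With no herding the aggregate score hovers near the idea's true quality; with heavy herding it drifts toward whichever way the first few votes happened to fall, which is the bias I worry about in ideation platforms.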
There’s still lots to learn on that front, and there is a dire need for more research of this type. Kudos should also go to the proponents of the initiative, who involved the authors in the project from the start.
Read the full paper here [PDF].
A little while ago I listed a few of my favorite readings and videos about collective intelligence. But since then I have been extremely bothered by the fact that I forgot to include in the list some references to Scott Page’s work. In my opinion, Scott is one of the most important references for anyone interested in subjects such as collective intelligence, epistemic democracy, crowdsourcing, prediction models, and group performance. For instance, his book “The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies” is one of the best readings I’ve recently come across in the field.
It should not surprise anyone that some of the smartest people currently working on collective intelligence do not hesitate to cite Scott’s work over and over again in their writings.
As Scott highlights the importance of cognitive diversity for collective problem-solving (where diversity trumps ability), he ends up indirectly providing convincing arguments as to why – under certain conditions – citizens may outperform elected officials and experts. Scott’s work thus becomes compulsory reading for those working with citizen participation.
So I tried to compile a small list of freely available resources for those with an interest in any of the issues mentioned above:
- Virginia University Lecture
- UCSD Lecture (a more recent talk, which includes a great account of the role of diversity in the Netflix algorithm competition)
- Coursera lecture on the Diversity Prediction Theorem
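For readers who prefer to see the math run, the Diversity Prediction Theorem says that a crowd's squared error equals the average individual squared error minus the diversity of the predictions. A minimal sketch, with made-up guesses:

```python
# Scott Page's Diversity Prediction Theorem:
#   crowd squared error = average individual squared error - prediction diversity
# The guesses and true value below are invented for illustration.

def diversity_prediction(guesses, truth):
    crowd = sum(guesses) / len(guesses)  # the crowd's average guess
    crowd_error = (crowd - truth) ** 2
    avg_error = sum((g - truth) ** 2 for g in guesses) / len(guesses)
    diversity = sum((g - crowd) ** 2 for g in guesses) / len(guesses)
    return crowd_error, avg_error, diversity

crowd_error, avg_error, diversity = diversity_prediction([80, 95, 105, 140], truth=100)
assert abs(crowd_error - (avg_error - diversity)) < 1e-9  # the identity holds exactly
```

Here the crowd errs by 25 while the average individual errs by 512.5; the diversity of the guesses (487.5) accounts for the entire gap, which is why a diverse group can beat its average member.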
When designing citizen engagement mechanisms, I always consider sortition (or random selection) as a mechanism of participant selection. Nevertheless, particularly in the #opengov space, my experience is that this idea does not resonate much: it sounds less sexy than crowdsourcing and more complicated than over-simplistic mechanisms of “civil society engagement”.
This is why it is always great to see someone like Lawrence Lessig putting forward a system of “Citizen Conventions”, based upon sortition, for proposing amendments to the Constitution. In the video below, at a hearing before the U.S. Senate Judiciary Committee, Lessig explains in a few seconds how such a system would work:
With his unique eloquence, Lessig also makes the best case for ordinary citizens to engage with the Constitution and reforms:
I think to the surprise of many people, you would see that ordinary people deliberating about what the Constitution needs and how the reforms should go forward, would far surpass ninety-eight percent of what is commonly discussed in this particular context. And that’s because, frankly, politics is the one sport where the amateur is better for the nation than the professional.
Lessig’s remark on the amateur’s role in politics reminds me of something I read a while ago in the apologue of Protagoras. When charged with bringing the art of politics to humans, Mercury asks Jupiter whether it should be distributed like the other arts, to the competent only. Jupiter replies that the art of politics should be distributed to all: otherwise, says Jupiter, the city would not exist.
The Oregon Citizens’ Initiative Review is without doubt one of the most interesting recent innovations in the field of citizen engagement.
Here’s an excerpt from Participedia on the initiative:
The Oregon Citizen Initiative Review (Oregon CIR) is a Citizens’ Initiative Review designed to allow citizens of the U.S. state of Oregon to evaluate statewide ballot initiatives. A Citizens’ Initiative Review (CIR) is a Citizens’ Jury that deliberates about a ballot initiative. The Oregon CIR is intended to give voters clear, useful, and trustworthy evaluations of initiatives on the ballot. (…)
The Oregon CIR involves four categories of participants: panelists, the citizens who deliberate about a ballot initiative; advocates, individuals who are knowledgeable about the ballot initiative and who argue in support of or in opposition to the ballot initiative; stakeholders, individuals who will be affected by the ballot initiative, who also argue in support of or in opposition to the ballot initiative; and background witnesses, individuals who are knowledgeable about issues related to the initiative, and who present neutral background information about those issues to the panelists. (…)
I have written before about different methods of participant selection, and this is one of the strong points of this initiative:
To select the panelists for the 2010 Oregon CIRs, HDO used the following selection process: HDO took a probability sample of 10,000 Oregon voters. All voters in this sample were sent an invitation to participate in the 2010 Oregon CIR and a demographic survey. Three hundred fifty members of the sample responded, for a response rate of 3.5%. From those who responded, HDO, using the demographic data from the sample survey, anonymously chose 24 panelists, and 5 alternate panelists, for each 2010 Oregon CIR. The panelists and alternates for each CIR were chosen using stratification, so that each panel closely matched the Oregon population in terms of place of residence, political partisanship, education, ethnicity/race, gender, and age.
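As a rough illustration of the stratification step described above, here is a simplified sketch (hypothetical names, and a single invented "region" stratum; the actual HDO process stratified on residence, partisanship, education, ethnicity/race, gender and age simultaneously):

```python
import random
from collections import defaultdict

# Simplified sketch of stratified panel selection from survey respondents.
# Each stratum fills seats in proportion to its share of the population,
# so the panel mirrors the population rather than the (self-selected) respondents.

def stratified_panel(respondents, population_shares, panel_size, seed=0):
    """respondents: list of (person_id, stratum); population_shares: stratum -> share."""
    rng = random.Random(seed)
    pools = defaultdict(list)
    for person, stratum in respondents:
        pools[stratum].append(person)
    panel = []
    for stratum, share in population_shares.items():
        quota = round(panel_size * share)  # seats this stratum should fill
        panel.extend(rng.sample(pools[stratum], min(quota, len(pools[stratum]))))
    return panel

# 350 hypothetical survey respondents, roughly two-thirds urban
respondents = [(f"p{i}", "rural" if i % 3 == 0 else "urban") for i in range(350)]
panel = stratified_panel(respondents, {"urban": 0.8, "rural": 0.2}, panel_size=24)
```

The random draw within each stratum preserves the "anonymously chosen" property of the HDO process while the quotas enforce the demographic match.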
If advocates and policymakers in the open government space are really serious about citizen engagement, this is the sort of institutional innovation they should be looking at. Unfortunately, this doesn’t seem to be happening.
Find out more about it at http://healthydemocracy.org/
Tom Steinberg asked me for a list of my favorite recent reads. So here’s the first part of a rather disorganized list of readings and other resources, with sporadic comments on why I like some of them. The list is heterogeneous in terms of subject, method and quality. In my opinion, the common denominator among the different resources is their relevance for those working at the intersection of participation and technology.
ON COLLECTIVE INTELLIGENCE
There is definitely a lot of bad reading out there about collective intelligence. Indeed, many of the discussions and papers are nothing more than half-baked re-readings of ideas and concepts well established in the field of epistemic democracy. But there are a few exceptions, and acquainting myself with Hélène’s awesome work in the domain was one of my highlights of 2012. Here’s a sample:
Landemore, Hélène. 2012. “Democratic Reason: The Mechanisms of Collective Intelligence in Politics.” In Collective Wisdom: Principles and Mechanisms, edited by Hélène Landemore and Jon Elster. Cambridge: Cambridge University Press.
You can find more of Hélène’s work at http://www.helenelandemore.com/.
Also, if you are interested in high-level talks and discussions about collective intelligence, the videos of conferences below are some of the best things out there:
ON COLLECTIVE ACTION
Bond, R. M., C. J. Fariss, J. J. Jones, A. D. I. Kramer, C. Marlow, J. E. Settle, and J. H. Fowler. 2012. “A 61-Million-Person Experiment in Social Influence and Political Mobilization.” Nature 489: 295–298.
Margetts, Helen, Peter John, Stephane Reissfelder, and Scott A. Hale. 2012. “Social Influence and Collective Action: An Experiment Investigating the Effects of Visibility and Social Information Moderated by Personality.”
David Lazer is the co-author of two of these papers. If you don’t know it already, Stuart Shulman’s work is definitely worth checking out. Thamy Pogrebinschi is probably one of the people to look out for in the coming years in the field of participatory democracy.
THE ROI OF CITIZEN ENGAGEMENT
A growing body of literature in the field of tax morale, largely unknown even among the most enthusiastic participation advocates, links citizen engagement to reduced tax evasion: one of the best cases for the ROI of open government. Below is one of the best papers in the field.
And if the subject is the ROI of open government, here’s a paper that links participatory budgeting to reduced infant mortality (and there’s more to be published on that front soon).
RANDOMIZED CONTROLLED TRIALS AND OPEN GOVERNMENT
If I were to make any predictions for 2013, I would say we will start to see a growing number of studies using randomized controlled trials (RCTs) to assess the validity of claims about transparency and participation. Indeed, some donors in the open government space have already started to ask for RCT evaluations as a project component. Here are a couple of examples of how good studies on the subject would look (IMHO):
Of course, scholars, practitioners and donors should take claims about the awesomeness of RCTs with a good grain of salt (and pepper):
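For what it’s worth, the basic logic of such an RCT is simple enough to sketch in a few lines: randomly assign who gets a participation intervention, then compare average outcomes across the two groups. The outcome function and effect size below are invented for illustration.

```python
import random
from statistics import mean

# Bare-bones RCT logic: random assignment, then a difference in mean outcomes.
# Randomization (not matching or modeling) is what balances confounders here.

def run_trial(subjects, outcome, seed=42):
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)  # random assignment to treatment/control
    half = len(pool) // 2
    treated, control = pool[:half], pool[half:]
    return (mean(outcome(s, True) for s in treated)
            - mean(outcome(s, False) for s in control))

# Hypothetical outcome: 30% baseline turnout, +10 points under treatment.
_rng = random.Random(7)
def turnout(subject, treated):
    return 1 if _rng.random() < (0.4 if treated else 0.3) else 0

effect = run_trial(range(10000), turnout)  # estimate lands near +0.10
```

With 10,000 simulated citizens the estimated effect sits close to the true +10 points; real evaluations, of course, add standard errors, pre-registration and all the caveats the links above discuss.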
FUN STUFF ON TURNOUT AND ELECTIONS
Gomez, Brad T., Thomas G. Hansford, and George A. Krause. 2007. “The Republicans Should Pray for Rain: Weather, Turnout, and Voting in U.S. Presidential Elections.” Journal of Politics 69 (August): 649–63.
This is just the first part of a longer list. I hope to finish a second part soon, focusing – among other things – on the (uneasy) intersection of behavioural economics and participatory democracy.
Interesting video (and transcripts) of Thomas Malone (MIT Center for Collective Intelligence) at Edge (highlights are mine):
If it’s not just putting a bunch of smart people in a group that makes the group smart, what is it? We looked at a bunch of factors you might have thought would affect it: things like the psychological safety of the group, the personality of the group members, et cetera. Most of the things we thought might have affected it turned out not to have any significant effect. But we did find three factors that were significantly correlated with the collective intelligence of the group.
The first was the average social perceptiveness of the group members. We measured social perceptiveness in this case using a test developed essentially to measure autism. It’s called the “Reading the Mind in the Eyes Test”. It works by having people look at pictures of other people’s eyes and try to guess what emotions those people are feeling. People who are good at that work well in groups. When you have a group with a bunch of people like that, the group as a whole is more intelligent.
The second factor we found was the evenness of conversational turn taking. In other words, groups where one person dominated the conversation were, on average, less intelligent than groups where the speaking was more evenly distributed among the different group members.
Finally, and most surprisingly to us, we found that the collective intelligence of the group was significantly correlated with the percentage of women in the group. More women were correlated with a more intelligent group. Interestingly, this last result is not just a diversity result. It’s not just saying that you need groups with some men and some women. It looks like it’s a more or less linear trend. That is, more women are better all the way up to all women. It is also important to realize that this gender effect is largely statistically mediated by the social perceptiveness effect. In other words, it was known before we did our work that women on average scored higher on this measure of social perceptiveness than men.
This is the interpretation I personally prefer: it may be that what’s needed to have an intelligent group is just to have a bunch of people in the group who are high on this social perceptiveness measure, whether those people are men or women. In any case, we think it’s an interesting finding, one that we hope to understand better and one that already has some very intriguing implications for how we create groups in many cases in the real world.
You can watch the video here.