High-Frequency Data Using Mobile Phones: Incentives and Accountability

Great research note [PDF] by Croke et al. (2012). Here’s the abstract:

As mobile phone ownership rates have risen dramatically in Africa, there has been increased interest in using mobile telephones as a data collection platform. This note draws on two largely successful pilot projects in Tanzania and South Sudan that used mobile phones for high-frequency data collection. Data were collected on a wide range of topics and in a manner that was cost-effective, flexible, and rapid. Once households were included in the survey, they tended to stick with it: respondent fatigue has not been a major issue. While attrition and nonresponse have been challenges in the Tanzania survey, these were due to design flaws in that particular survey, challenges that can be avoided in future similar projects. Ensuring use of the data to demand better service delivery and policy decisions turned out to be as challenging as collecting the high-quality data. Experiences in Tanzania suggest that good data can be translated into public accountability, but also demonstrate that just putting data out in the public domain is not enough. This note discusses lessons learned and offers suggestions for future applications of mobile phone surveys in developing countries, such as those planned for the World Bank’s “Listening to Africa” initiative.

Of particular interest to me is the fact that part of the design used financial incentives as a means to reduce nonresponse and attrition rates. In the technology and development world there has been lots of talk about “incentives to participate”, where the practical shortcut is often the provision of financial incentives. In Tanzania, for instance, the authors report that “respondents who successfully completed an interview were rewarded with an amount varying from $2 to $4”, not a negligible sum in the Tanzanian context.

Interestingly, in the working paper [PDF] from which this note is drawn, a footnote sheds some light on how effective these rewards were:

Remarkably in both Sudan and Tanzania the amount of the reward did not have a discernable impact on response rates.

But these findings are not as surprising as they may seem. Indeed, there is a good deal of evidence from behavioural economics suggesting that financial incentives may not work as well as traditional economics (and economists) would predict.

And a noteworthy excerpt on the limits of transparency and the role of existing institutions and actors:

One lesson is that providing citizens with relevant, timely, and accurate data about the actions of politicians, policy makers, and public service providers is not sufficient. For the data to have impact, they need to be accessible and disseminated widely, and in ways that allow them to be utilized by already existing institutions and actors.

This is an interesting point, although I am not sure to what extent existing institutions are enough. In the field of technology and governance, it has become quite clear that very little is achieved when technological solutions are not coupled with institutional innovations.

But that’s another story. In any case, a great read, and the type of effort that is badly needed in this space.