Crowd Wisdom

I just posted the article about Ushahidi and its future challenges that was published in Index on Censorship last month (‘Crowd Wisdom’ by Heather Ford, Index on Censorship, December 2012, vol. 41, no. 4, pp. 33-39, doi: 10.1177/0306422012465800). I wrote about Ushahidi’s emergence as a powerful tool used in countries around the world to document elections, disasters and food, among other things, and about the coming challenges as the majority of Ushahidi implementations remain ‘small data’ projects while tools move towards automatic verification, something only possible with ‘Big Data’.

Can Ushahidi Rely on Crowdsourced Verifications?

First published on PBS Idea Lab

During the aftermath of the Chilean earthquake last year, the Ushahidi-Chile team received two reports — one through the platform, the other via Twitter — that indicated an English-speaking foreigner was trapped under a building in Santiago.

“Please send help,” the report read. “i am buried under rubble in my home at Lautaro 1712 Estación Central, Santiago, Chile. My phone doesnt work.”

A few hours later, a second, similar report was sent to the platform via Twitter: “RT @biodome10: plz send help to 1712 estacion central, santiago chile. im stuck under a building with my child. #hitsunami #chile we have no supplies.”

[Image: earthquake.jpg]

An investigation a few days later revealed that both reports were false and that the Twitter user was impersonating a journalist working for the Dallas Morning News. But the revelation came too late to stop two police deployments in Santiago, which rushed to the rescue before realizing that the area had not been affected by the quake and that the couple living at the address was alive and well.

Is false information like this just a necessary by-product of “crowdsourced” environments like Ushahidi? Or do we need to do more to help deployment teams, emergency personnel and users better assess the accuracy of reports hosted on our platform?

Ushahidi is a non-profit tech company that develops free and open-source software for information collection, visualization and interactive mapping. We’ve just published an initial study of how Ushahidi deployment teams manage and understand verification on the platform. Doing this research has surfaced a couple of key challenges about the way that verification currently works, as well as a few easy wins that might add some flexibility into the system. It’s also revealed some questions as we look to improve the platform’s ability to do verification on large quantities of data in the future.

What We’ve Learned

We’ve learned that we need to add more flexibility to the system, enabling deployment teams to choose whether or not they want to use the “verified” and “unverified” tagging functionality. We’ve learned that the binary terms we’re currently using don’t capture other attributes of reports that are necessary for establishing both trust and “actionability” (i.e., the ability to act on the information). For example, the “unverified” tag does not capture whether a report is considered to be an act of “misinformation” or just incomplete, lacking the contextual clues necessary to determine whether it is accurate or not.

We need to develop more flexibility to accommodate these different attributes, but we also need to think beyond these final determinations and understand that users might want contextual information (rather than a final determination on its verification status) to determine for themselves whether a report is trustworthy or not. After all, verification tags mean nothing unless those who must make decisions based on that information trust the team doing the verification.
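
To make this concrete, here is a minimal sketch in Python of what a less binary status model could look like. The class names, statuses and fields are hypothetical illustrations of the ideas above, not the actual Ushahidi data model.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class VerificationStatus(Enum):
    """Hypothetical statuses that go beyond a binary verified/unverified flag."""
    UNREVIEWED = "unreviewed"                    # nobody has assessed the report yet
    INCOMPLETE = "incomplete"                    # lacks the contextual clues needed to judge accuracy
    SUSPECTED_MISINFORMATION = "misinformation"  # believed to be deliberately false
    VERIFIED = "verified"

@dataclass
class Report:
    """A report plus the contextual clues (who, what, where, when, how, why)
    that a deployment team could expose instead of a bare verified tag."""
    text: str
    status: VerificationStatus = VerificationStatus.UNREVIEWED
    context: dict = field(default_factory=dict)    # e.g. {"who": ..., "where": ...}
    verified_by: Optional[str] = None              # which team or person made the call

# Example: a report marked incomplete, with the missing context made visible to users.
report = Report(
    text="Building collapsed near the market",
    status=VerificationStatus.INCOMPLETE,
    context={"where": "address not given", "when": "not stated"},
)
```

Exposing fields like the context and who made the call is one way of giving users the information to make that judgement for themselves rather than asking them to trust an opaque tag.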

The fact that many deployments are set up by teams of concerned citizens who may never have worked together before, and who are therefore unknown to the organizations using the data, makes this an important requirement. Here, we’re thinking of the administering deployment team’s job as providing information about the context of a report (answering the who, what, where, when, how and why of traditional journalism, perhaps) and inviting others to help flesh out this information, rather than acting as a “black box” in which the process for determining whether something is verified is opaque to users.

As an organization that is all about “crowdsourcing,” we’re taking a step back and thinking about how the crowd (i.e., people who are not known to the system) might assist in either providing more context for reports or verifying unverified reports. When I talk about the “crowd” here I’m referring to a system that’s permeable to interactions by those we don’t yet know. It’s important to note here that, although Ushahidi is talked about as an example of crowdsourcing, this doesn’t mean that the entire process of submission, publishing, tagging and commenting is open for all. Although anyone can start a map and send a report to the map, only administrators can approve and publish reports or tag a report as “verified.”

How Will Crowdsourcing Verification Work?

If we were to open up this process to “the crowd,” we’d have to think really carefully about the options for facilitating verification by the crowd — many of which won’t work in every deployment. Variables like scale, location and persistence differ in each deployment and can affect where and when crowdsourcing of verification will work and where it will do more harm than good.

Crowdsourcing verification can mean many different things. It could mean flagging reports that need more context and asking for more information from the crowd. But who makes the final decision that enough information has been provided to change the status of that information?

We could think of using the crowd to determine when a statistically significant portion of a community agrees with changing the status of a report to “verified.” But is this option limited to cases where a large number of people are interested in (and informed about) an issue, and could a volume-based indicator like this be gamed, especially in political contexts?

Crowdsourcing verification could also mean giving users the opportunity to use free-form tags to highlight the context of the data and then surfacing the tags that are popular. But again, might this only be accurate when large numbers of users are involved and the number of reports is low? Do we employ an algorithm to rank the quality of reports based on the history of their authors? It’s tempting to imagine that an algorithm alone will solve the data-volume challenge, but algorithms do not work in many cases (especially when reports may be sent by people who don’t have a history of using these tools), and if they’re not trusted, they might push users to hack the system to enable their own processes.
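
As a very rough sketch of how a volume- and history-based indicator like the ones above might be combined, the snippet below weights crowd confirmations by a toy reputation score and only surfaces a report for review once a threshold is crossed. The function names, weights and threshold are invented for illustration, and a scheme like this would remain gameable for exactly the reasons raised above.

```python
# Hypothetical sketch: combine crowd confirmations, weighted by each reporter's
# history, into a *suggestion* for admins rather than an automatic verification.

def reputation(author_history: dict) -> float:
    """Toy reputation score: the fraction of an author's past reports that held up,
    damped so that reporters with no history still count, just for less."""
    total = author_history.get("reports", 0)
    confirmed = author_history.get("confirmed", 0)
    if total == 0:
        return 0.1
    return 0.1 + 0.9 * (confirmed / total)

def crowd_score(confirmations: list) -> float:
    """Sum of reputation-weighted confirmations for a single report."""
    return sum(reputation(c["author_history"]) for c in confirmations)

def suggest_review(confirmations: list, threshold: float = 5.0) -> bool:
    """True means 'surface this report for an admin to review', never 'mark as verified'."""
    return crowd_score(confirmations) >= threshold

# Example: three confirmations from reporters with mixed histories.
confirmations = [
    {"author_history": {"reports": 10, "confirmed": 9}},
    {"author_history": {"reports": 0, "confirmed": 0}},
    {"author_history": {"reports": 4, "confirmed": 2}},
]
print(crowd_score(confirmations), suggest_review(confirmations))
```

Even in this toy form, the final decision stays with the deployment team; the crowd only surfaces candidates.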

An Enduring Question

Verification by the crowd is indeed a large and enduring question for all crowdsourced platforms, not just Ushahidi. The question is how we can facilitate better quality information in a way that reduces harms. One thing is certain: The verification challenge is both technical and social, and no algorithm, however clever, will entirely solve the problem of inaccurate or falsified information.

Thinking about the ecosystem of deployment teams, emergency personnel, users and concerned citizens and how they interact — rather than merely about a monolithic crowd — is the first place to look in understanding what verification strategy makes the most sense. After all, verification is not the ultimate goal here. Getting the right information to the right people at the right time is.

[Image: chile1.png]

Image of the Basílica del Salvador in the aftermath of the Chilean earthquake courtesy of flickr user b1mbo.

Why the muggle doesn’t like the term “bounded crowdsourcing”

Patrick Meier just wrote a post explaining why the term he coined, “bounded crowdsourcing”, is ‘important for crisis mapping and beyond’. He likens “bounded crowdsourcing” to “snowball sampling”, where a few trusted individuals invite other individuals whom they ‘fully trust and can vouch for… And so on and so forth at an exponential rate if desired’.

I like the idea of trusted networks of people working together (actually, it seems that this technique has been used for decades in the activism community) but I have some problems with the term that has been “coined”. I guess I will be called a “muggle” but I am willing to take the plunge because a) I have never been called a “muggle” and I would like to know what it feels like and b) the “crowdsourcing” term is one I feel is worthy of a duel.

Firstly, I don’t agree with the way that Meier likens “crowdsourcing” work like Ushahidi to statistical methods. I see why he’s trying to make the comparison (to prove crowdsourcing’s value, perhaps?), but I think that it is inaccurate and actually devalues the work involved in building an Ushahidi instance. Working on an Ushahidi deployment is not the same as answering a question through statistical methods. With statistical methods, a researcher (or group of researchers) tries to answer a question or test a hypothesis: ‘Do the majority of Hispanic Americans want Obama to win a second term?’, for example. Or ‘What do Kenyans think is the best place to go on holiday?’

But Ushahidi has never been about gaining a statistically significant understanding of a question or hypothesis. It was designed as a way for a group of concerned citizens to provide a platform for people to report on what is happening to them or around them. Sure, in many cases we can get a general feel for the mood of a place by looking at reports, but the lack of a single question (and the power differential between those asking and those being asked), the prevalence of unstructured reports and the skewed distribution of reporters towards those most likely to reply using the technology (or attempting to game the system) make the differences much greater than the similarities.

The other problem is that the term lacks a useful definition. Meier seems to suggest that the “bounded” part refers to the fact that the work is not completely open and is limited to a network of trusted individuals. More useful would be to understand under what conditions and for what types of work different levels of openness are useful, because no crowdsourcing project is entirely “unbounded”. Meier says that he ‘introduced the concept of bounded crowdsourcing to the field of crisis mapping in response to concerns over the reliability of crowd sourced information.’ But if this means that “crowdsourced” information is unreliable, then it would be useful to understand how and when it is unreliable.

If we take the very diverse types of work required of an Ushahidi deployment, we might say that they include the need to customize the design, build the channels (SMS short codes, Twitter hashtags, etc.), designate the themes, advertise the map, curate the reports, verify the reports and find related media reports, among others. Once we’ve broken down the different types of work, we can then decide what level of openness is required for each of these job types.

I certainly don’t want to restrict the advertising of my map to the world, so I want to keep that as “unbounded” as possible. I want to ensure that there are enough people with some “ownership” of the map to keep them supporting and talking about it, so I want to give them some jobs that keep them involved. Tagging reports as “verified” is probably a more sensitive activity because it requires a transparent set of rules and is one of the key ways that others come to trust the map or not, so I want to ensure that trusted people, or at least those over whom I have some recourse, do this type of work. I also want to get feedback on themes and hashtags to keep the map close to the people, since in the end a map is only as good as the network that supports it. Now, if I have different levels of openness for different areas of work, is my project an example of “bounded” or “unbounded” crowdsourcing?
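
One way to make that breakdown explicit, purely as an illustration, is a per-deployment policy that maps each type of work to a level of openness. The job names and levels below are mine, not something the platform currently offers.

```python
# Hypothetical per-deployment policy: each type of work gets its own level of
# openness rather than the whole project being "bounded" or "unbounded".
OPENNESS_LEVELS = ("open_to_all", "trusted_network", "admins_only")

deployment_policy = {
    "advertise_map":       "open_to_all",      # keep promotion as unbounded as possible
    "submit_reports":      "open_to_all",
    "suggest_themes_tags": "trusted_network",  # feedback from the network supporting the map
    "curate_reports":      "trusted_network",
    "verify_reports":      "admins_only",      # needs transparent rules and some recourse
}

# Sanity check that every job maps to a recognised openness level.
assert all(level in OPENNESS_LEVELS for level in deployment_policy.values())
```

Seen this way, the interesting question is not whether a deployment is bounded, but which of its jobs are, and why.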

Although I am always in favor of adding new words to the English language, I feel that the term “bounded crowdsourcing” is unhelpful in leading us towards any greater understanding of the nuances of online work like this. Actually, I’m always surprised at the use of the term “crowdsourcing” over “peer production” in the crisis mapping community, since crowdsourcing implies monetary or commercially incentivized work rather than the non-monetary incentives that characterize peer production projects like Wikipedia (see an expanded definition + examples here). I can’t imagine anyone ever “coining” the term “bounded peer production” (but I seem to be continually surprised, so I shouldn’t completely discount it from happening), and I think that this is indicative of the problems with the term.

So, yes, if we’re talking about different ways of improving the reliability of information produced on the Ushahidi platform, I’m excited to learn more about using trusted networks. I just think that if a term is being coined, it should be one that advances our understanding of what the theory is here. Is it that if you restrict the number of people who can take part in writing reports, you get a more reliable result? Where do you restrict? What kind of work should be open? What do we mean by open? Automatic acceptance of Twitter reports with a certain hashtag? Or an email address that you can use to request membership? Is there a certain number that you should limit a team to (as the Skype example suggests)?

This “muggle” thinks that the term doesn’t get us any further towards understanding these (really important) questions. The “muggle” will now squeeze her eyes shut and duck.

What’s an ethnographer doing working for a software company anyway?

I wrote a short memo to the Ushahidi team about what exactly an ethnographer does and how ethnography as a discipline could be useful to Ushahidi (and Crowdmap in particular). I’m thinking of actually writing more about this and interviewing ethnographers working at technology companies to shed some light on this growing field.

What is ethnography?

Ethnography is a research method, with roots in anthropology, that aims to gain a rich perspective on user communities. Ethnographic research projects require the researcher to be deeply immersed in a specific research context (also called “participant observation”) and to develop an understanding that would not be achievable with other, more limited research approaches (Lazar, Feng & Hochheiser, 2010). Ethnography emerged from the practice of early anthropologists who studied “new” cultures …

A new chapter

Eight years ago, I applied to the Digital Vision Fellowship Program at Stanford University with an interest in developing GIS (Geographic Information Systems) tools to map conditions that could lead to conflict in the Great Lakes region of Africa. Benetech generously sponsored the fellowship, hoping that I could help them with Martus, a tool they were developing for people to report human rights abuses using computer networks. But when I got to Stanford (fresh from being transfixed by Larry Lessig for the first time) I started volunteering for Creative Commons and was so excited by the potential for CC in Africa that I did a 180 and worked on copyright reform and digital culture in Africa and globally for the next five years.

In 2009, driven by some of the hard questions that I started to ask myself about what we were doing with iCommons, I came back to the US to do my Masters at the UC Berkeley School of Information. It was here that I discovered ethnography in a class taught by the wonderful Jenna Burrell. Jenna is not only a great teacher (her classes actually demonstrate the philosophy that she’s trying to teach!); she also brought me around to thinking that there was a way I could combine my passion for writing and journalism with deep, systematic analysis of where virtual and “real” worlds meet (and sometimes collide). And so I decided that I wanted to be an ethnographer.

But ethnography jobs in the tech sector seem to require PhDs, and I was starting to give up on finding someone to give me a break. Last week, I saw a job posting on Ushahidi’s website for an ethnographer/behaviorist and immediately wrote to Jon Gosier to ask what he required for the application. I have always had deep respect for Ory Okolloh, who co-founded Ushahidi, and Erik Hersman (aka “WhiteAfrican”), who is now Director of Operations and Strategy, and I intuitively thought that it would be a really wonderful opportunity.

Jon called me yesterday to interview me for the job. He asked me to tell him my story, about the work I was doing and why I wanted to work for the Ushahidi platform. I briefly introduced him to my ethnographic work and he asked me to tell him more about my Wikipedia research. After a while, he said: ‘This is going to sound strange but your essay was one of the reasons why I dreamed up this position. It made me realize how this kind of work could really help what we’re doing. I wasn’t going to say anything when you applied because I wanted to hear why you wanted to join us, but I know all about you and was stoked when you applied for the job.’

I have been wondering for a long time how I would ever find anything that fitted me. I kept thinking about how I didn’t want to end up in a position where I didn’t have the freedom to be who I am, to speak out about what I’m passionate about, and to feed my passion for Africa while still doing something that is globally relevant. And then, all of a sudden, the universe provided me with a job that was – literally – made for me. Speaking to Jon, I felt like a gibbering wreck, I was so blown away. The job will enable me to work on improving SwiftRiver and Ushahidi’s great tools for harnessing the social web. And since I’m working 70% time for them, I’ll get to do some teaching and writing on the side. I will mostly be in the San Francisco Bay Area, but the job isn’t dependent on a particular location, so I’m hoping to spend some time in Kenya learning Swahili and researching Wikipedia as planned. More than that, I have no idea, but I feel like this new chapter is going to be a pretty exciting one. I start 1 June.