Wikipedia narratives

I’ve been spending the last few days thinking about my upcoming research into how Wikipedians currently use and understand sources and citations in different situations (directly after a major international news event like the Japan earthquake, and in conflict situations such as the Middle East conflict), and what kinds of software tools could help advance some of Wikipedia’s goals and philosophies globally. I’ve learned that Wikipedians encounter a number of problems with the current policies and tools – including what some view as conservatism around sources (bias in favor of traditional, often inaccessible print-based materials and against online sources and commercial research, for example) – and I’ve made some specific observations of how blunt a tool MediaWiki is at supporting citations in emotionally charged edit wars and in rapidly evolving events.

The big questions that I’m trying to answer in this research are:

1. What debates are Wikipedians having around sources and what does this say about how Wikipedians understand the verifiability policy?
2. What is the effect of the technical features and affordances of current wiki tools on issues of quality (for example, the ability of Wikipedia to be a current and accurate source of information on rapidly-evolving events) and diversity (how Wikipedia might incorporate a wider range of viewpoints that may be situated outside of traditional academic publications)?
3. How might alternative policies and tools affect those principles of quality and diversity?

These are great questions! And great questions are always a good place to start. But deciding on how to answer these questions with limited time and resources is where I need to get creative and hopefully ask for help from wise academics, Wikipedians and friends. These are some of my initial ideas and as a newbie ethnographer I’m hoping for some kind but realistic responses.

I was originally going to start off with a bunch of interviews of Wikipedians working on topics related to big international events. But after chatting to a number of Wikipedians at Wikimania and doing a few more open-ended interviews, I’ve actually realised that starting with particular articles and telling the story of those articles could actually get me to a better understanding of what’s going on in “source talk” a lot quicker. I think that I was initially caught up in the regular social science and usability methods of conducting research where you decide on your sample, go out and collect responses to a specific set of questions and then analyse the data. Spending the past two days reading and analysing talk pages for the ‘hummus’ articles in English, Hebrew and Arabic has made me realise that there is a wealth of incredible data about how Wikipedians actually talk about sources and that a better approach could be more like a detective – starting with the little pieces of evidence and then following the story as I interview the characters reflected in the articles.

Like a detective (or at least the ones in the movies), I’m working towards understanding motivations in order to piece together narratives about what happened and why. When I first thought of Wikipedia article debates, I envisioned a large boardroom table with people sitting around it rationally discussing what to add and what to leave out. Actually, the debates are much more like noisy town hall meetings. You have the crazy person who keeps shouting all sorts of completely irrelevant details and then complaining that they’re being systematically ignored; the exhausted public administrator who has seen the same arguments play out over and over again and who is snippy and terse when newcomers try to cover old territory. There’s the polite newbie who is surely too polite to be making genuine statements, and the loud Westerners who drop into the meeting to make sarcastic remarks about how stupid everyone is for fighting about such trivial matters. I think that these narratives — who is allied to whom, what happens to the debate when related national or international events hit, how disruptive editors can deadlock entire articles — are actually at the heart of bigger questions about verifiability. Especially when we’re thinking of designing new tools to fit into users’ current working methods (and not the other way round), understanding exactly how the articles tick makes sense to me.

I started by printing out articles and talk pages for ‘hummus’ in WP English, Hebrew and Arabic (Google translated version at least) and did some rough analysis and coding (using present participles to denote what I thought was happening) plus notes relating to what was interesting in comparison to the other language versions. I will go over this again and then type up themes with related quotations and summarised stories, then follow up leads with some of the editors who were involved, code all of their interviews, add to the thematic groupings and then do the final analysis.

After I’ve done the same thing for a page relating to an international news event (either the 2011 Egyptian Revolution or the 2011 Japanese Earthquake, which I have started to look at), I’m hoping to be able to draw some good conclusions about how Wikipedians understand verifiability and what the effects of current policies and tools are on issues of quality and diversity. You’ll notice that I’m choosing to go deep rather than wide, but in order to really ‘people’ this analysis and understand who is behind the pages and what the dynamics are, I’m thinking that this might be the best way of going about it.

Would love any (kind) thoughts, suggestions and even, yes, encouragement in my lonely space over here 🙂

3 thoughts on “Wikipedia narratives”

  1. Sounds like a great way to tackle those questions to me.

    This is totally not Deep Thoughts or anything, but just in case it’s of any use — Folks at the I-School seem not so into qual. analysis software, which is fine, but personally I find it super-helpful for keeping track of stuff, helping me to see what ideas are emerging and whatnot. So… maybe you’re already using that kind of software, or maybe you’re sure you don’t want to use software specifically for that, I dunno, but just throwing that out there.

  2. I really love ATLAS.ti (used it for a privacy project with Deirdre Mulligan) but it’s so expensive and only on Windows. Do you have anything to recommend?

  3. Hmm, wow. Am looking at some of the popular ones and they all seem to be Windows-based. Transana is cross-platform, but to me not being able to do anything with text searches across docs is a pretty big drawback. It does seem to do lots of other cool stuff, though.

    It might be worth it to check out Stuart Shulman’s Coding Analysis Toolkit (or maybe the PCAT thing? There’s some mention of wikiness in the description.)

    I have used CAT in the past, but only for intercoder reliability ratings with stuff that had been coded in Atlas — was good though.
