What I’m talking about in 2016

Authority and authoritative sources, critical data studies, digital methods, the travel of facts online, bot politics, and social media and politics: these are some of the things I’m talking about in 2016. (Just in case you thought the #sunselfies indicated only fun and aimless loafing.)

15 January. Fact factories: How Wikipedia’s logics determine what facts are represented online. Wikipedia 15th birthday event, Oxford Internet Institute. [Webcast, OII event page, OII’s Medium post, The Conversation article]

29 January. Wikipedia and me: A story in four acts. TEDx Leeds University. [Video, TEDx Leeds University site]

Abstract: This is a story about how I came to be involved in Wikipedia and how I became a critic. It’s a story about hope and friendship and failure, and what to do afterwards. In many ways this story represents the relationship that many others like me have had with the Internet: a story about enormous hope and enthusiasm followed by disappointment and despair. Although similar, the uniqueness of these stories is in the final act – the act where I tell you what I now think about the future of the Internet after my initial despair. This is my Internet love story in four acts: 1) Seeing the light; 2) California rulz; 3) Doubting Thomas; 4) Critics unite.

17 February. Add data to methods and stir. Digital Methods Summer School, CCI, Queensland University of Technology, Brisbane. [QUT Digital Methods Summer School website]

Abstract: Are engagements with real humans necessary for ethnographic research? In this presentation, I argue for methods that connect data traces to the individuals who produce them, exploring examples of experimental methods featured on the site ‘EthnographyMatters.net’ such as live fieldnoting, collaborative mapmaking and ‘sensory postcards’. The presentation will serve as inspiration for new work that expands beyond disciplinary and methodological boundaries and connects the stories we tell about our things with the humans who create them.

10 March. Situating Innovations in Digital Measures. University of Leeds, Leeds Critical Data Studies Inaugural Event.  

Abstract: Drawing on case studies presented at last month’s Digital Methods Summer School (Digital Media Research Centre, Queensland University of Technology) in Brisbane, Australia, as well as experimental methods contributed by authors in the Ethnography Matters community, this seminar will present a host of inspiring methodological tools that researchers of digital culture and politics are using to explore questions about the role of digital technologies in modern life. Instead of data-centric models and methodologies, the seminar focuses on human-centric models that also engage with the opportunities afforded by digital technologies.

21-22 April. Ode to the infobox. Streams of Consciousness: Data, Cognition and Intelligent Devices Conference. University of Warwick.

Abstract: Also called a ‘fact box’, the infobox is a graphic design element that highlights the summarised statements or facts about the world that it contains. Infoboxes are important structural elements in the design of digital information. They usually hang in the right-hand corner of a webpage, calling out to us that the information contained within them is special and somehow apart from the rest. The infobox satisfies our rapid information-seeking needs. We’ve been trained to look to the box to discover, not just another set of informational options, but an authoritative statement of seemingly condensed consensus emerging out of the miasma of data about the world around us.

When you start to look for them, you’ll see infoboxes everywhere. On Google, these boxes contain results from Google’s Knowledge Graph; on Wikipedia, they sit within articles and host summary statistics and categories; and on the BBC, infoboxes highlight particular facts and figures about the stories that flow around them.

The facts represented in infoboxes are no longer as static as the infoboxes of old. Now they are the result of algorithmic processes that churn through thousands, sometimes millions, of data points according to rulesets that produce a relatively unique encounter for each new user.

In this paper, I trace the multitude of instructions and sources, institutions and people that constitute the assemblage that results in different facts for different groups at different times. Investigating infoboxes on Wikipedia and Google through intermediaries such as Wikidata, I build a portrait of the pipes, processes and people that feed these living, dynamic frames. The infobox, humble as it seems, turns out to be a powerful force in today’s deeply connected information ecosystem. By celebrating the infobox, I hope to reveal its hidden power – a power with consequences far beyond the efficiency that it promises.
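As a concrete glimpse of these pipes and processes, infobox-style facts can be fetched from Wikidata’s public API in a few lines of code. This is a minimal sketch of my own; the entity and property identifiers (Q42, P569) are illustrative choices rather than examples from the talk:

```python
# Sketch: retrieving the kind of structured claim that feeds an infobox,
# via Wikidata's public wbgetclaims API. Q42 (Douglas Adams) and P569
# (date of birth) are illustrative identifiers only.
import requests

def infobox_claims(entity_id: str, property_id: str) -> list:
    resp = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbgetclaims",
            "entity": entity_id,
            "property": property_id,
            "format": "json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    claims = resp.json().get("claims", {}).get(property_id, [])
    # Each claim bundles a value with optional references and qualifiers:
    # the provenance trail behind the "fact" a reader eventually sees.
    return [
        c["mainsnak"]["datavalue"]["value"]
        for c in claims
        if "datavalue" in c["mainsnak"]
    ]

print(infobox_claims("Q42", "P569"))
```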

29 April. How facts travel in the digital age. Guest Speaker Series, Social Media Lab, Ryerson University, Toronto, Canada. [Speaker series website]

Abstract: How do facts travel through online systems? How is it that some facts gather steam and gain new adherents while others languish in isolated sites? This research investigates the travel of two sets of facts through Wikipedia’s networks and onto search engines like Google. The first: facts relating to the 2011 Egyptian Revolution; the second: facts relating to “surr”, a sport played by men in the villages of Northern India. While the Egyptian Revolution became known to millions across the world as events were reported on multiple Wikipedia language versions in early 2011, the facts relating to surr faced enormous challenges as their companions attempted to propel them through Wikipedia’s infrastructure. Following the facts as they travelled through Wikipedia gives us an insight into the sources of systemic bias in Internet infrastructures and the ways in which political actors are changing their strategies in order to control narratives around political events.
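One crude proxy for this uneven travel is the number of language editions in which a topic has an article. A minimal sketch of my own, assuming the standard MediaWiki API; this is an illustration, not the study’s method:

```python
# Sketch: counting the Wikipedia language editions that cover a topic,
# via the MediaWiki API's langlinks property. The article title is an
# illustrative choice.
import requests

def language_versions(title: str, wiki: str = "en.wikipedia.org") -> int:
    resp = requests.get(
        f"https://{wiki}/w/api.php",
        params={
            "action": "query",
            "titles": title,
            "prop": "langlinks",
            "lllimit": "max",
            "format": "json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    pages = resp.json()["query"]["pages"]
    page = next(iter(pages.values()))
    # +1 counts the edition we queried alongside its interlanguage links.
    return len(page.get("langlinks", [])) + 1

print(language_versions("2011 Egyptian revolution"))
```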

8 June. Politicians, Journalists, Wikipedians and their Twitter bots (with Elizabeth Dubois and Cornelius Puschmann). Algorithms, Automation and Politics, ICA Pre-Conference, Fukuoka, Japan. [Event website]

Abstract (excerpt): Recent research suggests that automated agents deployed on social media platforms, particularly Twitter, have become a feature of the modern political communication environment (Samuel 2015; Forelle et al. 2015; Milan 2015). Haustein et al. (2016) cite a range of studies that put the percentage of bots among all Twitter accounts at 10-16% (p. 233). Governments have been shown to employ social media experts to spread pro-government messages (Baker 2015; Chen 2015), political parties pay marketing companies to create or manipulate trending topics (Forelle et al. 2015), and politicians and their staff use bots to inflate their follower numbers in order to create an illusion of popularity (Forelle et al. 2015). The assumption in these analyses is that bots have a direct influence on public opinion and that they can act as credible and competent sources of information (Edwards et al. 2014). There is still, however, little empirical evidence of the link between bots and political discourse, of the material consequences of such changes, or of how social groups are reacting.
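There is no settled test for what counts as a bot, but the behavioural signals such studies tend to rely on can be illustrated with a toy heuristic. This is my own sketch with arbitrary thresholds, not a method from the paper and not a validated detector:

```python
# Toy sketch of the signals bot studies often lean on: posting rate,
# around-the-clock activity and self-declared automation clients.
# Thresholds are arbitrary illustrations, not validated cut-offs.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    tweets_per_day: float   # mean over the observation window
    active_hours: int       # distinct hours-of-day with activity (0-24)
    source_label: str       # client string attached to tweets

def likely_automated(acc: Account) -> bool:
    rate_flag = acc.tweets_per_day > 50      # few humans sustain this
    clock_flag = acc.active_hours >= 22      # no sleep-shaped gap
    client_flag = "bot" in acc.source_label.lower()
    return client_flag or (rate_flag and clock_flag)

accounts = [
    Account("news_mirror", 400.0, 24, "CustomFeedBot"),
    Account("commuter_jane", 6.5, 14, "Twitter for iPhone"),
]
for acc in accounts:
    print(acc.handle, likely_automated(acc))
```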

9-13 June. The rise of expert amateurs in the realm of knowledge production: The case of Wikipedia’s newsworkers. In the ‘Dialogues in Journalism Studies: The New Gatekeepers’ panel. International Communication Association Conference, Fukuoka, Japan. [ICA website]

Abstract: Wikipedia has become an authoritative source about breaking news stories as they happen in many parts of the world. Although anyone can technically edit a Wikipedia article, recent evidence suggests that some have significantly more power than others when it comes to having their edits sustained over time. In this paper, I suggest that the theory of co-production, elaborated upon by Sheila Jasanoff, is a useful way of framing these dynamics: rather than removing the gatekeepers of the past, Wikipedia demonstrates two key trends. The first is the rise of a new set of gatekeepers in the form of experienced Wikipedians who are able to deploy coded objects effectively in order to stabilize or destabilize an article; the second is a reconfiguration of the power of traditional sources of news and information in the choices that Wikipedia editors make when writing about breaking news events.

9-13 June. Wikipedia: Moving Between the Whole and its Traces. In the ‘Drowning in Data: Industry and Academic Approaches to Mixed Methods in “Holistic” Big Data Studies’ panel. International Communication Association Conference, Fukuoka, Japan. [ICA website]

Abstract: In this paper, I outline my experiences as an ethnographer working with data scientists to explore various questions surrounding the dynamics of Wikipedia sources and citations. In particular, I focus on the moments at which we were able to bring the small and the large into conversation with one another, and the moments when we looked, wide-eyed, at one another, unable to articulate what had gone wrong. Inspired by Latour’s (2010) reading of Gabriel Tarde, I argue that a useful analogy for conducting mixed-methods studies for which large datasets and holistic tools are available is the process of life drawing: moving up close to the easel and standing back (or to the side) as the artist looks at both the subject and the canvas in a continual motion.

Wikipedia’s citation traces can be analysed in their aggregate – piled up, one on top of the other to indicate the emergence of new patterns, new vocabulary, new authorities of knowledge in the digital information environment. But citation traces take a particular shape and form, and without an understanding of the behaviour that lies behind such traces, the tendency is to count what is available to us, rather than to think more critically about the larger questions that Wikipedia citations help to answer.
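As an illustration of what ‘piling up’ citation traces can look like in practice, here is a sketch that tallies the domains cited in a snippet of article wikitext. The crude regex and the sample snippet are my own simplifications, not the classification framework used in the study:

```python
# Sketch: aggregating citation traces by counting the domains that appear
# in an article's {{cite ...}} templates. A crude regex stands in for real
# wikitext parsing; the snippet below is a made-up example.
import re
from collections import Counter
from urllib.parse import urlparse

wikitext = """
{{cite news |url=https://www.bbc.co.uk/news/world-12345 |title=Example}}
{{cite web |url=https://www.aljazeera.com/news/67890 |title=Example}}
{{cite news |url=https://www.bbc.co.uk/news/world-54321 |title=Example}}
"""

def cited_domains(text: str) -> Counter:
    urls = re.findall(r"\|\s*url\s*=\s*(\S+)", text)
    return Counter(urlparse(u).netloc for u in urls)

print(cited_domains(wikitext))
# Counter({'www.bbc.co.uk': 2, 'www.aljazeera.com': 1})
```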

I outline a successful conversation that occurred when we took a large snapshot of 67 million source postings from about 3.5 million Wikipedia articles and attempted to begin classifying the citations according to existing frameworks (Ford 2014). In response, I conducted a series of interviews with editors, visualising their citation traces and asking them questions about the decision-making and social interaction that lay behind such performances (Dubois and Ford 2015). I also reflect on a less successful moment, when we attempted to discover patterns in the dataset on the basis of findings from my ethnographic research into the political behaviour of editors. Like the artist who had gotten their proportions wrong when scaling up the image on the canvas, we needed to re-orient ourselves and remember what we were ultimately trying to discover.

References:

Dubois, E., & H. Ford. 2015. “Qualitative Political Communication: Trace Interviews: An Actor-Centered Approach.” International Journal of Communication 9 (0): 25.

Ford, H. 2014. “Big Data and Small: Collaborations between Ethnographers and Data Scientists.” Big Data & Society 1 (2): 2053951714544337. doi:10.1177/2053951714544337.

Latour, B. 2010. “Tarde’s Idea of Quantification.” In The Social After Gabriel Tarde: Debates and Assessments, edited by Matei Candea, 145-162. Routledge.


26 October. What’s next for Critical Data Studies? Leeds Critical Data Studies Group Seminar Series. Leeds Institute for Data Analytics, University of Leeds.

This talk will provide an overview of some of the main concerns and key thinkers in critical data studies, algorithmic studies, software studies and related areas. What is there to be critical about when it comes to digital data and the socio-technical arrangements that have been reconfigured by the surge of interest and practice in data collection and analysis? What is the value of such critique? And where should we at the University of Leeds be heading when it comes to making research interventions in this area? The session will begin with a presentation, followed by an open discussion about the university’s role in the field of critical data studies.

9 November. “Anyone can edit.” Not everyone does: Wikipedia and the gender gap. MediaCom Seminar Series, University of Leicester.

Feminist STS has long established that science’s provenance as a male domain continues to define what counts as knowledge and expertise. Wikipedia, arguably one of the most powerful sources of information today, was initially lauded as an opportunity to rebuild knowledge institutions with greater representation of multiple groups. Yet fewer than ten percent of Wikipedia editors are women, and large parts of the developing world remain underrepresented on Wikipedia. This talk takes as its starting point an article currently under review, co-authored by Heather Ford and the feminist Science and Technology Studies scholar Judy Wajcman. The goal is to present a consolidated analysis of the gendering of Wikipedia using the literature of platform and infrastructure studies, and to suggest a method for analyzing the origins of bias on platforms.

18 November. Twitter and governance in South Africa. “Social Media, Conflict & Democracy: Utopian Visions, Dystopian Futures and Pragmatic Policies”. Brussels, Belgium.

8-9 December. “Is the Web eating itself? Search and the voracious desire for fast information.” Automating the Everyday, Queensland University of Technology, Brisbane, Australia.

We want information; we need it now. At work, in school, at play and in transit between these activities, we use search engines to find answers to questions about the world. What is Brexit? What’s showing at a movie theatre near me? How good is the coffee at this cafe? In addition to returning a list of possible sources, search engines now process data from those sources in order to answer our questions directly, so that we no longer need to navigate away to fulfill our needs. Search is no longer only about indexing the Web; it is about extracting the Web’s data. In a development fueled by early visions of the Semantic Web, linked data and artificial intelligence, search engines are becoming authoritative sources of knowledge about the world.

Search has always been important to thinking about how everyday life is mediated in the context of networked technologies, but this change, and the algorithmic processes behind it, make the practices of searching (and the range of actions and possibilities enabled by it) more significant than ever. According to recent reports, Google now processes over 40,000 search queries every second. That figure translates to over 3.5 billion searches per day and about 100 billion searches every month. Search engines are feeding our voracious need for rapid answers to questions about the world around us. Webpages are now being structured according to common data models that abstract information into formats that are more easily processed by search engines. Feeding the machine, however, comes at a cost to many websites. Wikipedia, for example, has found that since Google began answering queries about people, places and things using Wikipedia data directly, its own visitor numbers have decreased. This pattern plays out across news, photography, publishing and a range of other sites.
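The ‘common data models’ in question are exemplified by schema.org markup embedded in webpages as JSON-LD, which search engines harvest to answer queries without a click-through. A minimal sketch of how such a block is pulled out of raw HTML; the sample page is invented:

```python
# Sketch: extracting schema.org JSON-LD, the kind of structured markup
# search engines harvest to answer queries directly. The HTML below is
# an invented example.
import json
import re

html = """
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "CafeOrCoffeeShop",
 "name": "Example Cafe", "aggregateRating": {"ratingValue": "4.6"}}
</script>
</head><body>...</body></html>
"""

def extract_jsonld(page: str) -> list:
    pattern = r'<script type="application/ld\+json">(.*?)</script>'
    return [json.loads(m) for m in re.findall(pattern, page, re.DOTALL)]

for item in extract_jsonld(html):
    print(item["@type"], "-", item["name"])  # CafeOrCoffeeShop - Example Cafe
```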

Search is driven by our voracious appetite for fast information: information delivered to us by the search engine machine that processes data and extracts value from other websites as it grows in size, in power and in profit. This paper proposes a thought experiment: if our desire for fast information results in search engines extracting their ingredients from the Web in unsustainable ways, will we soon run out of sources to fuel our insatiable desires? Is this one of the possible futures of everyday automation? Or is this new step in search just more of the same: a slow march towards the inevitable dominance of a worldview designed by a few companies while the rest remain skinny and undernourished, not least us, its users, who remain blind to the deepening influence of the machine?

14-17 December. Geertz’s Map: The path between observation and interpretation using digital methods (with Lone Sorensen, Tanja Bosch and Walid Al-Saqaf). Crossroads in Cultural Studies Conference, Sydney, Australia.

We chart the history of the term ‘thick data’ (Wang 2014) back to its roots in Geertz’s (1973) writing on ‘thick description’, outlining the practical implications of Geertz’s theory of culture for the study of online behavior. Geertz writes that ‘culture is not a power, something to which social events, behaviors, institutions, or processes can be causally attributed; it is a context, something within which they can be intelligibly — that is, thickly — described’. While the majority of tools or instruments for studying ‘big’ social media data enable us to observe patterns of behavior, we need theoretical frameworks to interpret that behavior. Theory makes thick description possible, writes Geertz, not by the analyst generalizing across cases but by generalizing within them. We applied this principle to the study of political conflict in the case of Twitter discussions of the 2015/6 South African State of the Nation Address. Bespoke quantitative tools provided an observational lens, and framing theory (Entman 1993) enabled us to develop qualitative interpretations. We argue that this two-step approach is necessary in order to move towards interpretations of behavior in context and that, while quantitative tools that enable observation are clearly important, interpretation requires the use of relevant theory.
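Only the observational first step of that two-step approach can be sketched in code: a toy tally of candidate frame vocabulary across a tweet corpus. The frames, keywords and tweets below are invented stand-ins, not those used in the study, and the interpretive step remains irreducibly qualitative:

```python
# Sketch of the quantitative, observational step only: counting how often
# vocabulary associated with candidate frames appears in a tweet corpus.
# Frames, keywords and tweets are invented stand-ins; deciding what the
# counts mean (Entman's framing) remains a qualitative, theoretical step.
import re
from collections import Counter

FRAME_VOCAB = {
    "conflict": {"clash", "walkout", "chaos", "scuffle"},
    "accountability": {"corruption", "paybackthemoney", "answer"},
}

def frame_counts(tweets: list[str]) -> Counter:
    counts = Counter()
    for tweet in tweets:
        # Lowercase and strip punctuation/hashes so '#PayBackTheMoney'
        # matches the keyword 'paybackthemoney'.
        words = set(re.findall(r"[a-z]+", tweet.lower()))
        for frame, vocab in FRAME_VOCAB.items():
            if words & vocab:
                counts[frame] += 1
    return counts

tweets = [
    "Chaos in parliament as EFF stages a walkout #SONA2015",
    "He must answer: #PayBackTheMoney",
]
print(frame_counts(tweets))  # Counter({'conflict': 1, 'accountability': 1})
```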
