Human-bot relations at ICA 2017 in San Diego

News this week that a panel I contributed to on political bots has been accepted for the annual International Communication Association (ICA) conference in San Diego with Amanda Clarke, Elizabeth Dubois, Jonas Kaiser and Cornelius Puschmann this May. Political bots are automated agents deployed on social media platforms like Twitter to perform a variety of functions, and they are having a significant impact on politics and public life. There is already some great work about the negative impact of bots used to “manipulate public opinion by megaphoning or repressing political content in various forms” (see politicalbots.org), but we were interested in the bots these are often compared to: the so-called “good” bots that expose the actions of particular actors (usually governments) and thereby bring greater transparency to government activity.

Elizabeth, Cornelius and I worked on a paper about Wikiedits bots for last year’s ICA pre-conference, “Algorithms, Automation, Politics” (“Keeping Ottawa Honest — One Tweet at a Time?” Politicians, Journalists and their Twitter bots, PDF), where we found that the impact of these bots isn’t as simple as bringing about greater transparency. The new work that we will present in May is a deeper investigation of the types of relationships catalysed by the existence and ongoing development of transparency bots on Twitter. I’ll be working on the relationship between bots and their creators in Canada and South Africa, investigating the relationship between the bots and the transparency that they promise. Cornelius is looking at the relationship between journalists and bots, Elizabeth and Amanda are looking at the relationship between bots and political staff/government employees, and Jonas will be looking more closely at bots and users. The awesome Stuart Geiger, who has done some really great work on bots, has kindly agreed to be a respondent to the paper.

You can read more about the panel and each of the papers below.

Do people make good bots bad?

Political bots are not necessarily good or bad. We argue the impact of transparency bots (a particular kind of political bot) rests largely on the relationships bots have with their creators, journalists, government and political staff, and the general public. In this panel each of these relationships is highlighted using empirical evidence and a respondent guides wider discussion about how these relationships interact in the wider political and media system.

This panel challenges the notion that political bots are necessarily good or bad by highlighting relationships between political actors and transparency bots. Transparency bots are automated social media accounts which report the behaviour of political players and institutions, and they are normally viewed as a positive force for democracy. In contrast, bot activity such as astroturfing and the creation of fake followers or friends on social media has been examined and critiqued as nefarious in academic and popular literature. We assert that the impact of transparency bots rests largely on the relationships bots have with their creators, journalists, government and political staff, and the general public. Each panelist highlights one of these relationships (noting related interactions with additional actors) in order to answer the question “How do human-bot relationships shape bots’ political impact?”

Through comparative analysis of the Canadian and South African Wikiedits bots, Ford shows that transparency is an affordance not of the technology itself but of the conditions in place between actors. Puschmann considers the ways bots are framed and used by journalists in a content analysis of news articles. Dubois and Clarke articulate the ways public servants and political staff respond to the presence of Wikiedits bots, revealing that internal institutional policies mediate the relationships these actors can have with bots. Finally, Kaiser asks how users who are not political elites frame transparency bots, using a quantitative and qualitative analysis of Reddit content.

Geiger (respondent) then poses questions that cut across the relationships and themes raised by the panelists, promoting a holistic view of bots in their actual communicative systems. These cross-cutting questions illustrate that the impact of bots is seen not simply in dyadic relationships but also in the ways the various actors interact with each other as well as with the bots in question.

This panel is a much-needed opportunity to critically consider the political role and impact of transparency bots by examining each bot in context. Much current literature assumes political bots have significant agency; however, bots need to interact with other political actors in order to have an impact. A nuanced understanding of the different types of relationships that exist among political actors and bots is thus essential. The cohesive conversation presented by the panelists allows for a comparison across the different kinds of bot-actor relationships, focusing in detail on particular types of actors and then zooming out to address the wider system inclusive of these relationships.

1. Bots and their Creators
Heather Ford

Bots – particularly those with public functions such as government transparency – are often created and recreated collaboratively by communities of technologists who share a particular world view of democracy and of technology’s role in politics and social change. This paper focuses on the origins of bots in the motivations and practices of their creators, taking transparency bots as its case. Wikipedia/Twitter bots are built to tweet every time an editor within a particular government IP range edits Wikipedia, notifying others to check for possible government attempts to manipulate facts on the platform. The outputs of these bots have been employed by journalists as sources in stories about governments manipulating information (Ford et al, 2016).
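The detection mechanism these bots rely on is simple: Wikipedia publishes the IP address of anonymous editors, so the bot only has to check whether that address falls within a watched range. A minimal sketch of that check in Python follows; the CIDR blocks here are documentation-reserved placeholders, not real government allocations, and real bots such as @gccaedits maintain curated lists of government-registered ranges.

```python
import ipaddress

# Hypothetical placeholder ranges (RFC 5737 documentation blocks); a real
# transparency bot would load a curated list of government CIDR blocks.
GOVERNMENT_RANGES = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_government_edit(editor_ip: str) -> bool:
    """Return True if an anonymous editor's IP falls in a watched range."""
    ip = ipaddress.ip_address(editor_ip)
    return any(ip in network for network in GOVERNMENT_RANGES)
```

In production, such a bot would subscribe to Wikipedia’s recent-changes feed, apply this check to each anonymous edit, and tweet a link to the diff when it matches.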

Investigating the relationship between bot creators and their bots in Canada and South Africa by following the bots and their networks using mixed methods, I ask: To what extent is transparency an affordance of the particular technology being employed? Or is transparency rather an affordance of the conditions in place between actors in the network? Building from theories of co-production (Jasanoff, 2004) and comparing the impact of Wikipedia/Twitter bots on the news media in Canada and South Africa, this paper begins to map out the relationships that seem to be required for bots to take on a particular function (such as government transparency). Findings indicate that bots can only become transparency bots through the enrolment of allies (Callon, 1986) and through particular local conditions that ensure success in achieving a particular outcome. This is a stark reminder of the connectedness of human-machine relations and of the limits on technologists’ ability to fully create the world they imagine when they build their bots.


2. Bots and Journalists
Cornelius Puschmann

Different social agents — human and non-human — compete for attention, spread information and contribute to political debates online. Journalism is impacted by digital automation in two distinct ways: through its potentially manipulative influence on reporting and thus public opinion (Woolley & Howard, 2016; Woolley, 2016), and by providing journalists with a set of new tools for providing insight, disseminating information, and connecting with audiences (Graefe, 2016; Lokot & Diakopoulos, 2015). This contribution focuses primarily on the first aspect but also takes the second into account, because we argue that fears of automation in journalism may fuel reservations among journalists regarding the role of bots more generally.

To address the first aspect, we present the results of a quantitative content analysis of English-language mainstream media discourse on bots. Building on prior research on the reception of bots (Ford et al, 2016), we focus on the following aspects in particular:

– the context in which bots are discussed,

– the evaluation (“good” for furthering transparency, “bad” because they spread propaganda),

– the implications for public deliberation (if any).

Second, we discuss the use of bots and automation by the news media, drawing on a small set of examples from automated journalism (Johri, Han & Mehta, 2016). Bots are increasingly used to automate particular aspects of journalism, such as the generation of news items and the dissemination of content. Building on these examples we point to the “myriad ways in which news bots are being employed for topical, niche, and local news, as well as for providing higher-order journalistic functions such as commentary, critique, or even accountability” (Lokot & Diakopoulos, 2015, p. 2).


3. Bots and Government/Political Staff
Elizabeth Dubois and Amanda Clarke

Wikiedits bots are thought to promote more transparent, accountable government because they expose the Wikipedia editing practices of public officials. This is especially important when those edits are part of partisan battles between political staff, or enable the spread of misinformation and propaganda by ostensibly neutral public servants. However, far from bolstering democratic accountability, these bots may have a perverse effect on democratic governance. Early evidence suggests that the Canadian Wikiedits bot (@gccaedits) may be contributing to a chilling effect wherein public servants and political staff edit Wikipedia less, or edit in ways that are harder to track, in order to avoid the scrutiny that these bots enable (Ford et al, 2016). The extent to which this chilling effect shapes public officials’ willingness to edit Wikipedia openly (or at all), and the role the bot plays in inducing it, remain open questions ripe for investigation. Focusing on the bot tracking activity in the Government of Canada (@gccaedits), this paper reports on the findings of in-depth interviews with public and political officials responsible for Wikipedia edits, as well as analysis of internal government documents related to the bot (retrieved through Access to Information requests).

We find that internal institutional policies and the constraints of the Westminster system of democracy (which demands that public servants remain anonymous and that all communications be tightly managed in strict hierarchical chains of command), paired with primarily negative media reporting of the @gccaedits bot, have inhibited Wikipedia editing. This poses risks to the quality of democratic governance in Canada. First, many edits revealed by the bot are in fact useful contributions to knowledge, reflecting the early insider insight of public officials. At a broader level, these edits represent novel and significant disruptions to a public sector communications culture that has not kept pace with the networked models of information production and dissemination that characterize the digital age. In this sense, the administrative and journalistic response to the bot’s reporting sets back important efforts to bolster Open Government and digital era public service renewal. Detailing these costs, and analysing the role of the bot and human responses to it, this paper suggests how Wikiedits bots shape digital era governance.

4. Bots and Users
Jonas Kaiser

Users interact online with bots on a daily basis. Bots tweet, upvote and comment; in short, they participate in many different communities and help shape users’ perceptions. Based on this experience, the users’ perspective on bots may differ significantly from that of journalists, bot creators or political actors. Yet it has so far been largely ignored in the literature. As such, we are missing an integral perspective on bots that may help us understand how the societal discourse surrounding bots is structured. To analyze how and in which contexts users talk about transparency bots specifically, a content analysis and topic analysis of Reddit comments from 86 posts in 48 subreddits on the issue of Wikiedits bots will be conducted. The research focuses on two major aspects: (1) how Reddit users frame transparency bots and (2) which other topics they associate with them.

Framing in this context is understood as “making sense of relevant events, suggesting what is at issue” (Gamson & Modigliani, 1989, p. 3). Even though some studies have shown, for example, how political actors frame bots (Ford, Dubois, & Puschmann, 2016), a closer look at the users’ side is missing. This perspective is important because non-elite users may view bots differently from more elite political actors, and understanding how they interpret bots could have meaningful implications for political actors and bot creators. At the same time, it is important to understand the broader context of the user discourse on transparency bots in order to properly connect the identified frames with overarching topics. Hence an automated topic modeling approach (Blei, Ng & Jordan, 2003) is chosen to identify the underlying themes within the comments. By combining frame analysis with topic modeling, this project will highlight how users talk about transparency bots and in which contexts they do so, emphasizing the role of users within the broader public discourse on bots.

Bibliography

Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3, 993-1022.

Callon, M. (1986). “Some Elements of a Sociology of Translation: Domestication of the Scallops and the Fishermen of St Brieuc Bay”. In John Law (ed.), Power, Action and Belief: A New Sociology of Knowledge (London: Routledge & Kegan Paul).

Ford, H., Dubois, E., & Puschmann, C. (2016). Automation, Algorithms, and Politics | Keeping Ottawa Honest—One Tweet at a Time? Politicians, Journalists, Wikipedians and Their Twitter Bots. International Journal of Communication, 10, 24.

Gamson, W. A., & Modigliani, A. (1989). Media Discourse and Public Opinion on Nuclear Power: A Constructionist Approach. American Journal of Sociology, 95(1), 1-37.

Graefe, A. (2016). Guide to automated journalism. http://towcenter.org/research/guide-to-automated-journalism/

Jasanoff, S. (2004). States of Knowledge: The Co-Production of Science and the Social Order. (London: Routledge Chapman & Hall)

Johri, Han, & Mehta (2016). Domain specific newsbots: Live automated reporting systems involving natural language communication. Paper presented at the 2016 Computation + Journalism Symposium.

Lokot, T. & Diakopoulos, N. (2015). News bots: Automating news and information dissemination on Twitter. Digital Journalism. doi: 10.1080/21670811.2015.1081822

Woolley, S. C. (2016). Automating power: Social bot interference in global politics. First Monday. doi: 10.5210/fm.v21i4.6161

Woolley, S. C., & Howard, P. (2016). Bots unite to automate the presidential election. Retrieved Jun. 5, 2016, from http://www.wired.com/2016/05/twitterbots-2/
