Calling for EOIs from research assistants interested in two new projects to study the ethics of knowledge graphs

I’ve just received some funding from UTS for two pilot studies relating to the ethics of knowledge graphs. I’m looking for a research assistant (or two) to work on the two projects described below.

Project 1: Communicating Uncertainty in AI Visualisations: A UTS HASS-Data Science Institute research project (October 1-December 17, 2021)

Uncertainty is inevitable in AI because the historical data used to train algorithmic models are an approximation of facts, and because algorithmic models cannot perfectly depict events. Yet AI-generated results are often presented to end users as stable and unchallenged. The knowledge graph is a case in point. The knowledge graph is a popular form of knowledge representation in AI and data science. Popularised by Google but deployed by all major platforms, knowledge graphs power the auto-completion of search queries, the generation of facts in knowledge panels in search results and the provision of answers to users’ questions in digital assistants, among other applications.

Knowledge graphs can serve as bridges between humans and systems and can generate human-readable explanations. But many knowledge graphs present knowledge as undisputed and unwavering, even when it is founded on approximations. The provenance of claims is often missing, making facts appear more authoritative while leaving users with no mechanism for tracing facts back to their original authors. As a result, the failure to communicate uncertainty to end users undermines the responsible use of AI, including users’ ability to evaluate the fairness and transparency of the knowledge presented. Communicating results in ways that accurately reflect the uncertainty of claims, and the reasons for that uncertainty, must be a significant part of any ethical knowledge graph system.
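To make the idea concrete, the sketch below shows one way a single knowledge graph claim could carry its source and a confidence estimate alongside the fact itself, so that both can be surfaced to the end user. It is a minimal illustration under my own assumptions: the Claim class, the confidence figure and the rendering logic are hypothetical, not a description of how any existing platform works.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    """A single knowledge graph statement plus its provenance and uncertainty."""
    subject: str
    predicate: str
    obj: str
    source: str        # where the claim was drawn from (hypothetical example)
    confidence: float  # the system's own reliability estimate, 0.0 to 1.0

def render_for_user(claim: Claim) -> str:
    """Present the fact together with its source and an explicit uncertainty cue."""
    hedge = "" if claim.confidence >= 0.9 else " (contested)"
    return (f"{claim.subject} {claim.predicate} {claim.obj}{hedge} "
            f"[source: {claim.source}, confidence: {claim.confidence:.0%}]")

# Illustrative only; the confidence value is invented for the example.
claim = Claim("Jerusalem", "is the capital of", "Israel",
              source="en.wikipedia.org", confidence=0.55)
print(render_for_user(claim))
# Jerusalem is the capital of Israel (contested) [source: en.wikipedia.org, confidence: 55%]
```

Presenting the source and the confidence estimate next to the fact, rather than hiding them, is one candidate answer to the design question the project investigates.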

Yet questions remain: 1) how does uncertainty affect users’ perceptions of the fairness and transparency of AI? and 2) what are effective approaches for communicating uncertainty to end users in AI-informed decision making? This project will investigate the knowledge graph as an approach for communicating the uncertainty of AI. Drawing together the expertise of HASS researchers on the social relationships embedded in knowledge systems, and of data scientists from the Data Science Institute (DSI) on the technical aspects, the project will identify ways of communicating the uncertainty of AI with knowledge graphs, including the provenance of knowledge, and the ways in which this affects user perceptions of fairness and transparency.

The RA will be responsible for working with me on a comparative analysis of knowledge graph results (UX, affordances, policies and practices) in terms of their communication of uncertainty, provenance of data sources and so on. We will also collaborate with the UTS Data Science Institute on experiments on AI-informed decision making under different uncertainty and knowledge graph conditions (led by the Data Science Institute team, including Distinguished Professor Fang Chen and Dr Jianlong Zhou).

Project 2: The ethics of knowledge graphs pilot project (October 4, 2021 until September 30, 2022)

Mentored by Professor Mark Graham and Professor Simon Buckingham-Shum

Knowledge graphs are data structures that store facts about the world so that they can be processed by AI systems. Ordinary people around the world encounter knowledge graphs every day. They power Google’s “knowledge panel”, for example: the fact box on the right-hand side of the page that appears after a Google search and lists facts about the entity a user is searching for. Knowledge graphs are at the centre of the world’s most powerful information retrieval, recommender and question-answering systems, including Google’s Home, Apple’s Siri and Amazon’s Alexa.
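For readers unfamiliar with the underlying representation, the following sketch shows a knowledge graph in its simplest form, a collection of subject-predicate-object triples, and a toy lookup that assembles a knowledge-panel-style summary for an entity. The triples and the lookup function are illustrative assumptions, not how Google, Siri or Alexa actually store or query their graphs.

```python
# A toy knowledge graph: each fact is a (subject, predicate, object) triple.
# The entities and predicates below are chosen only for illustration.
triples = [
    ("Sydney Opera House", "located in", "Sydney"),
    ("Sydney Opera House", "architect", "Jørn Utzon"),
    ("Sydney Opera House", "opened", "1973"),
    ("Jørn Utzon", "born in", "Copenhagen"),
]

def knowledge_panel(entity: str) -> dict:
    """Collect every stored fact about an entity, knowledge-panel style."""
    return {predicate: obj for subject, predicate, obj in triples if subject == entity}

print(knowledge_panel("Sydney Opera House"))
# {'located in': 'Sydney', 'architect': 'Jørn Utzon', 'opened': '1973'}
```

Even this toy version makes the key point of the project visible: the panel returns bare facts, with nothing about where each claim came from or how contested it might be.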

Knowledge graphs represent a significant moment in the history of knowledge technologies because they transformed the platform from a source of possible avenues for answering users’ queries into a source of singular facts. This change has not been adequately understood and its ethical implications are still largely unexamined. I began to explore knowledge graphs in a 2015 study with my PhD supervisor, Professor Mark Graham from the Oxford Internet Institute. We found that Google represented Jerusalem unequivocally as the capital of Israel (even though it was not recognised as such by the international community at the time) and that, although Google appeared to be sourcing claims in its knowledge graph from Wikipedia, it was representing Jerusalem very differently from the nuanced view on Wikipedia. I have since been collecting stories of knowledge graph failures informally. There is a significant opportunity to explore such failures systematically and to outline the key ethical challenges of knowledge graphs in ways that strengthen practitioners’ ability to counter them.

This project will result in a public database of knowledge graph failures that can be used in ethical impact assessment work and in computer and information science ethics education. It will empower engineers to develop solutions that stay close to the everyday experiences of ordinary users. This research will positively impact the user communities served by everyday automated knowledge technologies.

This pilot project has three goals.

  1. Produce a public database of knowledge graph failures from user and journalistic stories: 
    A website featuring an online database of knowledge graph failure cases from around the world (tentatively titled “knowledgegraphfail.net”) will be produced and published in July 2022. The database will be populated by news articles and social media posts about events in which knowledge graphs that power fact boxes and digital assistants have produced erroneous claims.
  2. Produce three case studies that can be used in computer and information science ethics classes: 
    Three examples in the database will be expanded, developed and illustrated as featured case studies for use in computer and information science ethics university classes and licensed under a Creative Commons Attribution-ShareAlike license that will enable onward distribution.
  3. Produce an academic paper about “Ordinary ethics of knowledge graphs for the Web” and present initial findings at a relevant digital media ethics conference: 
    Examples of knowledge graph failures will be classified according to ethical principles identified from the user stories in the database, following the “ordinary ethics” tradition. Ordinary ethics attends to how everyday practices reveal the moral commitments embedded in people’s actions, in contrast to the tendency to treat ethics as a form of argument or an abstraction.

I am looking for a Research Assistant to work with me on these two projects relating to knowledge and AI from a social science perspective. This will require working two or three days a week from the beginning of October for the next year, with the possibility of concentrating the work between terms (but with at least one major output due before the end of the year). The projects involve working closely with colleagues in Data Science and developing materials for use in data ethics classes. There are possibilities for co-publishing with the Research Assistant, depending on their skills and interests.

Skills required:

– Excellent writing skills;
– Experience with qualitative data analysis;
– Experience with project management;
– Must be organised and a self-starter.

Not necessary but a plus: 

– Ability to read and summarise technical literature;
– Knowledge of critical data studies, algorithm studies and the ethics of AI literature;
– Experience working in interdisciplinary teams.

Please send your CV and a brief email about why you would make a good candidate to me at Heather.Ford@uts.edu.au by the close of 22 September. Restricted to people with work rights in Australia.
