Data Ethics: Implications for Traditional Media, Social Media and the Communication Industry

This Thursday (3 March 2022), 2pm – 3.15pm (AEDT), a panel will be held to launch the new Technology, Media and Strategy (TMS) Research Group at the School of Communications, UTS. I’ll be talking about my new work on uncertainty in question-answering systems that I’m doing with colleagues at the university. Below is the order of events and the abstracts of some super interesting presentations relating to data ethics across the school. Ping me if you’d like the Zoom link.

Welcome: Professor and Associate Dean of Research FASS, Noel Castree
Overview of the Research Group’s Vision and Scope: Professor Maureen Taylor

Intimate AI: A definition and expert systems framework for user feedback and ethical AI development and implementation
Senior Lecturer Belinda Middleweek | Journalism

Human users are interacting with computers, robots, virtual agents and other AI-enabled technologies as if they were human. Ethical considerations are paramount in the development of technologies used in such intimate human relationships. In this article we examine current Human Computer and Human Robot Interaction research (HCI and HRI, respectively) and argue that the tendency, in terms such as ‘artificial partner’ and ‘artificial intimacy’, to cast as artificial the relationships people forge with technologies (rather than the intelligence itself) belies value judgments about the quality of those interactions. Through our analysis of an online ‘bug’ forum for a leading AI-enabled virtual partner app, we found that users desire a more human-like interaction and that the bonds they form with technology are no less legitimate than human-to-human interactions. We propose the more inclusive term “Intimate AI” to describe this relationship and, based on an expert systems approach to ‘bug’ report responses, propose a socially informed applied Expert Systems design framework for the programming and design of intimate AI that accounts for user feedback and ethical AI development.

Communicating uncertainty in AI and data-centric systems
Associate Professor Heather Ford | Digital and Social Media

We live in a world riddled with uncertainty. Big data and AI promise to alleviate the discomfort of uncertainty through their apparent access to a higher truth, obtained from colossal masses of empirical data. This promise is reinforced by technologists’ lack of attention to communicating the inevitable uncertainty of the data such systems present. I propose that communicating uncertainty in data and AI systems offers promise for enhancing users’ agency to make more informed decisions with data, and for weakening AI systems’ illusion of a god’s-eye view. Exploring a framework of uncertainty communication applied to the results of smart search (e.g. Google’s knowledge panels) in terms of the magnitude, source, level and object of uncertainty, I argue, enables us to reimagine more ethical data systems that millions of people use to make decisions in their everyday lives.

Limits to porting the Hippocratic Oath from medicine to data governance
Dr Suneel Jethani | Digital and Social Media

This presentation will report on recently published theoretical and ongoing empirical research that seeks to understand the limitations of a “Hippocratic Oath” as a guiding mechanism for contemporary data ethics. Having been raised in Wired, by senior figures at Microsoft, and by the European Data Protection Supervisor as a soft regulatory mechanism, the idea of an oath suggests that practitioners must act in the best interests of data subjects, avoid self-interest, and maintain the integrity of a “profession”. My critique of a Hippocratic Oath for data is structured by three questions. How does it function to shift blame from organisations to individual actors? To what extent can it function punitively? And how well does it apply across different industrial and regulatory contexts? I argue that there are significant limitations to the notion that a Hippocratic Oath-like mechanism could effectively prevent data harms. Primarily, this is because the assignment of moral responsibility in an oath focuses on individuals over institutions, and assumes that an agentic and altruistic form of individual decision making is possible in privatised, corporate, or governmental settings to the same extent as it is in medicine.

The ethics of social-mediated communication in COVID-related CSR
Professor Maureen Taylor | Strategic Communication

This presentation provides a digital communication ethics framework for understanding companies’ social-mediated corporate social responsibility (CSR) messaging. Our framework draws insights from research on dialogic communication and engagement and from recent business ethics scholarship on surveillance capitalism. The framework proposes that dialogic engagement and surveillance capitalism represent two different attitudes towards publics. While dialogic engagement encourages companies to embrace both opportunities and risks in an effort to co-create better relational outcomes for and with their publics, a surveillance capitalism approach leads companies to use algorithms to avoid risks and maximise gains in their strategic CSR communication and activities. This research analysed a longitudinal dataset on how U.S. Fortune 500 companies communicated about their COVID-19 related CSR efforts on social media. Our results overwhelmingly suggest that surveillance capitalism may be more prominent than dialogic communication in U.S. Fortune 500 companies’ CSR communication. Ethical and practical implications are discussed to provide a way forward for organisations to better use social media in ethical CSR communication.

Discussion: Professor Saba Bebawi

Prof. Bebawi will respond to the four presentations and then facilitate questions from the audience.