From discrimination to disinformation: What it means to investigate AI harms of today

Pia Sombetzki, Likhita Banerji, Naomi Appelman

Summary
Civil society actors and journalists have been instrumental in uncovering many of the current-day harms of AI, from biased welfare algorithms to disinformation spread through generative AI. Let's take a moment to take stock and to explore the difficulties encountered along the way.
Conversation
English
Conference

How far should we go when it comes to integrating Artificial Intelligence (AI) and automated decision-making systems into our daily lives? As usual in heated debates, there's no lack of opinions on the matter.

The need for systematic and independent investigations of new applications that turn entire sectors upside down is clear. Such investigations do not only provide interesting insights; they also serve society as a whole. Only on the basis of what we know can our political leaders, and we as a society, make decisions in the public interest – both in the near and distant future.

In this session, we will take a look at different approaches to investigating the impact of AI in areas where fundamental rights and democracy are at risk: from testing applications and systems from the inside out, to talking to affected people on the ground, to journalistic investigation. What can we hope to find out with each approach? Where do we encounter obstacles? And why should we still care about investigating harms when getting access to information is so often cumbersome?

This is a portrait photo of Pia Sombetzki. She is wearing a blue V-neck top and glasses, has long hair, and smiles with her mouth closed.
Policy & Advocacy Manager
Photo of Likhita Banerji
Head of the Algorithmic Accountability Lab, Amnesty Tech