Calibrating Human-AI Collaboration: Impact of Risk, Ambiguity and Transparency on Algorithmic Bias

Conference Papers · Year: 2020

Philipp Schmidt (Author)
Felix Biessmann (Author)

Abstract

Transparent Machine Learning (ML) is often argued to increase trust in the predictions of algorithms. However, the growth of new interpretability approaches has not been accompanied by a corresponding growth in studies investigating how the interaction of humans and Artificial Intelligence (AI) systems benefits from transparency. The right level of transparency can increase trust in an AI system, while inappropriate levels of transparency can lead to algorithmic bias. In this study we demonstrate that, depending on certain personality traits, humans exhibit different susceptibilities to algorithmic bias. Our main finding is that susceptibility to algorithmic bias depends significantly on annotators' affinity to risk. These findings help to shed light on the previously underrepresented role of human personality in human-AI interaction. We believe that taking these aspects into account when building transparent AI systems can help to ensure more responsible usage of AI systems.
Main file

497121_1_En_24_Chapter.pdf (489.63 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03414725, version 1 (04-11-2021)

Licence

Attribution

Identifiers

  • HAL Id: hal-03414725
  • DOI: 10.1007/978-3-030-57321-8_24

Cite

Philipp Schmidt, Felix Biessmann. Calibrating Human-AI Collaboration: Impact of Risk, Ambiguity and Transparency on Algorithmic Bias. 4th International Cross-Domain Conference for Machine Learning and Knowledge Extraction (CD-MAKE), Aug 2020, Dublin, Ireland. pp.431-449, ⟨10.1007/978-3-030-57321-8_24⟩. ⟨hal-03414725⟩