Conference Paper, 2019

Evidence Humans Provide When Explaining Data-Labeling Decisions

Judah Newman
Bowen Wang
Valerie Zhao
Amy Zeng
Michael L. Littman
Blase Ur

Abstract

Because machine learning would benefit from reduced data requirements, some prior work has proposed using humans not just to label data, but also to explain those labels. To characterize the evidence humans might want to provide, we conducted a user study and a data experiment. In the user study, 75 participants provided classification labels for 20 photos, justifying those labels with free-text explanations. Explanations frequently referenced concepts (objects and attributes) in the image, yet 26% of explanations invoked concepts not in the image. Boolean logic was common in implicit form, but was rarely explicit. In a follow-up experiment on the Visual Genome dataset, we found that some concepts could be partially defined through their relationship to frequently co-occurring concepts, rather than only through labeling.
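As a rough illustration of the co-occurrence idea from the data experiment, the sketch below (hypothetical Python with made-up image annotations, not the actual Visual Genome data or the authors' code) counts how often pairs of concepts appear in the same image and reports, for a given concept, the concepts that most frequently co-occur with it.

    from collections import Counter
    from itertools import combinations

    # Hypothetical per-image concept annotations (stand-ins for objects/attributes).
    images = [
        {"dog", "grass", "frisbee"},
        {"dog", "grass", "tree"},
        {"cat", "sofa", "window"},
        {"dog", "frisbee", "park"},
    ]

    concept_counts = Counter()
    pair_counts = Counter()
    for concepts in images:
        concept_counts.update(concepts)
        # Count each unordered pair of concepts appearing together in one image.
        pair_counts.update(frozenset(p) for p in combinations(sorted(concepts), 2))

    def top_cooccurring(concept, k=3):
        """Concepts that most often share an image with `concept`,
        ranked by co-occurrence rate relative to how often `concept` appears."""
        scores = {}
        for pair, n in pair_counts.items():
            if concept in pair:
                (other,) = pair - {concept}
                scores[other] = n / concept_counts[concept]
        return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

    print(top_cooccurring("dog"))  # e.g. "grass" and "frisbee" rank highest for "dog"

In this toy data, "dog" co-occurs with "grass" and "frisbee" in two of its three images, so those concepts would serve as partial, co-occurrence-based evidence for the "dog" concept.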
Main file: 488593_1_En_22_Chapter.pdf (3.15 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-02553853, version 1 (24-04-2020)

Licence

Attribution

Identifiers

  • HAL Id: hal-02553853
  • DOI: 10.1007/978-3-030-29387-1_22

Cite

Judah Newman, Bowen Wang, Valerie Zhao, Amy Zeng, Michael L. Littman, et al. Evidence Humans Provide When Explaining Data-Labeling Decisions. 17th IFIP Conference on Human-Computer Interaction (INTERACT), Sep 2019, Paphos, Cyprus. pp.390-409, ⟨10.1007/978-3-030-29387-1_22⟩. ⟨hal-02553853⟩