Bias in Face Image Classification Machine Learning Models: The Impact of Annotator’s Gender and Race
Abstract
The quality of the data used during training is an important factor in ensuring the correct operation of Machine Learning models. Training data is often annotated by humans, a process that may introduce annotation bias. In this study, we focus on face image classification and aim to quantify the effect of annotation bias introduced by different groups of annotators, thereby enabling a better understanding of the problems that arise due to annotation bias. The results of our experiments indicate that the performance of Machine Learning models in several face image interpretation tasks is correlated with the self-reported demographic characteristics of the annotators. In particular, we found a significant correlation with annotator race, while the correlation with annotator gender was less pronounced. Furthermore, experimental results show that it is possible to determine the demographic group of an annotator from the annotation data they provide, even for annotators not seen during training. These results emphasize the risks of annotation bias in Machine Learning models.