Methods for Mitigating Gender Bias in Binary Classification Models – A Comparative Analysis
Abstract
Inequality is one of the pressing problems of the modern world, and discrimination of various kinds can affect many areas of life. The growing importance of data makes it all the more important to ensure that the methods used to analyze it do not produce unfair results. Unfortunately, the predictions of machine learning models can exhibit such unfairness. Over the years, researchers in this area of artificial intelligence have developed methods for mitigating bias in models. The purpose of this article is to identify gender bias in a selected dataset and to compare selected solutions in order to determine which achieves the best result when used to mitigate the impact of this type of unfairness on a model’s predictions. The following research methods were used: literature review, experiment, and comparative analysis. The methods are evaluated using two measures: disparity in recall and disparity in selection rate, computed with respect to the column containing information about the person’s gender. The values of these measures, achieved by binary classification models in which the different bias mitigation methods were implemented, are compared in order to identify which method is best suited for mitigating gender bias in binary classification models.
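As a minimal sketch of the two evaluation measures named above, the following Python snippet computes the disparity in recall and the disparity in selection rate across groups defined by a gender column. The toy arrays and helper names are hypothetical illustrations, not data or code from the article's experiment; the disparity is taken here as the difference between the maximum and minimum group-wise metric values, one common convention.

```python
import numpy as np

def recall(y_true, y_pred):
    """Fraction of actual positives that the model predicts as positive."""
    positives = y_true == 1
    return y_pred[positives].mean()

def selection_rate(y_pred):
    """Fraction of all instances predicted as positive."""
    return y_pred.mean()

def disparity(metric_by_group):
    """Difference between the largest and smallest group-wise metric value."""
    values = list(metric_by_group.values())
    return max(values) - min(values)

# Hypothetical labels, predictions, and gender column for illustration only.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
gender = np.array(["F", "M", "F", "M", "F", "M", "F", "M"])

recall_by_group = {g: recall(y_true[gender == g], y_pred[gender == g])
                   for g in np.unique(gender)}
selection_by_group = {g: selection_rate(y_pred[gender == g])
                      for g in np.unique(gender)}

print("disparity in recall:", disparity(recall_by_group))
print("disparity in selection rate:", disparity(selection_by_group))
```

A model is considered fairer with respect to gender when both disparities are close to zero, i.e., when the two groups are selected at similar rates and the model recovers positives equally well in each group.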