ISU Research Highlights Prevalence of Unfairness in Machine Learning Models

May 26, 2020

Machine learning models are increasingly used to make important decisions that affect human lives. Automated, machine-learning-based software now approves bank loans, recommends criminal sentences, screens job applicants, and more. Several such systems have been reported to exhibit bias against specific groups of people. It is therefore essential to ensure that the predictions of these models are fair: no group or individual should be discriminated against on the basis of protected attributes such as race, sex, age, or religious belief.

ISU Department of Computer Science researchers Sumon Biswas and Hridesh Rajan are studying fairness issues in real-world machine learning models and techniques for mitigating them. They collected top-rated models from Kaggle, a crowd-sourced data science platform, and studied the discrimination exhibited by those models. They built a benchmark of such models and conducted a detailed evaluation of their bias using a comprehensive set of fairness metrics. Their results show that all of the models exhibit bias in one form or another. They point out that machine learning developers and library builders should follow certain guidelines to avoid bias, and they present software design ideas as fairness remedies for these models. Additionally, the study evaluated promising bias mitigation algorithms and compared their effects. Finally, the results show the further impacts of the different mitigation techniques: the analysis reveals that there is often a trade-off between a model's performance and its fairness. Software developers should therefore be careful when optimizing models for accuracy, since doing so can introduce unfairness into the predictions.
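To illustrate what such a fairness evaluation measures, the minimal Python sketch below computes two widely used group-fairness metrics, statistical parity difference and disparate impact, from a model's binary predictions and a binary protected attribute. It is for illustration only; the function name and toy data are ours and are not taken from the paper, which evaluates a much broader set of metrics.

    import numpy as np

    def group_fairness_metrics(y_pred, protected):
        """Compute two common group-fairness metrics for binary predictions.

        y_pred:    0/1 predictions, where 1 is the favorable outcome
        protected: 0/1 group membership, where 1 is the privileged group
        """
        y_pred = np.asarray(y_pred)
        protected = np.asarray(protected)

        # Favorable-outcome rate within each group.
        rate_privileged = y_pred[protected == 1].mean()
        rate_unprivileged = y_pred[protected == 0].mean()

        # Statistical parity difference: 0 means parity; negative values
        # mean the unprivileged group receives fewer favorable outcomes.
        spd = rate_unprivileged - rate_privileged

        # Disparate impact: ratio of the two rates; values well below 1
        # (a common rule of thumb is < 0.8) signal potential bias.
        di = rate_unprivileged / rate_privileged

        return {"statistical_parity_difference": spd,
                "disparate_impact": di}

    # Toy example: a model that favors the privileged group.
    predictions = [1, 1, 1, 0, 1, 0, 0, 0]
    groups      = [1, 1, 1, 1, 0, 0, 0, 0]
    print(group_fairness_metrics(predictions, groups))
    # {'statistical_parity_difference': -0.5, 'disparate_impact': 0.333...}

A fair model would have a statistical parity difference near 0 and a disparate impact near 1; bias mitigation algorithms aim to push these metrics toward those values, often at some cost in accuracy, which is the performance-fairness trade-off described above.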

Biswas and Rajan will present the results in the paper entitled “Do the Machine Learning Models on a Crowd Sourced Platform Exhibit Bias? An Empirical Study on Model Fairness”. The paper has been accepted to the research track of the 28th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE). According to https://2020.esec-fse.org/, “ESEC/FSE is an internationally renowned forum for researchers, practitioners, and educators to present and discuss the most recent innovations, trends, experiences, and challenges in the field of software engineering. ESEC/FSE brings together experts from academia and industry to exchange the latest research results and trends as well as their practical application in all areas of software engineering.” The research track was highly competitive, with a 28% acceptance rate. The conference will be held in Sacramento, California, November 8-13, 2020. A preprint of the paper is available at https://lab-design.github.io/papers/ESEC-FSE-20a/ml-fairness.pdf.

The authors have expressed their excitement about the paper's acceptance at a top venue in the field. They are continuing this research to explore new avenues and ensure the fairness of model decisions.
