Reduce model unfairness with maximal-correlation-based fairness optimization
W. Huang (TU Delft - Electrical Engineering, Mathematics and Computer Science)
M. Loog – Mentor (TU Delft - Pattern Recognition and Bioinformatics)
Jan van Gemert – Graduation committee member (TU Delft - Pattern Recognition and Bioinformatics)
Christoph Lofi – Graduation committee member (TU Delft - Web Information Systems)
Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.
Abstract
Supervised machine learning is an increasingly common assistive framework for professional decision-making. Yet bias that causes unfair discrimination is often already present in the datasets. This research proposes a method to reduce model unfairness during the training process without altering the sample values or the prediction labels. Using an objective function that identifies the biased feature via maximal correlation estimation, the method selects the samples used to train the updated classifier. The quality of this sample selection determines the extent of the unfairness reduction. With an adequate sample size, we demonstrate that the method reduces model unfairness without severely sacrificing classification accuracy. We tested the method on multiple benchmark datasets, using demographic parity and feature independence as the notions of a statistically fair classification model.
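The two statistical notions named in the abstract can be made concrete for discrete variables. Demographic parity compares positive-prediction rates across groups defined by a sensitive attribute, and the Hirschfeld-Gebelein-Rényi (HGR) maximal correlation between two discrete variables can be computed as the second-largest singular value of the normalized joint-distribution matrix. The sketch below illustrates both quantities only; the function names and toy data are hypothetical, and the thesis's actual estimator, objective function, and sample-selection procedure are not reproduced here.

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-prediction rates between the two
    groups of a binary sensitive attribute.  A classifier satisfies
    demographic parity when this gap is zero."""
    y_pred = np.asarray(y_pred, dtype=float)
    sensitive = np.asarray(sensitive)
    rate_0 = y_pred[sensitive == 0].mean()
    rate_1 = y_pred[sensitive == 1].mean()
    return abs(rate_0 - rate_1)

def hgr_maximal_correlation(x, y):
    """HGR maximal correlation for two discrete variables, estimated as
    the second-largest singular value of the matrix
    B_ij = P(x_i, y_j) / sqrt(P(x_i) * P(y_j)).
    The largest singular value of B is always 1; the next one measures
    the strongest correlation achievable under any transformations."""
    x = np.asarray(x)
    y = np.asarray(y)
    _, xi = np.unique(x, return_inverse=True)
    _, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((xi.max() + 1, yi.max() + 1))
    np.add.at(joint, (xi, yi), 1.0)          # empirical joint counts
    joint /= joint.sum()                     # joint distribution
    px = joint.sum(axis=1)                   # marginal of x
    py = joint.sum(axis=0)                   # marginal of y
    B = joint / np.sqrt(np.outer(px, py))
    s = np.linalg.svd(B, compute_uv=False)   # singular values, descending
    return s[1] if len(s) > 1 else 0.0

# Toy example: group 0 receives positive predictions 75% of the time,
# group 1 only 25%, giving a demographic-parity gap of 0.5.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
sens   = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(y_pred, sens))        # -> 0.5
# A feature identical to the sensitive attribute is maximally
# correlated with it; an independent feature scores (near) zero.
print(hgr_maximal_correlation(sens, sens))
print(hgr_maximal_correlation([0, 0, 1, 1], [0, 1, 0, 1]))
```

In this reading, a feature whose maximal correlation with the sensitive attribute is high is a candidate "biased feature", and driving that correlation down (here, via the thesis's sample selection) pushes the model toward demographic parity.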