Title: Comparison of the Usage of Fairness Toolkits amongst Practitioners: AIF360 and Fairlearn
Author: Pandey, Harshitaa (TU Delft Electrical Engineering, Mathematics and Computer Science)
Contributors: Gadiraju, Ujwal (mentor); Balayn, A.M.A. (mentor); Yang, J. (mentor); Broz, F. (graduation committee)
Degree granting institution: Delft University of Technology
Programme: Computer Science and Engineering
Project: CSE3000 Research Project
Date: 2022-06-24

Abstract: Machine learning remains one of the most rapidly growing fields and is used in a variety of sectors such as education, healthcare, and financial modeling (Jordan and Mitchell 2015). Along with this demand for machine learning algorithms comes a need to ensure that these algorithms are fair and contain little to no bias. Tools such as Fairlearn and AI Fairness 360 (AIF360) allow developers and data scientists to examine their models against specified fairness metrics and to mitigate fairness-related issues. This study aims to determine how practitioners use the two toolkits and whether their practices differ depending on the toolkit they choose. To do this, we conducted 29 think-aloud interviews with industry practitioners to understand how they would use Fairlearn and AIF360 in practice. The results show that fairness is a socio-technical challenge: while a toolkit does allow participants to structure their approach and raises awareness of fairness-related harms, in the end it only provides technical help in finding harms the individual was already aware of. Based on these findings, we suggest a design for a fairness toolkit that can help practitioners approach fairness in a more ideal manner.
This toolkit would include support for interdisciplinary collaboration, place a larger focus on explainability, and give clear guidance to its users regarding fairness-related harms.

Subject: Machine learning; Fairlearn; Fairness Toolkits; AIF360
To reference this document use: http://resolver.tudelft.nl/uuid:4ef11035-2f60-436f-85f9-7b9bed73b66d
Part of collection: Student theses
Document type: bachelor thesis
Rights: © 2022 Harshitaa Pandey
Files: PDF combinepdf_1_.pdf (751.07 KB)
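As an illustration of the kind of fairness metric both toolkits expose, the sketch below computes a demographic parity difference: the gap in positive-prediction rates between groups defined by a sensitive attribute. This is a minimal plain-Python sketch, not the Fairlearn or AIF360 implementation, and the predictions and group labels are invented for illustration:

```python
# Minimal sketch of a common fairness metric: demographic parity difference,
# i.e. the largest gap in positive-prediction ("selection") rate between any
# two groups of a sensitive attribute. Toolkits such as Fairlearn and AIF360
# provide this metric (among many others) computed from model predictions.

def demographic_parity_difference(y_pred, sensitive):
    """Largest difference in selection rate between any two groups."""
    counts = {}  # group -> (positive predictions, total predictions)
    for pred, group in zip(y_pred, sensitive):
        pos, total = counts.get(group, (0, 0))
        counts[group] = (pos + pred, total + 1)
    rates = [pos / total for pos, total in counts.values()]
    return max(rates) - min(rates)

# Hypothetical binary predictions and a binary sensitive attribute:
y_pred    = [1, 0, 1, 1, 0, 1, 0, 0]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, sensitive))  # group a: 3/4, group b: 1/4 -> 0.5
```

A value of 0 would mean both groups are selected at the same rate; the larger the gap, the stronger the disparity the practitioner may need to mitigate.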