Assessing the Fairness of AI Recruitment Systems

Abstract

Businesses have integrated Artificial Intelligence (AI) into many of their operational activities, such as marketing, sales, and finance, for its speed and cost-effectiveness. Lately, AI has also found applications in organizational recruitment processes. Unlike conventional rule-based systems, present-day AI systems learn from patterns in data, supported by growing volumes of (big) data and increasing computing capacity, and make decisions independently, without human intervention. The perception that AI is therefore fact-oriented and unbiased has driven this change in organizational recruitment practices. Although recent studies have shown that AI decisions can be unfair, scientific research on the fairness of AI recruitment systems is limited. This research fills that gap by designing a conceptual model, drawing on the information systems and responsible innovation literature, to assist top-level HR managers in assessing the fairness of AI recruitment tools.

Guided by Design Science Research (DSR), the development of the model entailed three research cycles: the relevance cycle (focused on the design environment), the rigor cycle (focused on the existing knowledge base), and the design cycle (focused on development and evaluation). The design environment was explored by reviewing the literature on fairness in recruitment and on algorithmic bias. Understanding both recruitment fairness and the potential causes of unfairness in AI helped define the goal of the conceptual model.

The design cycle was informed by the design principles for responsible AI, namely Accountability, Responsibility, and Transparency (ART), and by the General Data Protection Regulation (GDPR). The model presents seven dimensions that translate these principles into design requirements for assessing the fairness of an AI recruitment system: (1) Justification; (2) Explanation; (3) Anticipation; (4) Reflexiveness; (5) Inclusion; (6) Responsiveness; and (7) Auditability. The model also ties these concepts to specific criteria of conventional recruitment fairness, such as consistency, interpersonal fairness, job-relatedness, and statistical parity. Finally, the completeness of the model was evaluated by discussing its alignment with other frameworks that have similar objectives, and the utility of the model was validated by collecting feedback from its intended users.
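For concreteness, statistical parity, one of the fairness criteria named above, is commonly formalized as equal selection rates across protected groups. The following is a minimal sketch of that standard definition, not the notation used in the thesis itself:

$$P(\hat{Y} = 1 \mid A = a) = P(\hat{Y} = 1 \mid A = b) \quad \text{for all groups } a, b,$$

where $\hat{Y} = 1$ denotes a candidate being selected by the system and $A$ denotes a protected attribute such as gender or ethnicity.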

This thesis project makes several scientific and practical contributions. The research discusses the potential risks of using AI in the context of HR recruitment systems, thereby contributing to the limited literature available in this respect. By using the DSR methodology to build the assessment model, this research serves as a case for applying DSR to the design of a non-IS artifact. Furthermore, the thesis unifies scattered studies on recruitment justice to provide a comprehensive overview of the characteristics of a fair recruitment system.

Building on these theoretical contributions, the study has developed an assessment model to assist top-level HR managers in assessing the fairness of an AI recruitment tool. Employing this assessment model can benefit both the organization and society by mitigating the unfairness or bias that AI recruitment tools can introduce, and it would also raise awareness of the risks of AI. Given that the GDPR (Article 35) mandates that organizations assess the impact of introducing automated processing for new contexts or purposes, the assessment model designed in this study supports compliance with these regulations.