Situating Explainable AI in the socio-technical context

A system safety inspired approach to operationalizing explainability


Abstract

Explainable AI is the field concerned with making AI understandable to humans. While efforts have led to significant improvements in research and practical methods of Explainable AI, there is an urgent need for additional research and empirical studies. The academic research gaps identified in this thesis show that Explainable AI is still in its infancy and is mostly approached from a technocentric perspective, rather than being focused on the audience for whom the explainability is actually intended. In addition, there is no structured approach to defining and establishing explainability in dynamic, complex systems that involve human, institutional, and organizational elements. Lastly, there are limited empirical studies that investigate the needs, usage, and risks of explainability in complex systems.

This research aims to define and address explainability in the socio-technical context of Machine Learning decision support systems for Transaction Monitoring. It does so by performing an extensive literature review and by conducting semi-structured interviews to collect empirical knowledge from local practice. The goal of this research is to expand the definitions of, and views on, explainability by incorporating the social, organizational, and institutional elements that influence it. A further goal is to develop a method that helps practitioners approach explainability in a structured manner, taking the audience into account while applying this socio-technical perspective.

The resulting user-centered method for operationalizing explainability takes a socio-technical perspective and provides requirements for design choices; in addition, the method shows how these requirements can be satisfied and controlled by instantiating control structures. The method was demonstrated and evaluated within the bank through a workshop using a Toy Case during a focus group. Practitioners experienced the method as useful and actionable; they also found that it provides broader perspectives and insights on explainability and invites discussion of dilemmas and questions. The practitioners added that the method could be further refined with additional guidance on the control structure, because the method assumes prerequisite knowledge of systems theory and system safety theory.