Design guidelines to protect stakeholders’ values in AI systems

Based on a use case situated in the Japanese life insurance industry

Abstract

Integrating artificial intelligence (AI) into Japan's life insurance sector marks a significant move towards data-centric precision, reflecting the nation's shift towards Society 5.0. In this domain, AI is transforming decision-making and enhancing operational efficiency, with applications ranging from fraud detection in credit card systems to predictive underwriting. While AI offers notable benefits, it also introduces risks, including privacy breaches, algorithmic biases, and inadequate human oversight. To address these risks, the Japanese government has issued guidelines to protect society, but gaps remain in their implementation in the insurance industry, especially in translating social norms into the industry's context to protect stakeholders' values.

The industry therefore needs a guide for safely designing, developing, and deploying AI systems that takes stakeholders' perspectives into account. Such a guide addresses two knowledge gaps: the lack of a framework for translating high-level values into requirements for the Japanese life insurance industry, and the lack of an initial process for converting these high-level values into organizational guidelines.

An empirical study on predictive underwriting informed the research, identifying 13 values and four informal social institutions relevant to the AI design process. The study involved eight experts, who defined 54 norms that were subsequently refined and categorized into process and assessment norms focusing on data and AI.

The result is ten design guidelines for AI system developers, validated by experts and addressing the full AI lifecycle. These guidelines contribute scientifically by introducing an initial process that combines design for values with system safety concepts, reporting standardization, and AI governance frameworks.

Future research should replicate this process in other contexts, reevaluate the value framework with broader stakeholder input, investigate the dynamics between Japanese society and AI in more detail, and delve deeper into system-theoretic hazard analysis. This would strengthen the value framework and the applicability of the process in different organizational settings.