The FATE System Iterated

Fair, Transparent and Explainable Decision Making in a Juridical Case


Abstract

The goal of the FATE system is to support decision making using state-of-the-art human-AI co-learning, explainable AI, and fair, secure and privacy-preserving use of data. The AI-based support system is generic: its modules can be tuned to specific use cases. The FATE system is designed to address different user roles, such as researcher, domain expert/consultant and subject/patient, each with their own requirements. Having previously examined a Diabetes Type 2 use case, in this paper we present a slight iteration of the FATE system and focus on a juridical use case. For a given new juridical case, the system suggests relevant older court cases. The suggested cases can be explained using the eXplainable AI (XAI) module, and the system can be improved through user interaction in the Co-learning module, based on feedback about the suggested cases. The Bias module investigates the use of the system for potential bias by inspecting the properties of the suggested cases. Secure Learning offers privacy-by-design alternatives to functionality found in the aforementioned modules. These results show how the generic FATE system can be applied to a range of real-world use cases. In future work we plan to explore further use cases within this system.
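
As an illustration only (the paper does not specify the retrieval method used by the system), the following Python sketch shows one way the suggestion of relevant older court cases could work: older cases and a new case are represented as TF-IDF vectors and the older cases are ranked by cosine similarity to the new one. The case texts, corpus and choice of representation are hypothetical.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical corpus of older court cases (placeholder texts).
    older_cases = [
        "Dispute over termination of a rental contract and the required notice period.",
        "Employment dismissal case concerning severance payment.",
        "Contract dispute about late delivery of goods and damages.",
    ]

    # Hypothetical new juridical case to be matched against the corpus.
    new_case = "Tenant contests termination of a rental agreement without proper notice."

    # Represent the older cases as TF-IDF vectors and project the new case
    # into the same vector space.
    vectorizer = TfidfVectorizer(stop_words="english")
    case_matrix = vectorizer.fit_transform(older_cases)
    query_vector = vectorizer.transform([new_case])

    # Rank older cases by cosine similarity to the new case (highest first).
    similarities = cosine_similarity(query_vector, case_matrix).ravel()
    for idx, score in sorted(enumerate(similarities), key=lambda x: x[1], reverse=True):
        print(f"older case {idx}: similarity {score:.2f}")

In a full system the ranked suggestions would then feed the XAI module (explaining why a case was suggested), the Co-learning module (incorporating user feedback on the ranking) and the Bias module (inspecting properties of the suggested cases); this sketch covers only the retrieval step.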