SPATIAL
Practical AI Trustworthiness with Human Oversight
Abdul-Rasheed Ottun (University of Tartu)
Rasinthe Marasinghe (University of Tartu)
Toluwani Elemosho (University of Tartu)
Mohan Liyanage (University of Tartu)
Ashfaq Hussain Ahmed (University of Tartu)
Michell Boerger (Fraunhofer Institute for Open Communication Systems)
Chamara Sandeepa (University College Dublin)
Thulitha Senevirathna (University College Dublin)
Aaron Yi Ding (TU Delft - Information and Communication Technology)
Abstract
We demonstrate SPATIAL, a proof-of-concept system that augments modern applications with capabilities to analyze the trustworthy properties of AI models. Practical analysis of trustworthy properties is key to guaranteeing the safety of users, and of society at large, when interacting with AI-driven applications. SPATIAL implements AI dashboards that introduce human-in-the-loop capabilities into the construction of AI models, allowing different stakeholders to obtain quantifiable insights that characterize the decision-making process of AI. Stakeholders can then use this information to identify issues that influence the performance of AI models, so that human operators can resolve them. Through rigorous benchmarks and experiments in a real-world industrial application, we demonstrate that SPATIAL can easily augment modern applications with metrics to gauge and monitor trustworthiness. However, this in turn increases the complexity of developing and maintaining the systems that implement AI. Our work paves the way towards augmenting modern applications with trustworthy AI mechanisms and human oversight approaches.