SPATIAL

Practical AI Trustworthiness with Human Oversight

Conference Paper (2024)
Author(s)

Abdul-Rasheed Ottun (University of Tartu)

Rasinthe Marasinghe (University of Tartu)

Toluwani Elemosho (University of Tartu)

Mohan Liyanage (University of Tartu)

Ashfaq Hussain Ahmed (University of Tartu)

Michell Boerger (Fraunhofer Institute for Open Communication Systems)

Chamara Sandeepa (University College Dublin)

Thulitha Senevirathna (University College Dublin)

Aaron Yi Ding (TU Delft - Information and Communication Technology)

and more authors

Research Group
Information and Communication Technology
DOI
https://doi.org/10.1109/ICDCS60910.2024.00138 (final published version)
Publication Year
2024
Language
English
Bibliographical Note
Green Open Access added to TU Delft Institutional Repository: 'You share, we take care!' - Taverne project, https://www.openaccess.nl/en/you-share-we-take-care. Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.
Pages (from-to)
1427-1430
ISBN (print)
979-8-3503-8606-6
ISBN (electronic)
979-8-3503-8605-9
Event
44th IEEE International Conference on Distributed Computing Systems (23-26 July 2024), Jersey City, United States
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

We demonstrate SPATIAL, a proof-of-concept system that augments modern applications with capabilities to analyze trustworthy properties of AI models. The practical analysis of trustworthy properties is key to guaranteeing the safety of users and of society at large when interacting with AI-driven applications. SPATIAL implements AI dashboards that introduce human-in-the-loop capabilities into the construction of AI models. SPATIAL allows different stakeholders to obtain quantifiable insights that characterize the decision-making process of AI. Stakeholders can then use this information to understand issues that influence the performance of AI models, so that human operators can resolve them. Through rigorous benchmarks and experiments in a real-world industrial application, we demonstrate that SPATIAL can easily augment modern applications with metrics to gauge and monitor trustworthiness. However, this, in turn, increases the complexity of developing and maintaining the systems implementing AI. Our work paves the way towards augmenting modern applications with trustworthy AI mechanisms and human oversight approaches.

Files

SPATIAL_Practical_AI_Trustwort... (PDF, 0.724 MB)
Embargo expired on 22-02-2025
License info not available