Bias Detection and Generalization in AI Algorithms on Edge for Autonomous Driving

Conference Paper (2023)
Author(s)

D. Katare (TU Delft - Information and Communication Technology)

Nicolas Kourtellis (Telefónica Research)

Souneil Park (Telefónica Research)

Diego Perino (Telefónica Research)

M.F.W.H.A. Janssen (TU Delft - Engineering, Systems and Services)

Aaron Yi Ding (TU Delft - Information and Communication Technology)

Research Group
Information and Communication Technology
Copyright
© 2023 D. Katare, Nicolas Kourtellis, Souneil Park, Diego Perino, M.F.W.H.A. Janssen, Aaron Yi Ding
DOI related publication
https://doi.org/10.1109/SEC54971.2022.00050
Publication Year
2023
Language
English
Pages (from-to)
342-348
ISBN (electronic)
9781665486118
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

A machine learning model can often produce biased outputs for a familiar group or for similar sets of classes during inference on an unknown dataset. The generalization of neural networks has been studied as a way to resolve such biases, and has also shown improvements in accuracy and in performance metrics such as precision and recall, as well as in refining the dataset's validation set. The data distribution and the instances included in the test and validation sets play a significant role in improving the generalization of neural networks. To produce an unbiased AI model, it is not enough to train it to achieve high accuracy and minimize false positives; the goal should also be to prevent one class or feature from dominating another when the weights are calculated. This paper investigates state-of-the-art object detection/classification AI models using metrics such as selectivity score and cosine similarity. We focus on perception tasks in vehicular edge scenarios, which generally involve collaborative tasks and model updates based on weights. The analysis covers cases that differ in data diversity, in the viewpoint of the input class, and in combinations of both. Our results show the potential of cosine similarity, selectivity score and invariance for measuring training bias, which sheds light on developing unbiased AI models for future vehicular edge services.
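
As a rough illustration of how metrics like those named in the abstract can be computed, the sketch below compares per-class feature vectors with cosine similarity and scores a single unit's class selectivity. This is a minimal sketch under assumptions, not the authors' code: the function names and example vectors are hypothetical, and the selectivity score follows the common (mu_max - mu_rest)/(mu_max + mu_rest) definition from the interpretability literature, which may differ from the paper's exact formulation.

    # Hedged sketch: cosine similarity between per-class representations and a
    # per-unit class selectivity score, two candidate measures of training bias.
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity between two feature/weight vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def class_selectivity(mean_activations: np.ndarray) -> float:
        """Selectivity of one unit from its mean activation per class.

        Uses (mu_max - mu_rest) / (mu_max + mu_rest), where mu_rest is the mean
        activation over all non-preferred classes; values near 1 indicate the
        unit responds almost exclusively to a single class.
        """
        mu_max = mean_activations.max()
        mu_rest = np.delete(mean_activations, mean_activations.argmax()).mean()
        return float((mu_max - mu_rest) / (mu_max + mu_rest + 1e-12))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Hypothetical per-class mean feature vectors, e.g. averaged embeddings
        # of "car" detections drawn from two different data distributions.
        car_a, car_b = rng.normal(size=128), rng.normal(size=128)
        print("cosine similarity:", cosine_similarity(car_a, car_b))

        # Hypothetical mean activations of one unit across five classes.
        print("selectivity:", class_selectivity(np.array([0.9, 0.1, 0.05, 0.2, 0.1])))

In this reading, a high selectivity score or a low cross-distribution cosine similarity would flag a class or feature that dominates the learned weights, which is the kind of training bias the paper sets out to measure.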

Files

Bias_Detection_and_Generalizat... (pdf, 1.07 MB)
Embargo expired on 01-07-2023