Adapting Explainable AI methods for multi-target tasks

Addressing challenges and inter-keypoint dependencies in cricket pose analysis

Bachelor Thesis (2025)
Author(s)

A.M. Semov (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Ujwal Gadiraju – Mentor (TU Delft - Web Information Systems)

D. Zhan – Mentor (TU Delft - Web Information Systems)

Faculty
Electrical Engineering, Mathematics and Computer Science
Publication Year
2025
Language
English
Graduation Date
27-06-2025
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Pose estimation models predict multiple interdependent body keypoints, making them a prototypical example of multi-target tasks in machine learning. While existing explainable AI (XAI) techniques have advanced our ability to interpret model outputs in single-target domains, their application to structured outputs remains underdeveloped. This work investigates how XAI methods can be adapted to explain pose estimation models, particularly in the context of cricket shot analysis. Guided by three research questions, we identify key challenges such as capturing inter-keypoint dependencies and providing interpretable explanations of structured outputs. We analyze both the geometric and the heatmap-level behavior of a pretrained pose estimation model as it distinguishes between two cricket shots: the pull and the cover drive. Through techniques such as cosine similarity on heatmaps and polynomial trajectory modeling, we reveal how the model internally differentiates between similar motion patterns. Our framework introduces novel techniques for inter-keypoint explanation, contributes domain-specific insights into model behavior, and demonstrates the feasibility of interpretable structured predictions in high-dimensional, real-world tasks.
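
The abstract names two concrete techniques: cosine similarity on keypoint heatmaps and polynomial trajectory modeling. The following minimal Python sketch illustrates both on synthetic data; it is not the thesis code, and the 64x64 heatmap size, the Gaussian peak positions, and the wrist-trajectory values are illustrative assumptions.

    # Minimal sketch (not the thesis implementation) of two techniques from the
    # abstract: cosine similarity between keypoint heatmaps, and polynomial
    # fitting of a keypoint coordinate over frames. All data here is synthetic.
    import numpy as np

    def heatmap_cosine_similarity(h1: np.ndarray, h2: np.ndarray) -> float:
        """Cosine similarity between two equally shaped keypoint heatmaps."""
        v1, v2 = h1.ravel(), h2.ravel()
        denom = np.linalg.norm(v1) * np.linalg.norm(v2)
        return float(v1 @ v2 / denom) if denom > 0 else 0.0

    def fit_trajectory(y: np.ndarray, degree: int = 3) -> np.ndarray:
        """Fit a polynomial to a keypoint coordinate over frame indices."""
        t = np.arange(len(y))
        return np.polyfit(t, y, degree)  # returns polynomial coefficients

    # Toy heatmaps: 64x64 Gaussians with slightly offset peaks, standing in
    # for a hypothetical wrist heatmap from two different shots.
    xs, ys = np.meshgrid(np.arange(64), np.arange(64))
    def gaussian(cx, cy, sigma=4.0):
        return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

    pull_heatmap = gaussian(30, 20)    # hypothetical: pull shot
    drive_heatmap = gaussian(34, 26)   # hypothetical: cover drive
    print("heatmap cosine similarity:",
          heatmap_cosine_similarity(pull_heatmap, drive_heatmap))

    # Toy trajectory: a noisy wrist-height curve over 30 frames.
    frames = 30
    rng = np.random.default_rng(0)
    wrist_y = 0.5 * np.arange(frames) ** 2 / frames + rng.normal(0, 0.3, frames)
    print("trajectory coefficients:", fit_trajectory(wrist_y))

A similarity near 1.0 would indicate that the model localizes the keypoint almost identically for both shots, while lower values point to where the shots diverge; the fitted coefficients give a compact, comparable description of each keypoint's motion.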

Files

Research_paper_4_.pdf
(pdf | 1.73 MB)
License info not available