Enhancing Diabetes Care through AI-Driven Lie Detection in a Diabetes Support System
Testing the validity of lie detection using an SVM model trained on linguistic cues
R.L.T. van Westerlaak (TU Delft - Electrical Engineering, Mathematics and Computer Science)
C.M. Jonker – Mentor (TU Delft - Interactive Intelligence)
A. Anand – Graduation committee member (TU Delft - Web Information Systems)
J.D. Top – Mentor (TU Delft - Interactive Intelligence)
Abstract
This paper presents a deception-detection module for a diabetes support system, addressing the challenge of unreliable patient self-reporting with the ultimate aim of improving diabetes care. The module was developed for CHIP, a system built by the Hybrid Intelligence project group and TNO. Linguistic cues such as motion verbs, negation terms, and exclusive words were identified through a literature study and encoded in custom dictionaries. Cue detection was implemented with the spaCy NLP library, which identifies and counts cue occurrences in patient messages. A stylometric machine learning approach was favored over LLMs for its explainability and scientific substantiation. An SVM model, chosen for its alignment with prior research (the Mafiascum experiment), was trained on annotated Mafia game data, using normalized cue frequencies as features. Although the SVM achieved high accuracy on truthful messages (F1 between 0.78–0.84), it performed poorly at detecting deception (F1 between 0.21–0.22), likely due to the class imbalance between truthful and deceptive messages in the training data. This weak deception recall, together with open questions about the model’s cross-domain transferability, suggests further work is needed, particularly with context-specific data and possible integration with LLM-based approaches.
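The pipeline described above can be sketched in a few lines. This is an illustrative reconstruction, not the thesis code: the cue dictionaries here are tiny stand-ins for the custom dictionaries, the labeled texts are invented toy data (the thesis uses annotated Mafia game data), and `spacy.blank("en")` is used for tokenization so no model download is needed. The `class_weight="balanced"` option is one standard way to address the truthful/deceptive imbalance noted in the abstract.

```python
import spacy
from sklearn.svm import SVC

nlp = spacy.blank("en")  # tokenizer only; no pretrained model required

# Toy cue dictionaries standing in for the thesis's custom dictionaries.
CUES = {
    "motion": {"go", "went", "walk", "walked", "move", "run"},
    "negation": {"not", "no", "never", "n't", "nothing"},
    "exclusive": {"but", "except", "without", "only"},
}

def cue_frequencies(text):
    """Count cue occurrences per category, normalized by token count."""
    tokens = [t.lower_ for t in nlp(text) if not t.is_punct]
    n = max(len(tokens), 1)
    return [sum(tok in words for tok in tokens) / n for words in CUES.values()]

# Invented toy data: 1 = deceptive, 0 = truthful (illustration only).
texts = [
    "I never said that , I was not even there",
    "We walked to the market and bought bread",
    "I only did what you asked , nothing more",
    "She went home after the meeting ended",
]
labels = [1, 0, 1, 0]

X = [cue_frequencies(t) for t in texts]
# class_weight="balanced" reweights the minority (deceptive) class.
clf = SVC(kernel="linear", class_weight="balanced").fit(X, labels)

print(cue_frequencies("I never went there"))  # → [0.25, 0.25, 0.0]
```

On real data the feature vectors would span many more cue categories, and the reported F1 gap (0.78–0.84 truthful vs. 0.21–0.22 deceptive) shows that reweighting alone does not resolve the imbalance.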