Time's Up!
Robust Watermarking in Large Language Models for Time Series Generation
N.J.I. van Schaik (TU Delft - Electrical Engineering, Mathematics and Computer Science)
Lydia Y. Chen – Mentor (TU Delft - Data-Intensive Systems)
C. Zhu – Mentor (TU Delft - Data-Intensive Systems)
J.M. Galjaard – Mentor (TU Delft - Data-Intensive Systems)
R. Hai – Graduation committee member (TU Delft - Web Information Systems)
Abstract
The advent of pretrained probabilistic time series foundation models has significantly advanced the field of time series forecasting. Despite these models' growing popularity, the application of watermarking techniques to them remains underexplored. This paper addresses this gap by benchmarking several widely used watermarking methods on time series models and by introducing a novel watermarking technique named HTW (Heads Tails Watermark). Unlike traditional probabilistic watermarking approaches, HTW uses a pseudo-random function to embed a signal directly into the numeric structure of the series, greatly enhancing its robustness against potential attacks. Comprehensive experiments and evaluations show that HTW retains 98.4% prediction accuracy on average, significantly outperforming conventional LLM watermarks. Furthermore, HTW demonstrates robust detectability, with an average z-score of 5.28 across various datasets and attack scenarios at a series length of 48. These findings establish HTW as a superior alternative for securing pretrained probabilistic time series foundation models.
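The abstract does not specify how HTW's pseudo-random function interacts with the numeric values, so the following is only an illustrative sketch of the general idea: keying a PRF (here HMAC-SHA256, an assumption) to each time step, nudging the parity of the last retained digit to carry the "heads"/"tails" signal, and detecting the watermark with a z-score over parity matches. All function names, the choice of parity encoding, and the key scheme are hypothetical, not the thesis's actual construction.

```python
import hmac
import hashlib
import math

def prf_bit(key: bytes, index: int) -> int:
    """Hypothetical PRF: maps (secret key, time-step index) to a heads/tails bit."""
    digest = hmac.new(key, index.to_bytes(8, "big"), hashlib.sha256).digest()
    return digest[0] & 1

def embed(series, key: bytes, decimals: int = 2):
    """Embed the signal in the numeric structure: force the parity of the
    last retained digit of each value to match the PRF bit for its index."""
    scale = 10 ** decimals
    out = []
    for i, x in enumerate(series):
        q = round(x * scale)
        if (q & 1) != prf_bit(key, i):
            q += 1  # minimal nudge (one unit in the last place) to flip parity
        out.append(q / scale)
    return out

def detect_z(series, key: bytes, decimals: int = 2) -> float:
    """z-score of parity matches against the 50% expected without a watermark."""
    scale = 10 ** decimals
    n = len(series)
    matches = sum((round(x * scale) & 1) == prf_bit(key, i)
                  for i, x in enumerate(series))
    return (matches - n / 2) / math.sqrt(n / 4)
```

Under this toy scheme, a clean watermarked series of length 48 matches the PRF on every step, giving a z-score of sqrt(48) ≈ 6.93, while each value is perturbed by at most one unit in the last digit, which is consistent with the abstract's combination of high detectability and high retained accuracy.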