RobustDA

Lightweight Robust Domain Adaptation for Evolving Data at Edge

Journal Article (2024)
Author(s)

Xinyu Guo (Beijing Institute of Technology)

Xiaojiang Zuo (Beijing Institute of Technology)

Rui Han (Beijing Institute of Technology)

Junyan Ouyang (Beijing Institute of Technology)

Jing Xie (Beijing Institute of Technology)

Chi Harold Liu (Beijing Institute of Technology)

Qinglong Zhang (Beijing Institute of Technology)

Ying Guo (Qilu University of Technology)

Jing Chen (Qilu University of Technology)

Lydia Y. Chen (TU Delft - Data-Intensive Systems)

Research Group
Data-Intensive Systems
DOI
https://doi.org/10.1109/JETCAS.2024.3478359
Publication Year
2024
Language
English
Issue number
4
Volume number
14
Pages (from-to)
688-704
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

AI applications powered by deep learning models are increasingly run natively at the edge. A deployed model not only encounters continuously evolving input distributions (domains) but also faces adversarial attacks from third parties. This necessitates adapting the model to shifting domains to maintain high natural accuracy, while avoiding degradation of the model's robust accuracy. However, existing domain adaptation and adversarial attack prevention techniques often have conflicting optimization objectives, and they rely on time-consuming training processes. This paper presents RobustDA, an on-device lightweight approach that co-optimizes natural and robust accuracies in model retraining. It uses a set of low-rank adapters to retain all learned domains' knowledge with small overheads. In each model retraining, RobustDA constructs an adapter to separate domain-related and robustness-related model parameters, avoiding conflicts in their updates. Based on the retained knowledge, it quickly generates adversarial examples with high-quality pseudo-labels and uses them to accelerate the retraining process. We demonstrate that, compared against 14 state-of-the-art DA techniques under 7 prevalent adversarial attacks on edge devices, the proposed co-optimization approach improves natural and robust accuracies by 6.34% and 11.41% simultaneously. At the same accuracy, RobustDA also speeds up the retraining process by 4.09x.
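The low-rank adapters mentioned in the abstract can be illustrated with a minimal LoRA-style sketch. This is an assumption-laden toy, not the paper's implementation: the class name, shapes, and zero initialization are all hypothetical, chosen only to show why a rank-r update stores far fewer parameters than a full weight matrix.

```python
import numpy as np

# Hypothetical low-rank adapter sketch: a frozen base weight W plus a
# trainable low-rank update B @ A, so adapting to a new domain stores only
# rank * (d_in + d_out) parameters instead of d_in * d_out.
class LowRankAdapter:
    def __init__(self, d_in, d_out, rank=2, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in))  # frozen base weights
        self.A = np.zeros((rank, d_in))              # trainable down-projection
        self.B = np.zeros((d_out, rank))             # trainable up-projection

    def forward(self, x):
        # Base output plus the low-rank, domain-specific correction.
        return self.W @ x + self.B @ (self.A @ x)

adapter = LowRankAdapter(d_in=8, d_out=4, rank=2)
x = np.ones(8)
# With A and B initialized to zero, the adapter leaves the base model unchanged.
assert np.allclose(adapter.forward(x), adapter.W @ x)
```

Keeping one such adapter per learned domain (rather than retraining `W`) is what makes retaining all domains' knowledge cheap in memory, which is the overhead argument the abstract makes.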

Files

RobustDA_Lightweight_Robust_Do... (pdf)
(pdf | 6.48 Mb)
- Embargo expired in 21-04-2025
License info not available