Zhao, Zhendong (author), Chen, Xiaojun (author), Xuan, Yuexin (author), Dong, Ye (author), Wang, Dakui (author), Liang, K. (author)
Backdoor attacks are a serious security threat to deep learning models. An adversary can provide users with a model trained on poisoned data and then manipulate its prediction behavior at test time through the backdoor. Backdoored models behave normally on clean images, yet can be activated to output incorrect predictions if the input is stamped...
conference paper 2022
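The abstract describes trigger-based data poisoning: a fraction of the training set is stamped with a trigger and relabelled so that the trained model misclassifies any trigger-stamped input. The following is a minimal, generic sketch of that idea, not the paper's specific method; the patch location, patch size, target label, and poison rate are illustrative assumptions.

```python
import numpy as np

def stamp_trigger(image, patch_size=3, patch_value=1.0):
    """Stamp a small square trigger in the bottom-right corner.

    The patch size, value, and position are illustrative choices,
    not taken from the paper.
    """
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_label, poison_rate=0.1, seed=0):
    """Return copies of (images, labels) in which a fraction of samples
    are trigger-stamped and relabelled to `target_label`
    (dirty-label poisoning)."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_label
    return images, labels

# Toy usage: 100 random grayscale 28x28 "images", 10 classes, target class 7.
images = np.random.rand(100, 28, 28)
labels = np.random.randint(0, 10, size=100)
poisoned_images, poisoned_labels = poison_dataset(images, labels, target_label=7)
```

A model trained on the poisoned set in place of the clean one would then behave normally on clean inputs but predict the target class whenever the trigger patch is present.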