Are BERT-based fact-checking models robust against adversarial attack?

Bachelor Thesis (2023)
Author(s)

E.E. Afriat (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Avishek Anand – Graduation committee member (Leibniz Universität)

Lijun Lyu – Mentor (L3S)

L. Corti – Mentor (TU Delft - Web Information Systems)

Related publication URL
https://github.com/somePersone/HotFlip-for-Expred
Publication Year
2023
Language
English
Graduation Date
03-02-2023
Awarding Institution
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Related content

HotFlip implementation

https://github.com/somePersone/HotFlip-for-Expred
Collections
thesis
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

We examine the vulnerability of BERT-based fact-checking models to adversarial attack. We implement a gradient-based adversarial attack strategy, based on HotFlip, that swaps individual tokens in the input, and apply it to a pre-trained ExPred model for fact-checking. We find that gradient-based adversarial attacks are ineffective against ExPred. However, uncertainty about the semantic similarity of the adversarial examples generated by our implementation casts doubt on this result.
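The core of a HotFlip-style attack is a first-order estimate of how much the loss changes when one input token's embedding is swapped for another. A minimal sketch of that scoring rule, with a toy vocabulary and a precomputed gradient standing in for the BERT-based ExPred model attacked in the thesis (the function name and the toy data are illustrative assumptions, not the thesis code):

```python
import numpy as np

def hotflip_best_swap(embedding_matrix, token_id, grad_wrt_embedding):
    """Pick the vocabulary token whose substitution is estimated to increase
    the loss the most, using the first-order score (e_new - e_old) . grad.
    This is the HotFlip scoring rule; the real attack computes grad via
    backpropagation through the target model."""
    e_old = embedding_matrix[token_id]
    # Estimated loss change for every candidate replacement token.
    scores = (embedding_matrix - e_old) @ grad_wrt_embedding
    scores[token_id] = -np.inf  # exclude swapping a token with itself
    return int(np.argmax(scores))

# Toy example: 4-word vocabulary with 3-dimensional embeddings.
E = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
# Assumed gradient of the loss w.r.t. token 0's embedding.
grad = np.array([0.0, 2.0, -1.0])
print(hotflip_best_swap(E, 0, grad))  # → 1
```

Because the score is only a linear approximation, a swap chosen this way may not actually raise the loss, and nothing constrains the replacement token to preserve meaning, which is the semantic-similarity concern raised in the abstract.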

Files

Main.pdf
(pdf | 0.308 Mb)
License info not available