Explainable artificial intelligence for intrusion detection in IoT networks

A deep learning based approach

Journal Article (2024)
Author(s)

Bhawana Sharma (Manipal University Jaipur)

Lokesh Sharma (Manipal University Jaipur)

Chhagan Lal (TU Delft - Cyber Security)

Satyabrata Roy (Manipal University Jaipur)

Research Group
Cyber Security
Copyright
© 2024 Bhawana Sharma, Lokesh Sharma, C. Lal, Satyabrata Roy
DOI related publication
https://doi.org/10.1016/j.eswa.2023.121751
Publication Year
2024
Language
English
Bibliographical Note
Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project (https://www.openaccess.nl/en/you-share-we-take-care). Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses Dutch legislation to make this work public.
Volume number
238
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

The Internet of Things (IoT) is currently seeing tremendous growth driven by new technologies and big data, and security research in this field is an emerging topic. As the number of devices and the volume of data they produce grow, IoT networks become more vulnerable to new attacks, so an intrusion detection system is required to recognize them. In this work, we propose a Deep Learning (DL) approach to intrusion detection that classifies the various attacks in a dataset. We used a filter-based approach to select the most important features and reduce the feature dimensionality, and we built two different deep learning models for intrusion detection. For model training and testing, we used two publicly available datasets, NSL-KDD and UNSW-NB15. We first applied each dataset to a Deep Neural Network (DNN) model and then to a Convolutional Neural Network (CNN) model; the DL models achieved high accuracy on both datasets. Because DL models are opaque and difficult to interpret, we applied the idea of explainable Artificial Intelligence (AI) to provide a model explanation. To increase confidence in the DNN model, we applied the explainable AI (XAI) method Local Interpretable Model-agnostic Explanations (LIME), and for better understanding, we also applied Shapley Additive Explanations (SHAP).
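As a rough illustration of the pipeline the abstract describes (filter-based feature selection followed by a DNN classifier), the sketch below uses mutual information as the filter criterion and a small fully connected network in Keras. The filter method, layer sizes, feature count, and hyperparameters are assumptions for illustration, not the authors' published configuration.

# Hypothetical sketch: filter-based feature selection + DNN classifier.
# The scoring function, k, and network architecture are assumptions.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.preprocessing import StandardScaler
from tensorflow import keras

def build_pipeline(X_train, y_train, n_classes, k_features=20):
    # Filter-based feature selection: rank features by mutual
    # information with the attack label and keep the top k.
    selector = SelectKBest(mutual_info_classif, k=k_features)
    X_sel = selector.fit_transform(X_train, y_train)

    # Standardize the retained features before training.
    scaler = StandardScaler()
    X_std = scaler.fit_transform(X_sel)

    # A small fully connected DNN; depth and widths are illustrative.
    model = keras.Sequential([
        keras.layers.Dense(128, activation="relu",
                           input_shape=(k_features,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dropout(0.2),
        keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X_std, y_train, epochs=10, batch_size=256, verbose=0)
    return selector, scaler, model

The same function would be called once per dataset (NSL-KDD or UNSW-NB15) after the usual categorical encoding of protocol, service, and flag fields.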
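For the XAI step, the abstract names LIME and SHAP applied to the trained DNN. A minimal sketch is shown below, assuming the model and standardized training data from the previous snippet; the background-sample size, number of reported features, and the choice of KernelExplainer are assumptions, as the exact explainer settings are not given here.

# Hedged sketch: explaining one DNN prediction with LIME and SHAP.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer

def explain_instance(model, X_train_std, x, feature_names, class_names):
    # LIME: fit a local surrogate around the single instance x and
    # report the features that drive the DNN's prediction there.
    lime_explainer = LimeTabularExplainer(
        X_train_std, feature_names=feature_names,
        class_names=class_names, mode="classification")
    lime_exp = lime_explainer.explain_instance(
        x, model.predict, num_features=10)
    print(lime_exp.as_list())

    # SHAP: estimate Shapley values against a background sample to get
    # additive per-feature attributions for the same prediction.
    background = X_train_std[
        np.random.choice(len(X_train_std), 100, replace=False)]
    shap_explainer = shap.KernelExplainer(model.predict, background)
    shap_values = shap_explainer.shap_values(x.reshape(1, -1))
    return lime_exp, shap_values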

Files

1_s2.0_S0957417423022534_main.... (pdf)
(pdf | 4.49 MB)
- Embargo expired on 25-03-2024
License info not available