Explainable artificial intelligence for intrusion detection in IoT networks

A deep learning-based approach


Abstract

The Internet of Things (IoT) is experiencing tremendous growth driven by new technologies and big data, and research in IoT security is an emerging topic. As the number of devices and the volume of data they produce grow, IoT networks become more vulnerable to new attacks, so an intrusion detection system is required to recognize them. In this work, we proposed a Deep Learning (DL) approach for intrusion detection that classifies the various attacks in a dataset. We used a filter-based method to select the most important features and reduce their number, and we built two different DL models for intrusion detection. For model training and testing, we used two publicly available datasets, NSL-KDD and UNSW-NB15. We first trained a Deep Neural Network (DNN) model on each dataset and then a Convolutional Neural Network (CNN) model on the same data. The DL models achieved high accuracy on both datasets. Because DL models are opaque and difficult to interpret, we applied the idea of explainable Artificial Intelligence (AI) to provide a model explanation. To increase confidence in the DNN model, we applied the explainable AI (XAI) method Local Interpretable Model-agnostic Explanations (LIME), and for better understanding we also applied Shapley Additive Explanations (SHAP).
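
As a rough, illustrative sketch of the workflow the abstract describes (filter-based feature selection, a DNN classifier, then LIME and SHAP explanations), the Python snippet below strings together scikit-learn, Keras, the lime package, and shap. The synthetic data, the k=20 feature count, the layer sizes, and the training settings are placeholders chosen for illustration only; they are not the paper's actual datasets, architecture, or results.

```python
# Sketch of the described pipeline: filter-based feature selection,
# a DNN classifier, then LIME and SHAP explanations.
# Synthetic data stands in for NSL-KDD / UNSW-NB15; all hyperparameters
# below are illustrative placeholders, not the paper's settings.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow import keras
import lime.lime_tabular
import shap

# Placeholder tabular data (a real experiment would load NSL-KDD / UNSW-NB15 features).
X, y = make_classification(n_samples=2000, n_features=40, n_informative=15,
                           n_classes=2, random_state=0)

# Filter-based feature selection: keep the k highest-scoring features (ANOVA F-test).
selector = SelectKBest(score_func=f_classif, k=20)
X_sel = selector.fit_transform(X, y)
feature_names = [f"f{i}" for i in np.flatnonzero(selector.get_support())]

X_train, X_test, y_train, y_test = train_test_split(
    X_sel, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Simple fully connected DNN for binary normal/attack classification.
model = keras.Sequential([
    keras.layers.Input(shape=(X_train.shape[1],)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=5, batch_size=64, verbose=0)

# LIME expects per-class probabilities, so wrap the single sigmoid output.
def predict_proba(x):
    p = model.predict(x, verbose=0).ravel()
    return np.column_stack([1 - p, p])

lime_explainer = lime.lime_tabular.LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["normal", "attack"], mode="classification")
lime_exp = lime_explainer.explain_instance(X_test[0], predict_proba, num_features=10)
print(lime_exp.as_list())           # local feature contributions for one prediction

# SHAP (model-agnostic KernelExplainer) over a small background sample.
background = shap.sample(X_train, 100)
shap_explainer = shap.KernelExplainer(predict_proba, background)
shap_values = shap_explainer.shap_values(X_test[:5])
print(np.array(shap_values).shape)  # per-class attributions for five test rows
```

LIME explains one prediction at a time by fitting a local surrogate model around it, while SHAP assigns each selected feature an additive contribution; together they indicate which traffic features drive the DNN's attack/normal decisions.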