RRAM-based fault-tolerant Binary Neural Networks

Abstract

Computation-In-Memory (CIM) employing Resistive-RAM (RRAM)-based crossbar arrays is a promising solution for implementing Neural Networks (NNs) in hardware that is efficient in terms of energy, memory, computational resources, and computation time. In this respect, Binary NNs (BNNs), whose weights take single binary values, are inherently suitable for cost-effective CIM-based NN implementations. However, variability and reliability issues in RRAM devices restrict the applicability of CIM-based NNs. To address this issue and move towards a low-cost NN hardware realization, in this thesis we: a) thoroughly investigate the impact of RRAM faults on the inference accuracy of RRAM-based BNNs, and b) propose three complementary fault-tolerance techniques to mitigate the impact of RRAM faults on the BNN's accuracy. These techniques are: a) a fault-tolerant activation function, b) a redundancy and weight-range adjustment scheme, and c) a retraining technique. Evaluation results on the MNIST, Fashion-MNIST, and CIFAR-10 datasets demonstrate that the proposed techniques can improve the inference accuracy in the presence of RRAM faults by up to 20%, 40%, and 80%, respectively. Moreover, comparisons with related state-of-the-art fault-tolerance frameworks indicate that the proposed techniques yield competitive results.
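
For concreteness, the kind of fault-injection experiment described above can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the thesis's evaluation setup: it assumes a single fully connected BNN layer with +1/-1 weights mapped one-to-one onto RRAM devices, models faulty devices as stuck at one of the two weight values, and measures how many sign activations flip as the fault rate grows. The names inject_stuck_at and bnn_layer are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    # Binary weights (+1/-1) of one fully connected BNN layer, as they
    # would be mapped onto an RRAM crossbar (one device state per weight).
    n_in, n_out = 256, 64
    W = rng.choice([-1.0, 1.0], size=(n_in, n_out))

    def inject_stuck_at(W, fault_rate, stuck_value, rng):
        # Force a random fraction of weights to a fixed value, emulating
        # stuck-at RRAM devices (e.g. stuck at the low-conductance state).
        Wf = W.copy()
        mask = rng.random(W.shape) < fault_rate
        Wf[mask] = stuck_value
        return Wf

    def bnn_layer(x, W):
        # Binary layer: matrix multiply followed by a sign activation,
        # with sign(0) mapped to +1.
        z = x @ W
        return np.where(z >= 0, 1.0, -1.0)

    x = rng.choice([-1.0, 1.0], size=(100, n_in))  # batch of binary inputs
    y_ref = bnn_layer(x, W)                        # fault-free reference

    for rate in (0.01, 0.05, 0.10):
        Wf = inject_stuck_at(W, rate, stuck_value=-1.0, rng=rng)
        y_faulty = bnn_layer(x, Wf)
        match = np.mean(y_faulty == y_ref)
        print(f"fault rate {rate:.0%}: {match:.1%} of activations unchanged")

As the fault rate grows, a larger fraction of activations flips sign; this is the accuracy degradation that the three proposed fault-tolerance techniques are designed to counteract.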