RRAM-based fault-tolerant Binary Neural Networks

Master Thesis (2021)
Author(s)

A. Zografou (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Said Hamdioui – Mentor (TU Delft - Quantum & Computer Engineering)

Rajendra Bishnoi – Graduation committee member (TU Delft - Computer Engineering)

Anteneh Gebregiorgis – Graduation committee member (TU Delft - Computer Engineering)

René Leuken – Graduation committee member (TU Delft - Signal Processing Systems)

Faculty
Electrical Engineering, Mathematics and Computer Science
Copyright
© 2021 Artemis Zografou
Publication Year
2021
Language
English
Graduation Date
27-07-2021
Awarding Institution
Delft University of Technology
Programme
Computer Engineering
Related content

Github repository for the thesis code.

https://github.com/artezg/Fault-Tolerant-RRAM-BNN.git
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Computation-In-Memory (CIM) employing Resistive-RAM (RRAM)-based crossbar arrays is a promising approach to implementing Neural Networks (NNs) in hardware that is efficient with respect to energy consumption, memory, computational resources, and computation time. In this respect, Binary NNs (BNNs), whose weights take single-bit binary values, are inherently suited to cost-effective CIM-based NN implementations. However, variability and reliability issues of RRAM devices limit the applicability of CIM-based NNs. To address this issue and move towards a low-cost NN hardware realization, in this thesis we: a) thoroughly investigate the impact of RRAM faults on the inference accuracy of RRAM-based BNNs, and b) propose three complementary fault-tolerance techniques to mitigate the impact of RRAM faults on BNN accuracy, namely: a) a fault-tolerant activation function, b) a redundancy and weight-range adjustment scheme, and c) a retraining technique. Evaluation results on the MNIST, Fashion-MNIST, and CIFAR-10 datasets demonstrate that the proposed techniques can improve inference accuracy in the presence of RRAM faults by up to 20%, 40%, and 80%, respectively. Moreover, comparisons with related state-of-the-art fault-tolerance frameworks indicate that the proposed techniques yield competitive results.
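The abstract describes studying how RRAM faults perturb binarized weights and thereby BNN inference accuracy. As a minimal illustrative sketch (not the thesis implementation — that lives in the linked GitHub repository), the following models one common RRAM fault type, a stuck-at cell, on sign-binarized weights and shows how it shifts a BNN-style binary dot-product pre-activation. The function names and the simple fault model are assumptions for illustration only.

```python
import random

def binarize(w):
    """Binarize a real-valued weight to +1/-1 via the sign function,
    as done when mapping BNN weights onto binary RRAM cells."""
    return 1 if w >= 0 else -1

def inject_stuck_at(weights, fault_rate, stuck_value, rng):
    """Assumed fault model: each cell independently becomes stuck at a
    fixed value (e.g. the low- or high-resistance state) with
    probability fault_rate."""
    return [stuck_value if rng.random() < fault_rate else w
            for w in weights]

def binary_dot(weights, inputs):
    """Accumulation of +1/-1 products, the arithmetic a BNN crossbar
    column performs before the activation function."""
    return sum(w * x for w, x in zip(weights, inputs))

rng = random.Random(0)
real_weights = [rng.uniform(-1.0, 1.0) for _ in range(256)]
w_bin = [binarize(w) for w in real_weights]
x = [rng.choice([-1, 1]) for _ in range(256)]

clean = binary_dot(w_bin, x)
# 10% of cells stuck at +1: the pre-activation drifts, which is what
# fault-tolerant activation functions and redundancy aim to absorb.
w_faulty = inject_stuck_at(w_bin, 0.10, +1, rng)
faulty = binary_dot(w_faulty, x)
print(clean, faulty)
```

Sweeping `fault_rate` in such a simulation is one simple way to reproduce the kind of accuracy-versus-fault-rate analysis the abstract refers to.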

Files

License info not available