Incremental Hierarchical Learning using Radial Basis Function for Taxonomy-based Data

A Transfer Learning Implementation

Abstract

Significant work has been done in the field of computer vision on learning and clustering methods. Improved learning methods have paved the way for researchers to explore various theories for improving existing approaches. One such method is hierarchical learning, which has shown impressive benefits and performance over traditional sequential learning approaches. In general, machine learning models require a large amount of data for every new scenario, which is not always available and, when it is, is often expensive to obtain. Transfer learning, which focuses on transferring knowledge across trained machine learning models, is a promising methodology for addressing this problem. In this thesis, we propose an end-to-end neural network architecture on the NM500 neuromorphic chip using an incremental hierarchical learning approach. We first design a hierarchical representation of a taxonomy, develop a batch of pre-classifiers, and use their output to construct a custom feature vector that serves as input to the front-end network, which learns the taxonomy. In other words, the taxonomy is embedded in the clustering method rather than trained by a backpropagation algorithm. The custom feature vector is structured to accurately incorporate the taxonomy based on the Manhattan distance norm. This structure has been proven mathematically and validated experimentally. A Radial Basis Function (RBF) is used for learning, and a combination of RBF and K-Nearest Neighbors (KNN) for classification. The applicability of the proposed framework is demonstrated on a road sign classification problem represented as a taxonomy. The ability of the framework to incrementally learn new categories and update the taxonomy online is also shown. Lastly, we show a case of transfer learning where the entire back-end network is used as a starting point to learn new features without significantly forgetting prior knowledge.
This transfer learning framework achieved accuracy comparable to the standard learning method while using significantly less labelled data. This work paves the way for researchers to develop transfer learning frameworks and, more importantly, to explore neuromorphic hardware for machine learning tasks.
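The idea of a feature vector whose Manhattan distance reflects the taxonomy can be illustrated with a minimal sketch. The two-level road-sign taxonomy, the level weights, and all function names below are hypothetical, chosen only to show how weighted one-hot segments (one per taxonomy level) make the L1 distance between siblings smaller than the distance across superclasses, and how an NM500-style RBF unit could then fire when that distance falls inside its influence field; the thesis's actual encoding may differ.

```python
import numpy as np

# Hypothetical two-level road-sign taxonomy (illustrative names only).
LEVEL1 = ["warning", "prohibitory"]                 # superclasses
LEVEL2 = {"warning": ["curve", "bump"],
          "prohibitory": ["no_entry", "speed_limit"]}

# Per-level weights: a mismatch higher in the tree must cost more
# under the Manhattan (L1) norm than any mismatch below it.
W1, W2 = 4.0, 1.0

def encode(sup, sub):
    """Concatenate weighted one-hot segments, one per taxonomy level."""
    v1 = np.zeros(len(LEVEL1))
    v1[LEVEL1.index(sup)] = W1
    subs = sorted(s for lst in LEVEL2.values() for s in lst)
    v2 = np.zeros(len(subs))
    v2[subs.index(sub)] = W2
    return np.concatenate([v1, v2])

def l1(a, b):
    """Manhattan (L1) distance between two encoded vectors."""
    return float(np.abs(a - b).sum())

def rbf_match(prototype, x, radius):
    """Sketch of an RBF unit: fires when the L1 distance to the stored
    prototype lies inside its influence field (assumed NM500-like rule)."""
    return l1(prototype, x) < radius

a = encode("warning", "curve")
b = encode("warning", "bump")          # sibling: same superclass
c = encode("prohibitory", "no_entry")  # different superclass

# Siblings differ only in the level-2 segment: 2 * W2 = 2.0.
# Cross-superclass pairs differ in both segments: 2 * W1 + 2 * W2 = 10.0.
print(l1(a, b))  # 2.0
print(l1(a, c))  # 10.0

# An RBF unit storing prototype `a` with radius 3.0 accepts the sibling
# but rejects the cross-superclass sample.
print(rbf_match(a, b, 3.0), rbf_match(a, c, 3.0))  # True False
```

With such an encoding, any clustering or nearest-neighbour rule operating under the L1 norm automatically respects the hierarchy, since confusions between siblings are cheaper than confusions across superclasses.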