Efficient Neural Network Architecture Search

Master Thesis (2019)
Author(s)

M. YANG (TU Delft - Mechanical Engineering)

Contributor(s)

W. Pan – Mentor (TU Delft - Robot Dynamics)

H. Zhou – Mentor (TU Delft - Robot Dynamics)

D.M. Gavrila – Graduation committee member (TU Delft - Intelligent Vehicles)

Raf Van de Plas – Graduation committee member (TU Delft - Team Raf Van de Plas)

Faculty
Mechanical Engineering
Copyright
© 2019 MINGHAO YANG
Publication Year
2019
Language
English
Graduation Date
05-07-2019
Awarding Institution
Delft University of Technology
Programme
Mechanical Engineering | Vehicle Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

One-Shot Neural Architecture Search (NAS) is a promising method that significantly reduces search time by avoiding separate training of each candidate architecture. It can be treated as a network compression problem on the architecture parameters of an over-parameterized network. However, most one-shot NAS methods suffer from two issues. First, dependencies between a node and its predecessors and successors are often disregarded, which results in improper treatment of zero operations. Second, pruning architecture parameters based on their magnitude is questionable. In this thesis, a classic Bayesian learning approach is applied to alleviate these two issues. Unlike other NAS methods, we train the over-parameterized network for only one epoch before updating the network architecture. Impressively, this enables us to find the optimal architecture in both proxy and proxyless tasks on CIFAR-10 within only 0.2 GPU days on a single GPU. As a byproduct, our approach transfers directly to convolutional neural network compression by enforcing structural sparsity, achieving extremely sparse networks without accuracy deterioration.
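To make the one-shot setup concrete, the following is a minimal sketch (not the thesis's Bayesian method) of the common formulation the abstract refers to: each edge of the over-parameterized network computes a softmax-weighted mixture of candidate operations, and magnitude-based pruning, the practice the thesis questions, keeps only the operation with the largest architecture parameter. The operation list and parameter values here are hypothetical, for illustration only.

```python
import numpy as np

def softmax(a):
    """Numerically stable softmax over architecture parameters."""
    e = np.exp(a - a.max())
    return e / e.sum()

# Hypothetical candidate operations on one edge: identity, a scaling op,
# and the 'zero' operation whose treatment the thesis discusses.
ops = [lambda x: x, lambda x: 2.0 * x, lambda x: np.zeros_like(x)]

def mixed_op(x, alpha):
    """One-shot relaxation: weighted sum of all candidate operations."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, ops))

def prune_by_magnitude(alpha):
    """Magnitude-based pruning: keep the op with the largest parameter."""
    return int(np.argmax(alpha))

x = np.ones(4)
alpha = np.array([0.1, 1.5, -0.3])  # learnable architecture parameters
y = mixed_op(x, alpha)              # output of the over-parameterized edge
kept = prune_by_magnitude(alpha)    # index of the single retained op
```

Magnitude pruning discards the mixture weights of all but one operation regardless of how the remaining operations interact with the node's predecessors and successors, which is exactly the behavior the thesis's Bayesian treatment is designed to improve on.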

Files

Master_Thesis.pdf
(pdf | 2.37 Mb)
License info not available