FlexHH: A flexible hardware library for Hodgkin-Huxley-based neural simulations
R.D. Miedema (TU Delft - Electrical Engineering, Mathematics and Computer Science)
Z. Al-Ars – Mentor (TU Delft - Computer Engineering)
C. Strydis – Mentor (TU Delft - Bio-Electronics)
Matthias Möller – Graduation committee member (TU Delft - Numerical Analysis)
G. Smaragdos – Graduation committee member (Erasmus MC)
Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.
Abstract
In the field of computational neuroscience, complex mathematical models are used to replicate brain behavior with the goal of understanding the underlying biological processes. The simulation of such models is computationally expensive, and in recent years high-performance computing systems have therefore been identified as a possible way to accelerate their execution. However, most of these implementations are model-specific and thus non-reusable for other modeling efforts, requiring a completely new development effort per model. The challenge lies in offering high-performance, scalable libraries (so as to support the construction and simulation of large-scale brain models) while at the same time offering high degrees of modeling flexibility and parameterization. This thesis presents flexHH, a scalable hardware library implementing five accelerated and highly parameterizable instances of the Hodgkin-Huxley neuron model, one of the most widely used biophysically meaningful neuron representations. As a result, users can instantiate custom models with flexHH and immediately take advantage of the acceleration without the mediation of a hardware engineer. The five flexHH implementations target the Maxeler Data-Flow Engine (DFE), an FPGA-based acceleration platform, and incrementally support features such as custom ion channels, multiple cell compartments, and inter-neuron gap-junction connectivity. Furthermore, each of the five implementations can be configured to use the forward-Euler, second-order, or third-order Runge-Kutta numerical method. A speedup of 14x–36x is achieved compared to a sequential C implementation running on a 2.5-GHz Intel Core-i7 CPU, while no practical performance drop is observed compared to a hard-coded DFE implementation, an Intel Xeon-Phi, or an NVIDIA Titan X GPU.
In this thesis, the flexHH kernels are rigorously validated, the influence of the numerical methods on accuracy is evaluated, and a comprehensive resource-usage, performance, and power-consumption evaluation of the various DFE implementations is presented.