Automated malaria diagnosis using convolutional neural networks in an on-field setting

The analysis of low quality smartphone based microscope images

Master Thesis (2018)
Author(s)

R. Sorgedrager (TU Delft - Mechanical Engineering)

Contributor(s)

M.H.G. Verhaegen – Mentor

Temitope Agbana – Graduation committee member

Laurens Bliek – Graduation committee member

Gleb Vdovin – Graduation committee member

Simone Baldi – Graduation committee member

J. Kober – Graduation committee member

Publication Year
2018
Language
English
Copyright
© 2018 Riemer Sorgedrager
Graduation Date
23-01-2018
Awarding Institution
Delft University of Technology
Project
OSMD
Programme
Mechanical Engineering | Systems and Control
Faculty
Mechanical Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

This study focuses on automated malaria diagnosis in low-quality blood smear images captured by a low-cost smartphone-based microscope system. The aim is to localize and classify healthy and infected erythrocytes (red blood cells) in order to evaluate the parasitaemia of an infected blood smear. Because the smartphone microscope system produces lower-quality images than traditional high-end light microscopes, conventional algorithms fail to process these images. We propose a framework that uses a convolutional neural network as a pixel classifier to localize the erythrocytes, after which a second convolutional neural network, acting as an object classifier, labels each localized cell as healthy or infected. Such a system can offer in-the-field malaria diagnosis without human intervention, or can act as an aid for human experts to lower workload and increase diagnostic accuracy. The algorithm successfully localizes the erythrocytes with an average sensitivity of 97.31% and a precision of 92.21%. Classification performed inadequately, showing low agreement with two human experts; this may be due to the low image quality or the small amount of training data available at the time.
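The abstract describes a two-stage pipeline: a patch-based CNN that decides per pixel whether it belongs to an erythrocyte (localization), followed by a CNN that classifies each cropped cell as healthy or infected, from which the parasitaemia (infected cells divided by total cells) can be computed. The thesis record does not include code; the sketch below is only an illustration of that structure, assuming a Keras/TensorFlow implementation with hypothetical patch sizes, crop sizes, and layer dimensions that are not taken from the thesis.

# Minimal sketch (not the author's code) of the two-stage pipeline described in the
# abstract: a patch-based CNN labelling the centre pixel of a small patch as
# erythrocyte vs. background, and a second CNN labelling cropped cells as healthy
# vs. infected. All sizes and layer choices are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

PATCH = 25   # assumed input patch size for the pixel classifier (pixels)
CELL = 48    # assumed cell-crop size for the object classifier (pixels)

def build_pixel_classifier():
    """CNN that labels the centre pixel of an RGB patch as cell / background."""
    return models.Sequential([
        layers.Input((PATCH, PATCH, 3)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # P(pixel belongs to an erythrocyte)
    ])

def build_object_classifier():
    """CNN that labels a cropped erythrocyte as healthy (0) or infected (1)."""
    return models.Sequential([
        layers.Input((CELL, CELL, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # P(cell is infected)
    ])

if __name__ == "__main__":
    pixel_net = build_pixel_classifier()
    object_net = build_object_classifier()
    pixel_net.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    object_net.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # Dummy inputs only demonstrate the tensor shapes flowing through each stage.
    patches = np.random.rand(8, PATCH, PATCH, 3).astype("float32")
    cells = np.random.rand(8, CELL, CELL, 3).astype("float32")
    pixel_probs = pixel_net.predict(patches, verbose=0)   # shape (8, 1)
    infect_probs = object_net.predict(cells, verbose=0)   # shape (8, 1)

    # Parasitaemia estimate over the classified cells (threshold at 0.5).
    parasitaemia = float(np.mean(infect_probs > 0.5))
    print(pixel_probs.shape, infect_probs.shape, parasitaemia)

In such a pipeline the pixel classifier would be slid over the smear image to produce a probability map whose peaks give the cell locations; those locations are then cropped and passed to the object classifier.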

Files

Appendix_A.pdf (pdf | 2.15 Mb)
License info not available