Tuberculosis Classification Using Deep Learning and FPGA Inferencing

Authors

  • Fazrul Faiz Zakaria Micro System Technology, Centre of Excellence (CoE), Universiti Malaysia Perlis (UniMAP), Perlis, Malaysia
  • Asral Bahari Jambek Micro System Technology, Centre of Excellence (CoE), Universiti Malaysia Perlis (UniMAP), Perlis, Malaysia
  • Norfadila Mahrom Sports Engineering Research Centre, Centre of Excellence (SERC), Universiti Malaysia Perlis, Perlis, Malaysia
  • Rafikha Aliana A Raof Sports Engineering Research Centre, Centre of Excellence (SERC), Universiti Malaysia Perlis, Perlis, Malaysia
  • Mohd Nazri Mohd Warip Advanced Computing, Centre of Excellence, Universiti Malaysia Perlis (UniMAP), Perlis, Malaysia
  • Phak Len Al Eh Kan Advanced Computing, Centre of Excellence, Universiti Malaysia Perlis (UniMAP), Perlis, Malaysia
  • Muslim Mustapa Programmable Solutions Group, Intel Malaysia

DOI:

https://doi.org/10.37934/araset.29.3.105114

Keywords:

Tuberculosis classification, Deep-learning, FPGA inferencing

Abstract

Tuberculosis (TB), a chronic lung illness caused by bacterial infection, is among the top ten leading causes of mortality. Owing to its efficiency and performance, this work adopts deep learning with an FPGA as an inference accelerator. However, given the vast amount of data collected for medical diagnosis, the average inference speed of conventional platforms is inadequate; in this scenario, the FPGA accelerates the deep learning inference process, enabling real-time, low-latency deployment of TB classification. This paper summarizes the findings of deploying the model across various computing devices for deep learning inference with an FPGA. The study includes a model performance evaluation and a throughput and latency comparison across different batch sizes, to estimate the expected delay in real-world deployment. The results conclude that the FPGA is the most suitable deep learning inference accelerator, offering a high throughput-to-latency ratio and fast parallel inference. FPGA inferencing demonstrated a 21.8% increase in throughput while maintaining 31% lower latency than GPU inferencing and 6x higher energy efficiency. The proposed inferencing also delivered over 90% accuracy and selectivity in detecting and localizing TB.
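The throughput and latency comparison across batch sizes described in the abstract can be sketched with a minimal benchmarking harness. This is an illustrative sketch only, not the authors' actual methodology: `dummy_infer` is a hypothetical stand-in for a compiled model call on the target device (CPU, GPU, or FPGA), and the batch sizes are assumed values.

```python
import time

def benchmark(infer, images, batch_size):
    """Run inference over all images in fixed-size batches and report
    average latency (seconds per batch) and throughput (images per second)."""
    batches = [images[i:i + batch_size] for i in range(0, len(images), batch_size)]
    start = time.perf_counter()
    for batch in batches:
        infer(batch)
    elapsed = time.perf_counter() - start
    latency = elapsed / len(batches)
    throughput = len(images) / elapsed
    return latency, throughput

# Hypothetical placeholder: replace with the real inference call
# (e.g. a compiled deep learning model deployed to the accelerator).
def dummy_infer(batch):
    return [0 for _ in batch]

if __name__ == "__main__":
    images = list(range(64))  # stand-in for preprocessed chest X-ray tensors
    for bs in (1, 4, 16):
        lat, thr = benchmark(dummy_infer, images, bs)
        print(f"batch={bs:2d}  latency={lat * 1e3:.3f} ms/batch  throughput={thr:.0f} img/s")
```

Larger batches typically raise throughput at the cost of per-batch latency, which is why the study reports both metrics and a throughput-to-latency ratio rather than either number alone.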

Author Biographies

Fazrul Faiz Zakaria, Micro System Technology, Centre of Excellence (CoE), Universiti Malaysia Perlis (UniMAP), Perlis, Malaysia

ffaiz@unimap.edu.my

Asral Bahari Jambek, Micro System Technology, Centre of Excellence (CoE), Universiti Malaysia Perlis (UniMAP), Perlis, Malaysia

asral@unimap.edu.my

Norfadila Mahrom, Sports Engineering Research Centre, Centre of Excellence (SERC), Universiti Malaysia Perlis, Perlis, Malaysia

norfadila@unimap.edu.my

Rafikha Aliana A Raof, Sports Engineering Research Centre, Centre of Excellence (SERC), Universiti Malaysia Perlis, Perlis, Malaysia

rafikha@unimap.edu.my

Mohd Nazri Mohd Warip, Advanced Computing, Centre of Excellence, Universiti Malaysia Perlis (UniMAP), Perlis, Malaysia

nazriwarip@unimap.edu.my

Phak Len Al Eh Kan, Advanced Computing, Centre of Excellence, Universiti Malaysia Perlis (UniMAP), Perlis, Malaysia

phaklen@unimap.edu.my

Muslim Mustapa, Programmable Solutions Group, Intel Malaysia

muslim.mustapa@intel.com

Published

2023-02-09

Section

Articles
