Study of VGG-19 depth in transfer learning for COVID-19 X-Ray image classification

Bibliographic Details
Main Authors: Hamad, Qusay Shihab, Samma, Hussein, Suandi, Shahrel Azmin, Mohamad Saleh, Junita
Format: Book Section
Published: Springer Science and Business Media Deutschland GmbH 2022
Online Access: http://eprints.utm.my/id/eprint/100575/
http://dx.doi.org/10.1007/978-981-16-8129-5_142
Description
Summary: The modern era depends heavily on Deep Learning (DL) in many applications. Medical image diagnosis is one of the most important of these fields because it is directly related to human life, but DL requires large datasets as well as powerful computing resources. At the beginning of 2020, the world faced a new pandemic called COVID-19. Because the disease was new, reliable datasets were scarce, as is common during a running pandemic. One of the best ways to mitigate this shortage is to take advantage of Deep Transfer Learning (DTL): a model learns from one task and can then be applied to another task with a much smaller dataset. This paper examines the application of a transferred VGG-19 to the problem of COVID-19 detection from chest X-rays. Different depth scenarios of VGG-19 were examined, including a shallow model, a medium model, and a deep model. The main advantages of this work are twofold: COVID-19 patients can be detected with a small dataset, and the complexity of VGG-19 can be reduced by reducing the number of layers, which consequently reduces the training time. To assess the performance of these architectures, 2159 chest X-ray images were employed. The reported results indicate that the best recognition rate was achieved by the shallow model with 95% accuracy, while the medium and deep models obtained 94% and 75%, respectively.
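
As a rough illustration of the depth-truncation idea described in the summary, the sketch below builds VGG-19 variants cut at different convolutional blocks and fine-tuned with a small classifier head. The framework (Keras), the specific cut layers, and the head architecture are assumptions made for illustration; they are not the authors' exact configuration.

# Minimal sketch (assumed Keras implementation): truncate a pre-trained VGG-19
# at different depths for transfer learning on chest X-ray images.
# Cut points and classifier head are illustrative, not the paper's exact setup.
import tensorflow as tf
from tensorflow.keras.applications import VGG19
from tensorflow.keras import layers, models

def build_truncated_vgg19(cut_layer="block3_pool", num_classes=2, input_shape=(224, 224, 3)):
    # Load ImageNet-pretrained VGG-19 without its original classifier.
    base = VGG19(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False  # reuse transferred features; train only the new head
    # Keep layers only up to the chosen cut point (shallower = fewer layers).
    truncated = models.Model(inputs=base.input, outputs=base.get_layer(cut_layer).output)
    model = models.Sequential([
        truncated,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# Hypothetical "shallow", "medium", and "deep" variants at different cut points.
shallow_model = build_truncated_vgg19(cut_layer="block2_pool")
medium_model = build_truncated_vgg19(cut_layer="block4_pool")
deep_model = build_truncated_vgg19(cut_layer="block5_pool")

Cutting earlier in the network reduces the parameter count and training time, which is the trade-off the paper explores against recognition accuracy.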