Pre-Trained Convolutional Neural Network (CNN) Models for COVID-19 Classification Using COVID-19 Radiography Dataset


Bibliographic Details
Main Authors: Zanariah Zainudin; Nurul Syafidah Jamil; Nur Amalina Mat Jan; Noraini Ibrahim; Tey Chee Chieh; Liyana Adilla Burhanuddin; Ahmad Hakimi Ahmad Sa'ahiry
Format: Conference or Workshop Item
Language:en
Published: IEEE 2025
Subjects:
Online Access:https://umpir.ump.edu.my/id/eprint/47307/1/Pre-Trained%20Convolutional%20Neural%20Network%20CNN%20Models%20for%20COVID-19%20Classification%20Using%20Covid-19%20Radiography%20Dataset.pdf
https://umpir.ump.edu.my/id/eprint/47307/
https://doi.org/10.1109/AiDAS67696.2025.11213897
Description
Summary: The ongoing COVID-19 pandemic has triggered a global healthcare crisis, highlighting the urgent need for more efficient and accurate diagnostic tools. Despite the widespread use of RT-PCR as the clinical benchmark for diagnosis, its dependence on laboratory infrastructure, high operational costs, and the need for skilled personnel pose significant challenges, particularly in resource-constrained settings. This has driven increased interest in radiology-based diagnostic approaches. This research investigates the application of pre-trained Convolutional Neural Network (CNN) models for automatic COVID-19 detection using chest X-ray (CXR) images from the COVID-19 Radiography Dataset. By leveraging feature representations learned from large-scale datasets, pre-trained models offer a more computationally efficient alternative to training from scratch. A comparative evaluation was conducted on four pre-trained CNN models (EfficientNet, ShuffleNet, NASNet, and MobileNetV2) across three depth levels (0, 1, and 2). Experimental results show that NASNet at depth 1 achieved the highest overall performance, with a validation accuracy of 94.98%, an F1-score of 96.85%, precision of 96.23%, recall of 95.87%, and an AUC of 96.78%. ShuffleNet at depth 1 also demonstrated strong and consistent performance, achieving the highest AUC of 98.72% while maintaining a balanced trade-off between precision and recall, making it the most robust alternative. These findings indicate that moderately deep configurations, particularly NASNet and ShuffleNet at depth 1, offer an optimal balance between learning capacity and generalization. Overall, this research supports the use of pre-trained CNNs as an effective and scalable solution for medical image classification, with significant potential in broader diagnostic imaging and disease-detection tasks.