Interpretable arrhythmia classification using a convolutional neural network and the LIME technique

Bibliographic Details
Main Authors: Mohd Khairuddin, Adam, Mohd Aris, Siti Armiza, Azizan, Azizul, Zakaria, Noor Jannah
Format: Article
Language: English
Published: Universiti Teknologi MARA, Perak 2025
Subjects:
Online Access: https://ir.uitm.edu.my/id/eprint/128980/1/128980.pdf
https://doi.org/10.24191/mij.v6i2.9317
https://ir.uitm.edu.my/id/eprint/128980/
https://mijuitm.com.my/
Description
Summary: Deep learning models have demonstrated strong performance in electrocardiogram (ECG) arrhythmia classification, but their lack of interpretability limits clinical trust and adoption. By adopting an explainable artificial intelligence (XAI) technique, this study aims to enhance the interpretability of a convolutional neural network (CNN) model. Specifically, the Local Interpretable Model-Agnostic Explanations (LIME) technique is used to interpret a CNN model that classifies 17 classes of ECG arrhythmias. The CNN model was developed using a five-stage framework, and its performance was evaluated on the MIT-BIH Arrhythmia database. Results indicate that the model achieved a precision of 97.00%, a recall of 97.00%, an F1-score of 97.00%, and an overall accuracy of 99.00%. In addition, the LIME technique provides local explanations that aid understanding of the CNN model's decision-making process in classifying the 17 classes of ECG arrhythmias.
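As a rough illustration of the LIME idea the abstract describes (perturb an input, query the black-box model, and fit a proximity-weighted local linear surrogate), the following self-contained sketch applies it to a toy 1D signal. This is not the authors' code: the stand-in model, signal length, kernel width, and sample counts are all illustrative assumptions.

```python
import numpy as np

# Toy stand-in for a trained classifier, NOT the paper's CNN; all names,
# shapes, and constants here are assumptions made for demonstration.
rng = np.random.default_rng(0)

true_w = np.zeros(32)
true_w[10:14] = 1.0  # the toy model keys only on sample points 10-13

def model_score(x):
    """Black-box, probability-like score for one class."""
    return 1.0 / (1.0 + np.exp(-(x @ true_w)))

def lime_explain(x, n_samples=2000, kernel_width=3.0):
    """Local surrogate in the spirit of LIME: perturb the input by
    zeroing random sample points, query the black box, and fit a
    proximity-weighted linear model whose coefficients rank the
    importance of each sample point near this particular input."""
    masks = rng.integers(0, 2, size=(n_samples, x.size))  # 1 = keep point
    scores = np.array([model_score(m * x) for m in masks])
    # Proximity kernel: perturbations closer to the original weigh more.
    dist = np.sqrt(((masks - 1.0) ** 2).sum(axis=1))
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    # Weighted least squares on the binary (interpretable) representation.
    A = np.hstack([masks, np.ones((n_samples, 1))])  # intercept column
    Aw = A * weights[:, None]
    coef, *_ = np.linalg.lstsq(A.T @ Aw, A.T @ (weights * scores), rcond=None)
    return coef[:-1]  # drop the intercept; per-point importance remains

# One synthetic "beat" whose informative region matches the toy model.
x = rng.normal(size=32)
x[10:14] = 2.0
importance = lime_explain(x)
top = np.argsort(-importance)[:4]
print(sorted(int(i) for i in top))  # should highlight the 10-13 region
```

The surrogate's largest coefficients fall on the sample points the black box actually uses, which is the kind of local, per-prediction explanation LIME offers for an ECG beat.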