An explainable deep learning approach for multi-class news classification


Bibliographic Details
Main Authors: Tomal, Md Raihanul Islam; Kohbalan, Moorthy; Mazlina, Abdul Majid; Muhammad Akmal, Remli; Pratondo, Agus
Format: Conference or Workshop Item
Language: English
Published: IEEE 2026
Online Access:https://umpir.ump.edu.my/id/eprint/46036/7/An%20explainable%20deep%20learning%20approach%20for%20multiclass%20news%20classification.pdf
https://umpir.ump.edu.my/id/eprint/46036/
https://doi.org/10.1109/ICSECS65227.2025.11278958
Description
Summary: In today's information-driven world, the rapid growth of digital news content demands efficient and interpretable classification systems. This research presents a hybrid Convolutional Neural Network and Bidirectional Long Short-Term Memory (CNN-BiLSTM) model, enhanced with SHapley Additive exPlanations (SHAP) for interpretability. Traditional deep learning models often focus solely on accuracy and lack transparency in their decision-making. To address this, the study combines high classification performance with interpretability, ensuring that the most influential input features can be identified and explained. The model was trained and evaluated on the BBC News dataset, which comprises five categories: business, entertainment, politics, sport, and tech. The proposed hybrid model achieved an overall accuracy of 98%, with class-wise performance as follows: business (96%), entertainment (95%), politics (98%), sport (99%), and tech (99%). Performance was further supported through confusion matrix and training curve analyses. SHAP analysis provided visual insights into feature contributions, highlighting the most impactful tokens for each class prediction. This integration of explainability and accuracy establishes a trustworthy framework for real-world news classification, making the system more transparent, reliable, and suitable for practical applications.
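The SHAP attributions described in the abstract rest on the Shapley value: each token's contribution to a class score is its average marginal effect over all coalitions of the other tokens. The sketch below, which is not the authors' code, illustrates that idea with an exact brute-force computation on a hypothetical toy additive scoring function (the `weights` values are invented for illustration); real SHAP tooling approximates this for large models.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values by enumerating every coalition.

    features: list of feature (token) names
    value_fn: maps a frozenset of present features to a model score
    """
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # Shapley weight for a coalition of size k out of n players
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of f when added to this coalition
                total += weight * (value_fn(s | {f}) - value_fn(s))
        phi[f] = total
    return phi

# Hypothetical additive "model": a class score is a weighted sum over
# which tokens appear, mimicking how SHAP credits tokens for a prediction.
weights = {"market": 0.6, "shares": 0.3, "film": -0.4}
score = lambda present: sum(weights[t] for t in present)

phi = shapley_values(list(weights), score)
# For a purely additive model, each token's Shapley value equals its weight.
```

Exact enumeration costs O(2^n) evaluations, which is why practical SHAP explainers (e.g. KernelSHAP or DeepSHAP) use sampling or model-specific approximations for text inputs with many tokens.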