A lightweight neural attention-based model for service chatbots

Bibliographic Details
Main Authors: Sinarwati Mohamad Suhaili; Mohamad Nazim Jambli
Format: Article
Language: English
Published: Springer Nature Limited 2025
Online Access:
http://ir.unimas.my/id/eprint/49728/1/A%20lightweight%20neural%20attention-based%20model%20for%20service%20chatbots.pdf
http://ir.unimas.my/id/eprint/49728/
https://www.nature.com/articles/s41598-025-14215-5
https://doi.org/10.1038/s41598-025-14215-5
Description
Summary: The growing demand for efficient service chatbots has led to the development of various deep learning techniques, such as generative neural attention-based mechanisms. However, existing attention processes often face challenges in generating contextually relevant responses. This study introduces a lightweight neural attention mechanism designed to enhance the scalability of service chatbots by integrating a scalar function into the existing attention score computation. While inspired by scaling practices in transformer models, the proposed scalar is tailored to seq2seq architectures to optimize sequence alignment, resulting in improved contextual relevance and reduced resource requirements. To validate its effectiveness, the proposed model was evaluated on a real-world Customer Support Twitter dataset. Experimental results demonstrate a +0.82 BLEU-4 improvement and a 28% reduction in training time per epoch over the baseline. Moreover, the model reaches the target validation loss two epochs earlier, indicating faster convergence and improved training stability. Further experiments investigated activation functions and weight initializers integrated into the proposed model to identify configurations that maximize its performance. Comparative results show that the proposed modifications significantly enhance response accuracy and contextual relevance. This lightweight mechanism addresses key limitations of existing attention mechanisms. Future work may extend this approach by combining it with transformer-based architectures to support broader sequence prediction tasks, including machine translation, recommender systems, and image captioning.
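
For orientation, the following is a minimal sketch (Python/NumPy) of the general idea the summary describes: rescaling seq2seq attention scores by a scalar before the softmax, in the spirit of transformer scaling. The record does not state the paper's actual scalar function, so the 1/sqrt(d) factor below is an assumption borrowed from transformer practice, and the function and variable names (scaled_seq2seq_attention, decoder_state, encoder_states) are hypothetical.

import numpy as np

def scaled_seq2seq_attention(decoder_state, encoder_states):
    # decoder_state: (d,) current decoder hidden state
    # encoder_states: (T, d) encoder hidden states for T source tokens
    d = decoder_state.shape[-1]
    # Dot-product alignment scores between the decoder state and each encoder state.
    scores = encoder_states @ decoder_state            # shape (T,)
    # Hypothetical scalar rescaling: 1/sqrt(d) is the transformer-style factor
    # the summary says inspired the work; the paper's scalar, tailored to
    # seq2seq architectures, may differ.
    scores = scores / np.sqrt(d)
    # Numerically stable softmax yields the alignment (attention) weights.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Context vector: attention-weighted sum of encoder states, shape (d,).
    context = weights @ encoder_states
    return context, weights

# Example usage with random states (d = 64, T = 10 source tokens).
rng = np.random.default_rng(0)
context, weights = scaled_seq2seq_attention(rng.normal(size=64), rng.normal(size=(10, 64)))

Because the scalar only rescales the scores before the softmax, it adds no trainable parameters, which is consistent with the summary's emphasis on reduced resource requirements.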