Optimising LSTM and BILSTM models for time series forecasting through hyperparameter tuning
Long Short-Term Memory (LSTM) and Bidirectional Long Short-Term Memory (BiLSTM) are emerging Recurrent Neural Network (RNN) architectures widely used in time series forecasting. The performance of these neural networks relies on the selection of hyperparameters, and a random selection of hyperparameters may increase the forecasting error.
| Main Authors: | Nur Haizum Abd Rahman; Quay, Pin Yin; Hani Syahida Zulkafli |
|---|---|
| Format: | Article |
| Language: | en |
| Published: | Penerbit Universiti Kebangsaan Malaysia, 2025 |
| Online Access: | http://journalarticle.ukm.my/26426/1/Paper_12%20-.pdf http://journalarticle.ukm.my/26426/ https://www.ukm.my/jqma/ |
| _version_ | 1855615330406105088 |
|---|---|
| author | Nur Haizum Abd Rahman; Quay, Pin Yin; Hani Syahida Zulkafli |
| author_facet | Nur Haizum Abd Rahman; Quay, Pin Yin; Hani Syahida Zulkafli |
| author_sort | Nur Haizum Abd Rahman |
| building | Tun Sri Lanang Library |
| collection | Institutional Repository |
| content_provider | Universiti Kebangsaan Malaysia |
| content_source | UKM Journal Article Repository |
| continent | Asia |
| country | Malaysia |
| description | Long Short-Term Memory (LSTM) and Bidirectional Long Short-Term Memory (BiLSTM) are emerging Recurrent Neural Network (RNN) architectures widely used in time series forecasting. The performance of these neural networks relies on the selection of hyperparameters, and a random selection of hyperparameters may increase the forecasting error. Hence, this study aims to optimise the performance of LSTM and BiLSTM in time series forecasting by tuning one of the essential hyperparameters, the number of hidden neurons. LSTM and BiLSTM models with 32, 64, and 128 hidden neurons, in various combinations with other hyperparameters, are formed in this study through grid searching. The models are evaluated and compared based on the Mean Squared Error (MSE) and Mean Absolute Error (MAE). Results from real data analysis revealed that 128 hidden neurons is the optimum choice, yielding the lowest error values. This study also investigates whether BiLSTM performs better than LSTM in forecasting: the performance of the two neural networks was compared, and the Wilcoxon signed-rank test was conducted. Results revealed a significant difference in the performance of the two networks, with BiLSTM outperforming LSTM in forecasting time series data. Hence, BiLSTM with 128 hidden neurons is recommended over LSTM for time series forecasting. As these findings have implications for future practice, the combination of model and hyperparameters should be chosen carefully to obtain more accurate predictions in time series forecasting. |
| format | Article |
| id | my-ukm.journal.26426 |
| institution | Universiti Kebangsaan Malaysia |
| language | en |
| publishDate | 2025 |
| publisher | Penerbit Universiti Kebangsaan Malaysia |
| record_format | eprints |
| spelling | Nur Haizum Abd Rahman; Quay, Pin Yin; Hani Syahida Zulkafli (2025) Optimising LSTM and BILSTM models for time series forecasting through hyperparameter tuning. Journal of Quality Measurement and Analysis, 21 (3). pp. 191-205. ISSN 2600-8602. Penerbit Universiti Kebangsaan Malaysia, 2025-09. Article, PeerReviewed, application/pdf, en. http://journalarticle.ukm.my/26426/1/Paper_12%20-.pdf https://www.ukm.my/jqma/ |
| title | Optimising LSTM and BILSTM models for time series forecasting through hyperparameter tuning |
| url | http://journalarticle.ukm.my/26426/1/Paper_12%20-.pdf http://journalarticle.ukm.my/26426/ https://www.ukm.my/jqma/ |
| url_provider | http://journalarticle.ukm.my/ |
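The grid search over hidden neurons (32, 64, 128) and other hyperparameters described in the abstract can be sketched as below. This is a minimal illustration only: `evaluate` is a hypothetical stand-in for training an LSTM/BiLSTM and returning a validation MSE, and the hyperparameter names (`units`, `lr`, `batch`) are assumptions, not the paper's exact grid.

```python
from itertools import product

def evaluate(units, lr, batch):
    # Hypothetical stand-in for "train model, return validation MSE".
    # Crafted so that units=128, lr=1e-3, batch=32 scores best, echoing
    # the paper's finding that 128 hidden neurons gave the lowest error.
    return (units - 128) ** 2 * 1e-6 + abs(lr - 1e-3) + abs(batch - 32) * 1e-3

# Candidate values for each hyperparameter in the grid.
grid = {"units": [32, 64, 128], "lr": [1e-2, 1e-3], "batch": [16, 32]}

# Exhaustively score every combination and keep the lowest-error one.
best = min(
    (dict(zip(grid, combo)) for combo in product(*grid.values())),
    key=lambda cfg: evaluate(**cfg),
)
print(best)  # the configuration with the smallest validation error
```

In the actual study each `evaluate` call would fit an LSTM or BiLSTM on the training series and report MSE/MAE on held-out data; the exhaustive loop over `itertools.product` is the generic grid-search pattern.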

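The model-comparison step — paired error metrics tested with the Wilcoxon signed-rank test — might look like the following sketch. The forecast arrays here are synthetic stand-ins for the two models' predictions, not the study's data; only the metric definitions and the use of `scipy.stats.wilcoxon` reflect the stated methodology.

```python
import numpy as np
from scipy.stats import wilcoxon

def mse(y_true, y_pred):
    # Mean Squared Error
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def mae(y_true, y_pred):
    # Mean Absolute Error
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

rng = np.random.default_rng(42)
y_true = rng.normal(size=100)
# Synthetic forecasts: the "BiLSTM" series carries smaller noise,
# mimicking a better-fitting model.
pred_lstm = y_true + rng.normal(scale=0.3, size=100)
pred_bilstm = y_true + rng.normal(scale=0.15, size=100)

# Wilcoxon signed-rank test on the paired absolute errors of the two models.
stat, p = wilcoxon(np.abs(y_true - pred_lstm), np.abs(y_true - pred_bilstm))
print(mse(y_true, pred_lstm), mse(y_true, pred_bilstm), p)
```

A small p-value here would indicate a significant difference between the paired errors, which is the basis on which the paper concludes BiLSTM outperforms LSTM.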