Deep learning-based classification of multichannel bio-signals for emotion recognition

Emotion recognition is a critical component in advancing applications such as human-computer interaction and mental health diagnostics. While traditional methods often rely on external cues, physiological bio-signals offer a more objective measure of an individual's internal emotional state. This project presents the design, implementation, and comprehensive evaluation of a deep learning-based framework for multimodal emotion recognition, leveraging electroencephalography (EEG), galvanic skin response (GSR), electromyography (EMG), and speech audio. The research utilized the DEAP and RAVDESS datasets to conduct a comparative analysis of different modeling approaches. Hybrid deep learning architectures, including Convolutional Neural Networks combined with Long Short-Term Memory (CNN+LSTM) and Self-Attention mechanisms, were implemented to capture spatio-temporal patterns from EEG. These were systematically compared against a benchmark model using traditional, handcrafted features (EEG band power, GSR/EMG statistics). To integrate information from disparate sources, both early fusion (for homogeneous physiological signals) and a novel late fusion prototype (for heterogeneous, cross-dataset signals) were developed and evaluated. The experiments yielded several key findings. In rigorous cross-subject validation, the traditional feature-based benchmark model demonstrated superior generalization compared to the end-to-end deep learning models, which struggled with overfitting. Concurrently, a standalone CNN model proved highly effective for classifying arousal from speech. The final late fusion prototype successfully integrated the independently trained physiological and audio "expert" models, effectively arbitrating conflicting evidence and showcasing a viable strategy for building robust, cross-dataset multimodal systems. This project contributes a comprehensive analysis of the challenges of subject-independent classification and delivers a functional proof-of-concept for heterogeneous multimodal fusion.
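The late-fusion strategy described in the abstract can be sketched as follows. This is a minimal illustration only: the label set, fusion weights, and probability values below are hypothetical placeholders, not figures taken from the project.

```python
# Minimal late-fusion sketch: two independently trained "expert" models
# (one physiological, one audio) each emit class probabilities over the
# same label space; a weighted average arbitrates between them.

LABELS = ["low_arousal", "high_arousal"]  # hypothetical label space

def late_fusion(p_physio, p_audio, w_physio=0.5, w_audio=0.5):
    """Combine two probability vectors over LABELS by weighted averaging."""
    total = w_physio + w_audio
    return [(w_physio * p + w_audio * q) / total
            for p, q in zip(p_physio, p_audio)]

def predict(p_fused):
    """Return the label with the highest fused probability."""
    return LABELS[max(range(len(p_fused)), key=p_fused.__getitem__)]

# Conflicting evidence: physiology leans low arousal, speech leans high.
fused = late_fusion([0.6, 0.4], [0.1, 0.9], w_physio=0.4, w_audio=0.6)
print(fused, predict(fused))  # → [0.3, 0.7] high_arousal
```

With these (assumed) weights, the audio expert's stronger evidence dominates the physiological expert's weaker lean, which is the arbitration behavior the abstract attributes to the late-fusion prototype.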

Bibliographic Details
Main Author: Ng, Wei Hong
Format: Final Year Project / Dissertation / Thesis
Published: June 2025
Institution: Universiti Tunku Abdul Rahman
Subjects: T Technology (General)
Online Access:http://eprints.utar.edu.my/7213/