Deep learning-based classification of multichannel bio-signals for emotion recognition
| Main Author: | |
|---|---|
| Format: | Final Year Project / Dissertation / Thesis |
| Published: | 2025 |
| Subjects: | |
| Online Access: | http://eprints.utar.edu.my/7213/ |
Summary:

Emotion recognition is a critical component in advancing applications such as human-computer interaction and mental health diagnostics. While traditional methods often rely on external cues, physiological bio-signals offer a more objective measure of an individual's internal emotional state. This project presents the design, implementation, and comprehensive evaluation of a deep learning-based framework for multimodal emotion recognition, leveraging electroencephalography (EEG), galvanic skin response (GSR), electromyography (EMG), and speech audio.
The research utilized the DEAP and RAVDESS datasets to conduct a comparative analysis of different modeling approaches. Hybrid deep learning architectures, including Convolutional Neural Networks combined with Long Short-Term Memory (CNN+LSTM) and Self-Attention mechanisms, were implemented to capture spatio-temporal patterns from EEG. These were systematically compared against a benchmark model using traditional, handcrafted features (EEG Band Power, GSR/EMG statistics). To integrate information from disparate sources, both early fusion (for homogeneous physiological signals) and a novel late fusion prototype (for heterogeneous, cross-dataset signals) were developed and evaluated.
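The handcrafted EEG band-power features used by the benchmark model can be illustrated with a short sketch. The 128 Hz sampling rate matches DEAP's preprocessed EEG, but the window length, band edges, and the simple FFT-based power estimate are illustrative assumptions, not the thesis's exact feature pipeline:

```python
import numpy as np

def band_power(signal, fs, band):
    """Average power of `signal` within frequency `band` (Hz), via the FFT.

    A simple periodogram estimate; a Welch-style average over windows
    would be a more robust choice in practice.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

fs = 128  # DEAP EEG is downsampled to 128 Hz in the preprocessed release
np.random.seed(0)
t = np.arange(0, 4, 1 / fs)  # one 4-second analysis window (assumed length)
# Synthetic EEG-like signal: a 10 Hz alpha component plus background noise.
sig = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(len(t))

alpha = band_power(sig, fs, (8, 13))  # alpha band captures the 10 Hz tone
delta = band_power(sig, fs, (1, 4))   # delta band sees only noise
```

Per-channel band powers like these (typically for delta, theta, alpha, beta, and gamma) are concatenated into a fixed-length feature vector for a conventional classifier.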
The experimental results revealed several key findings. In rigorous cross-subject validation, the traditional feature-based benchmark model demonstrated superior generalization compared to the end-to-end deep learning models, which struggled with overfitting. Concurrently, a standalone CNN model proved highly effective for classifying arousal from speech. The final late fusion prototype successfully integrated the independently trained physiological and audio "expert" models, effectively arbitrating conflicting evidence and showcasing a viable strategy for building robust, cross-dataset multimodal systems. This project contributes a comprehensive analysis of the challenges of subject-independent classification and delivers a functional proof-of-concept for heterogeneous multimodal fusion.
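Late fusion of decision-level outputs can be sketched as a weighted average of the two experts' class probabilities. This is one common realization; the abstract does not specify the thesis's exact arbitration rule, so the class labels, weights, and example probabilities below are all hypothetical:

```python
import numpy as np

# Hypothetical shared class order for both expert models.
CLASSES = ["low_arousal", "high_arousal"]

def late_fuse(p_physio, p_audio, w_physio=0.5):
    """Combine two experts' class-probability vectors by weighted average.

    `w_physio` sets how much the physiological expert is trusted relative
    to the audio expert; the result is renormalized to sum to 1.
    """
    p = w_physio * np.asarray(p_physio) + (1 - w_physio) * np.asarray(p_audio)
    return p / p.sum()

# Conflicting evidence: the physiological expert favors low arousal,
# the audio expert favors high arousal; trusting audio slightly more
# (w_physio=0.4) lets it win the arbitration.
p = late_fuse([0.7, 0.3], [0.2, 0.8], w_physio=0.4)
# fused = 0.4*[0.7, 0.3] + 0.6*[0.2, 0.8] = [0.40, 0.60]
```

Because each expert is trained independently on its own dataset (here, DEAP-style physiology and RAVDESS-style speech), only the shared label space has to agree, which is what makes this strategy viable across heterogeneous datasets.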
