Class-based analysis of Russell’s four-quadrant emotion prediction in virtual reality using multi-layer feedforward ANNs


Bibliographic Details
Main Authors: Nazmi Sofian Suhaimi, James Mountstephens, Jason Teo
Format: Proceedings
Language:en
Published: Association for Computing Machinery 2021
Subjects:
Online Access:https://eprints.ums.edu.my/id/eprint/44934/1/FULLTEXT.pdf
https://eprints.ums.edu.my/id/eprint/44934/
https://dl.acm.org/doi/10.1145/3457784.3457809
Description
Summary:The following research describes the potential of classifying four emotion classes using a wearable EEG headset, with virtual reality (VR) used to induce emotional responses in the users. Various researchers have conducted emotion recognition using medical-grade EEG devices paired with a 2D monitor screen to induce emotional responses. This approach can introduce additional artifacts because the user's attention is not confined within the borders of the monitor screen displaying the intended stimulation, thus reducing classification accuracies. Moreover, the large and complex EEG machines used by medical professionals are sensitive equipment that must be operated by trained personnel, making it difficult to obtain permission to access such devices. Hence, a wearable EEG headset, which is small and portable, was chosen for brainwave signal sampling; this favors researchers conducting experiments for a human emotion recognition system. The wearable EEG headset collects brainwave signals at the TP9, TP10, AF7, and AF8 electrode placements, sampled at 256 Hz across the five frequency bands (Delta, Theta, Alpha, Beta, Gamma). Additionally, the wearable EEG headset is combined with a VR headset to induce emotional responses using a prepared VR video stimulus. The VR videos were organized according to the Arousal-Valence Space (AVS) model, with four videos, one per quadrant, each presented for 80 seconds with a 10-second rest interval during transitions, totaling 360 seconds from beginning to end. The collected samples were classified using a Feedforward Artificial Neural Network (FANN) with 10-fold cross-validation; the model was trained on 90% of the total dataset, with the remaining 10% used for validation. The highest average classification result obtained from the FANN was 41.04%. While the overall classification performance was low, the confusion matrix presented a different view of the four classes across different trained epoch values.
Observations at trained epoch values of 2000, 3000, and 5000 showed that the emotion classes happy, scared, bored, and calm achieved classification accuracies of 75.15%, 75.12%, 75.02%, and 74.24%, respectively.
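The pipeline described in the abstract (band-power features from four electrodes and five frequency bands, classified into the four AVS quadrants by a feedforward ANN under 10-fold cross-validation) can be sketched as follows. This is a minimal illustration, not the authors' code: the dataset is synthetic, and the network architecture and epoch settings are assumptions, since the abstract does not specify layer sizes.

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per EEG sample, 20 features
# (TP9, TP10, AF7, AF8 electrodes x Delta, Theta, Alpha, Beta, Gamma
# band powers). Real features would come from the 256 Hz recordings.
X = rng.normal(size=(400, 20))
# Four AVS-quadrant labels: 0=happy, 1=scared, 2=bored, 3=calm.
y = rng.integers(0, 4, size=400)

# Feedforward ANN (multi-layer perceptron); the hidden-layer size is
# an assumption for illustration only.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)

# 10-fold cross-validation, as described in the abstract.
scores = cross_val_score(
    clf, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0)
)
print(f"10-fold mean accuracy: {scores.mean():.3f}")
```

On these random features the mean accuracy hovers near the 25% chance level for four classes, which is a useful baseline when judging reported figures such as the 41.04% average in the abstract.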