A Deep Learning Method Using Gender-Specific Features for Emotion Recognition
Speech reflects people's mental state, and using a microphone sensor is a promising approach to human–computer interaction. Speech recognition with this sensor can aid the diagnosis of mental illnesses. Speaker gender affects speech emotion recognition based on specific acoustic features, reducing recognition accuracy. We therefore argue that the accuracy of speech emotion recognition can be improved by selecting different speech features for emotion recognition according to the speech representations of each gender. In this paper, we propose a speech emotion recognition method based on gender classification. First, we use an MLP to classify the original speech by gender. Second, based on the differing acoustic features of male and female speech, we analyze the influence weights of multiple speech emotion features in male and female speech and establish optimal feature sets for male and female emotion recognition, respectively. Finally, we train and test a CNN and a BiLSTM on the male and female speech emotion feature sets, respectively. The results show that the proposed emotion recognition models achieve higher average recognition accuracy than gender-mixed recognition models.
Main Authors: Li-Min Zhang, Yang Li, Yue-Ting Zhang, Giap Weng Ng, Yu-Beng Leau, Hao Yan
Format: Article
Language: English
Published: Molecular Diversity Preservation International (MDPI), 2023
Subjects: PN4775-4784 Technique. Practical journalism; QA75.5-76.95 Electronic computers. Computer science
Online Access: https://eprints.ums.edu.my/id/eprint/36091/1/ABSTRACT.pdf https://eprints.ums.edu.my/id/eprint/36091/2/FULL%20TEXT.pdf https://eprints.ums.edu.my/id/eprint/36091/ https://doi.org/10.3390/s23031355
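The abstract describes a three-stage pipeline: an MLP first routes each utterance by speaker gender, gender-specific feature sets are then selected, and separate CNN/BiLSTM models classify emotion per gender. The Python sketch below illustrates that routing idea under stated assumptions; the feature dimensions, feature-index sets, emotion class count, and layer sizes are placeholders, not values reported in the paper.

```python
# A minimal sketch of the gender-routed pipeline, assuming frame-level
# acoustic feature matrices as input. All dimensions and hyperparameters
# are illustrative assumptions, not the paper's reported configuration.
import numpy as np
from sklearn.neural_network import MLPClassifier
from tensorflow.keras import layers, models

N_FRAMES, N_FEATS = 100, 40   # assumed: 100 frames x 40 acoustic features
N_EMOTIONS = 7                # assumed number of emotion classes

def build_emotion_model(n_feats: int) -> models.Model:
    """CNN + BiLSTM emotion classifier (layer sizes are assumptions)."""
    m = models.Sequential([
        layers.Input(shape=(N_FRAMES, n_feats)),
        layers.Conv1D(64, 5, padding="same", activation="relu"),
        layers.MaxPooling1D(2),
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dense(N_EMOTIONS, activation="softmax"),
    ])
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return m

# Step 1: an MLP classifies each utterance by speaker gender.
gender_clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300)
rng = np.random.default_rng(0)
gender_clf.fit(rng.normal(size=(20, N_FEATS)),        # tiny synthetic data
               np.array(["male", "female"] * 10))     # so the sketch runs

# Step 2: gender-specific feature subsets standing in for the feature sets
# chosen by the influence-weight analysis (indices are placeholders).
FEATS = {"male": np.arange(0, 32), "female": np.arange(8, 40)}

# Step 3: a separate emotion model per gender-specific feature set.
EMO_MODELS = {g: build_emotion_model(len(idx)) for g, idx in FEATS.items()}

def predict_emotion(utterance: np.ndarray) -> np.ndarray:
    """Route one (frames x feats) utterance through its gender's model."""
    gender = gender_clf.predict(utterance.mean(axis=0, keepdims=True))[0]
    x = utterance[:, FEATS[gender]][np.newaxis]       # add batch axis
    return EMO_MODELS[gender].predict(x, verbose=0)   # emotion posteriors

probs = predict_emotion(rng.normal(size=(N_FRAMES, N_FEATS)))
print(probs.shape)  # (1, N_EMOTIONS)
```

The per-gender emotion models here are untrained stand-ins; in the method as described, each would be trained and tested only on utterances of its own gender, using that gender's optimal feature set.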
id: my.ums.eprints.36091
record_format: eprints
last_modified: 2023-07-21T06:51:54Z
status: Article, NonPeerReviewed
citation: Li-Min Zhang, Yang Li, Yue-Ting Zhang, Giap Weng Ng, Yu-Beng Leau and Hao Yan (2023) A Deep Learning Method Using Gender-Specific Features for Emotion Recognition. Sensors, 23. pp. 1-15. https://doi.org/10.3390/s23031355
institution: Universiti Malaysia Sabah
building: UMS Library
collection: Institutional Repository
continent: Asia
country: Malaysia
content_provider: Universiti Malaysia Sabah
content_source: UMS Institutional Repository
url_provider: http://eprints.ums.edu.my/
language: English
topic: PN4775-4784 Technique. Practical journalism; QA75.5-76.95 Electronic computers. Computer science
format: Article
author: Li-Min Zhang; Yang Li; Yue-Ting Zhang; Giap Weng Ng; Yu-Beng Leau; Hao Yan
title: A Deep Learning Method Using Gender-Specific Features for Emotion Recognition
publisher: Molecular Diversity Preservation International (MDPI)
publishDate: 2023
url: https://eprints.ums.edu.my/id/eprint/36091/1/ABSTRACT.pdf https://eprints.ums.edu.my/id/eprint/36091/2/FULL%20TEXT.pdf https://eprints.ums.edu.my/id/eprint/36091/ https://doi.org/10.3390/s23031355