Development of audio-visual speech recognition using deep-learning technique
Format: Article
Language: English
Published: Penerbit UMP, 2022
Online Access:
http://umpir.ump.edu.my/id/eprint/37244/1/Development%20of%20audio%20visual%20speech%20recognition.pdf
http://umpir.ump.edu.my/id/eprint/37244/
https://doi.org/10.15282/mekatronika.v4i1.8625
Summary: Deep learning is an artificial intelligence (AI) technique that simulates human learning behavior. Audio-visual speech recognition is important for a listener to truly understand the emotions behind spoken words. In this thesis, two deep learning models, a Convolutional Neural Network (CNN) and a Deep Neural Network (DNN), were developed to recognize the emotion in speech from the dataset. The PyTorch framework with the torchaudio library was used. Both models were given the same training, validation, testing, and augmented datasets. Training was stopped when the loop reached ten epochs or when the validation loss did not improve for five epochs. On the training dataset, the CNN model achieved a highest accuracy of 76.50% and a lowest loss of 0.006029, while the DNN model achieved 75.42% and 0.086643 respectively. Both models were evaluated using a confusion matrix. In conclusion, the CNN model outperformed the DNN model, but it still needs improvement, as its accuracy on the testing dataset is low and its loss is high.
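The stopping rule described in the summary (halt at ten epochs, or earlier if validation loss fails to improve for five consecutive epochs) is a standard early-stopping scheme. A minimal sketch of that rule, framework-agnostic rather than the thesis's actual PyTorch code, might look as follows; `train_one_epoch` and `validate` are hypothetical placeholders for the real training and validation steps:

```python
def run_training(train_one_epoch, validate, max_epochs=10, patience=5):
    """Early-stopping loop: stop after `max_epochs`, or when validation
    loss has not improved for `patience` consecutive epochs."""
    best_val_loss = float("inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_one_epoch()           # one pass over the training set
        val_loss = validate()       # loss on the validation set
        if val_loss < best_val_loss:
            best_val_loss = val_loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            break                   # patience exhausted: stop early
    return best_val_loss
```

In a PyTorch setting, `train_one_epoch` would iterate the training DataLoader with backpropagation, and `validate` would compute the loss under `torch.no_grad()`; the control flow above is unchanged.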