Features extraction for speech emotion
Saved in:

Main Authors: | , |
---|---|
Format: | Article |
Language: | English |
Published: | 2009 |
Subjects: | |
Online Access: | http://irep.iium.edu.my/9565/1/Features_Extraction.pdf http://irep.iium.edu.my/9565/ |
Summary: | In this paper, speech emotion verification using two of the most popular methods in speech processing and analysis, the Mel-Frequency Cepstral Coefficient (MFCC) and the Gaussian Mixture Model (GMM), is proposed and analyzed. In both cases, features for speech emotion were extracted using the Short Time Fourier Transform (STFT) for MFCC and the Short Time Histogram (STH) for GMM. The performance of speech emotion verification is measured with three neural network (NN) and fuzzy neural network (FNN) architectures, namely the Multi Layer Perceptron (MLP), the Adaptive Neuro Fuzzy Inference System (ANFIS), and the Generic Self-organizing Fuzzy Neural Network (GenSoFNN). Results obtained from experiments using real audio clips from movies and television sitcoms show the potential of the proposed feature extraction methods for real-time applications due to their reasonable accuracy and fast training time. This may lead to practical use if the emotion verifier can be embedded in real-time applications, especially on personal digital assistants (PDAs) or smart-phones. |
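As a rough illustration of the MFCC front end described in the summary, the sketch below computes frame-level MFCCs over an STFT and pools them into a fixed-length vector that could feed an MLP-style classifier. It assumes the librosa library, a hypothetical audio file name, and illustrative window/hop sizes; it is not the authors' implementation.

```python
# Minimal sketch of STFT-based MFCC feature extraction for speech emotion.
# Assumes librosa is installed; file name and frame parameters are illustrative.
import numpy as np
import librosa

def extract_mfcc_features(path, sr=16000, n_mfcc=13):
    """Load an audio clip and return a fixed-length MFCC summary vector."""
    y, sr = librosa.load(path, sr=sr)  # mono waveform resampled to `sr`
    # Frame-level MFCCs computed over the STFT (25 ms window, 10 ms hop assumed).
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=400, hop_length=160)
    # Pool the frame sequence (mean and std per coefficient) so a fixed-length
    # vector can be passed to an MLP/ANFIS-style verifier.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Example usage (hypothetical clip from a movie or sitcom):
# features = extract_mfcc_features("angry_clip.wav")
```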