Signal-based feature extraction for makhraj emission point classification

Bibliographic Details
Main Authors: Nurul Wahidah, Arshad; Mohd Zamri, Ibrahim; Rohana, Abdul Karim; Yasmin, Abdul Wahab; Nor Farizan, Zakaria; Tuan Sidek, Tuan Muda
Format: Conference or Workshop Item
Language: English
Published: Institution of Engineering and Technology 2022
Online Access:http://umpir.ump.edu.my/id/eprint/41947/1/Signal-based%20feature%20extraction%20for%20makhraj%20emission.pdf
http://umpir.ump.edu.my/id/eprint/41947/2/Signal-based%20feature%20extraction%20for%20makhraj%20emission%20point%20classification_ABS.pdf
http://umpir.ump.edu.my/id/eprint/41947/
https://doi.org/10.1049/icp.2022.2562
Description
Summary: Because many hijaiyah letters sound similar to one another, mistakes can occur when pronouncing them. A reciter will not read the Quran correctly without understanding the relationship between the sound of a hijaiyah letter and its point of articulation. This study addresses the recognition of the nine points of articulation (throat, uvular, molar, palatal, alveolar, dental, alveolar dental, lip, and interdental) from makhraj recitation using speech processing techniques. A total of 181 non-distributive audio samples were recorded in a controlled environment. The input speech consists of sukun combinations of the hijaiyah letters recited by an expert reciter. The research uses five types of signal-based feature extraction methods (MFCC, chroma, Mel spectrogram, spectral contrast, and Tonnetz) and three types of classification methods (ANN, kNN, and SVM). The results show that the proposed method achieves fair accuracy, with the highest accuracy of 56% obtained using the ANN.