Machine learning on electrophysiological encoding deficits in hearing speech perception / Abdul Rauf Abu Bakar
Main Author: | Abdul Rauf , Abu Bakar |
Format: | Thesis |
Published: | 2024 |
Subjects: | TK Electrical engineering. Electronics Nuclear engineering |
Online Access: | http://studentsrepo.um.edu.my/15360/2/Abdul_Rauf.pdf http://studentsrepo.um.edu.my/15360/1/Abdul_Rauf.pdf http://studentsrepo.um.edu.my/15360/ |
id |
my.um.stud.15360 |
record_format |
eprints |
institution |
Universiti Malaya |
building |
UM Library |
collection |
Institutional Repository |
continent |
Asia |
country |
Malaysia |
content_provider |
Universiti Malaya |
content_source |
UM Student Repository |
url_provider |
http://studentsrepo.um.edu.my/ |
topic |
TK Electrical engineering. Electronics Nuclear engineering |
description |
Hearing is essential for social communication and interaction. Sensorineural hearing loss (SNHL), an auditory neuronal abnormality, occurs when the structure or physiological function of the inner auditory system deviates from normal. Because of the reported decline in speech perception among hearing-impaired listeners, researchers have turned to the brain's evoked activity recorded by electroencephalography (EEG), namely the cortical auditory evoked potential (CAEP), to objectively investigate how perceived auditory inputs are identified and categorised as speech. The behaviour of CAEP responses in the time-frequency domain captures the interaction between cognitive processing and auditory perception, including impaired conditions. With the emergence of machine learning in healthcare, data-driven models hold potential for accurate prediction of current data and future outcomes while reducing the need for intervention by conventionally trained clinicians. This study aims to develop robust machine learning classifiers for auditory assessment using brain-evoked activity elicited by multiple perceived auditory stimuli. Encoding deficits in people with hearing loss were examined through the relationship between auditory stimulus, electrophysiological response, and spectral modality across multimodal auditory representations, and their discriminability was assessed before classifier training. Behavioural analyses from two data sources (electrode-based responses and whole-data-based responses) were supplied as features to develop five models: Support Vector Machine (SVM), K-Nearest Neighbours (KNN), Decision Tree (DT), Naïve Bayes (NB) and Linear Discriminant Analysis (LDA), each evaluated under two validation strategies, K-fold cross-validation (KFCV) and leave-one-participant-out cross-validation (LOPOCV). Held-out test data were then used to compute the performance metrics for classifying normal-hearing and hearing-impaired listeners across distinct auditory stimuli. Both data sources produced satisfactory to excellent classification performance, with accuracy ranging from 57% to 100% (mean 88.73%) and from 54% to 99.8% (mean 83.31%), respectively. The KNN and SVM classifiers proved the most robust, achieving accuracies above 97% and 93%, respectively, across all conditions. DT was the next best-performing model, with accuracy above 82%, while LDA performed worst. Applying LOPOCV, which tests each classifier on a participant withheld from its training data, reduced accuracy by 0.3% to 3.0% relative to KFCV. Classification accuracy for the voicing-contrast stimulus differed from that for other stimulus types by 0.5% to 14%. This underscores the capability of machine learning classification to differentiate difficult stimuli processed at elevated levels of cognition.
Collectively, these ongoing efforts support translating high-performing classification models into practical applications that exploit their distinct computational attributes. The proposed methodologies could help clinicians evaluate larger populations with varying degrees of hearing impairment, enabling prompt, automated diagnosis and treatment planning.
|
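As an illustration of the evaluation protocol the abstract describes, the sketch below sets up the five named classifiers and compares the two cross-validation strategies. It is a minimal sketch only: the thesis's actual feature extraction, hyperparameters, and toolchain are not given in this record, so scikit-learn, the placeholder feature matrix and labels, the number of folds, and the participant grouping are all assumptions.

```python
# Minimal sketch of the described evaluation protocol, NOT the thesis's own code.
# Assumptions: scikit-learn, a random placeholder feature matrix, K = 10 folds
# for KFCV, and 15 hypothetical participants used for LOPOCV grouping.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features, n_participants = 300, 40, 15        # hypothetical sizes
X = rng.normal(size=(n_trials, n_features))                # placeholder for CAEP-derived features
y = rng.integers(0, 2, size=n_trials)                      # 0 = normal hearing, 1 = SNHL (placeholder labels)
groups = rng.integers(0, n_participants, size=n_trials)    # participant ID for each trial

# The five classifier families named in the abstract.
models = {
    "SVM": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "DT":  DecisionTreeClassifier(random_state=0),
    "NB":  GaussianNB(),
    "LDA": LinearDiscriminantAnalysis(),
}

kfcv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)  # K-fold cross-validation
lopocv = LeaveOneGroupOut()                                        # leave-one-participant-out

for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)  # standardise features before fitting
    acc_kfcv = cross_val_score(pipe, X, y, cv=kfcv).mean()
    acc_lopo = cross_val_score(pipe, X, y, cv=lopocv, groups=groups).mean()
    print(f"{name}: KFCV accuracy = {acc_kfcv:.3f}, LOPOCV accuracy = {acc_lopo:.3f}")
```

With real CAEP features in place of the random matrix, the gap between each model's KFCV and LOPOCV scores would indicate how well it generalises to participants never seen during training, which is the comparison the abstract reports (a drop of roughly 0.3% to 3.0%).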
format |
Thesis |
author |
Abdul Rauf , Abu Bakar |
title |
Machine learning on electrophysiological encoding deficits in hearing speech perception / Abdul Rauf Abu Bakar |
publishDate |
2024 |
url |
http://studentsrepo.um.edu.my/15360/2/Abdul_Rauf.pdf http://studentsrepo.um.edu.my/15360/1/Abdul_Rauf.pdf http://studentsrepo.um.edu.my/15360/ |
spelling |
my.um.stud.15360 2024-09-11T23:40:03Z. Abdul Rauf , Abu Bakar (2024) Machine learning on electrophysiological encoding deficits in hearing speech perception / Abdul Rauf Abu Bakar. PhD thesis, Universiti Malaya. 2024-07, NonPeerReviewed, application/pdf. http://studentsrepo.um.edu.my/15360/ |