Sign language recognition using You Only Look Once-Neural Architecture Search / Muhammad Imran Nasharudin


Bibliographic Details
Main Author: Nasharudin, Muhammad Imran
Format: Thesis
Language:en
Published: 2023
Subjects:
Online Access:https://ir.uitm.edu.my/id/eprint/89043/1/89043.pdf
https://ir.uitm.edu.my/id/eprint/89043/
_version_ 1833079709517217792
author Nasharudin, Muhammad Imran
author_facet Nasharudin, Muhammad Imran
author_sort Nasharudin, Muhammad Imran
building Tun Abdul Razak Library
collection Institutional Repository
content_provider Universiti Teknologi Mara
content_source UiTM Institutional Repository
continent Asia
country Malaysia
description American Sign Language (ASL) is a nonverbal language that uses visual sign patterns formed with the hands or other parts of the body and is typically used by people with hearing disabilities. Deaf and mute people have difficulty communicating their thoughts, needs, and feelings through spoken language. There is a need for an alternative, computer-based method for those who are deaf or hard of hearing, since vocal communication is not available to them. One issue with computer-based sign language recognition is latency, which delays the interpretation of certain gestures. To balance latency against throughput, the architecture is discovered automatically using a Neural Architecture Search (NAS) technology called AutoNAC. The innovative features of YOLO-NAS include the quantization-aware modules QSP and QCI, which combine re-parameterization with 8-bit quantization to minimize accuracy loss during post-training quantization. The architecture is designed to identify small objects, increase localization accuracy, and improve the performance-per-compute ratio, making it suitable for real-time edge-device applications. Using YOLO-NAS for sign language recognition, we achieved an average detection rate of 86% across all sign language alphabets. YOLO-NAS networks are successfully applied to sign language recognition, with a reported mAP@50 of 96.41.
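The abstract reports accuracy as mAP@50, i.e. mean average precision at an intersection-over-union (IoU) threshold of 0.50. The sketch below is not the thesis' evaluation code; it is a simplified, self-contained illustration of how IoU and single-class AP@50 can be computed (using a plain step-wise precision-recall sum, whereas COCO-style evaluation interpolates the curve):

```python
def iou(a, b):
    """Intersection over union of two boxes in (x1, y1, x2, y2) form."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def average_precision_50(detections, ground_truths):
    """AP at IoU threshold 0.50 for one class.

    detections: list of (confidence, box); ground_truths: list of boxes.
    Each detection may match at most one unmatched ground-truth box.
    """
    detections = sorted(detections, key=lambda d: -d[0])  # high confidence first
    matched = set()
    tp = fp = 0
    precisions, recalls = [], []
    for conf, box in detections:
        # Greedily match against the best-overlapping unmatched ground truth.
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(ground_truths):
            if j in matched:
                continue
            v = iou(box, gt)
            if v > best_iou:
                best_iou, best_j = v, j
        if best_iou >= 0.5:
            matched.add(best_j)
            tp += 1
        else:
            fp += 1
        precisions.append(tp / (tp + fp))
        recalls.append(tp / len(ground_truths))
    # Step-wise area under the precision-recall curve.
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_r)
        prev_r = r
    return ap
```

Averaging this AP over all classes (here, the sign-language alphabet letters) gives the mAP@50 figure the abstract cites.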
format Thesis
id my.uitm.ir-89043
institution Universiti Teknologi Mara
language en
publishDate 2023
record_format eprints
spelling my.uitm.ir-890432024-03-19T07:08:33Z https://ir.uitm.edu.my/id/eprint/89043/ Sign language recognition using You Only Look Once-Neural Architecture Search / Muhammad Imran Nasharudin Nasharudin, Muhammad Imran Applied psychology American Sign Language (ASL) is a nonverbal language that uses visual sign patterns formed with the hands or other parts of the body and is typically used by people with hearing disabilities. Deaf and mute people have difficulty communicating their thoughts, needs, and feelings through spoken language. There is a need for an alternative, computer-based method for those who are deaf or hard of hearing, since vocal communication is not available to them. One issue with computer-based sign language recognition is latency, which delays the interpretation of certain gestures. To balance latency against throughput, the architecture is discovered automatically using a Neural Architecture Search (NAS) technology called AutoNAC. The innovative features of YOLO-NAS include the quantization-aware modules QSP and QCI, which combine re-parameterization with 8-bit quantization to minimize accuracy loss during post-training quantization. The architecture is designed to identify small objects, increase localization accuracy, and improve the performance-per-compute ratio, making it suitable for real-time edge-device applications. Using YOLO-NAS for sign language recognition, we achieved an average detection rate of 86% across all sign language alphabets. YOLO-NAS networks are successfully applied to sign language recognition, with a reported mAP@50 of 96.41. 2023 Thesis NonPeerReviewed text en https://ir.uitm.edu.my/id/eprint/89043/1/89043.pdf Sign language recognition using You Only Look Once-Neural Architecture Search / Muhammad Imran Nasharudin. (2023) Degree thesis, thesis, Universiti Teknologi MARA, Melaka.
<http://terminalib.uitm.edu.my/89043.pdf>
spellingShingle Applied psychology
Nasharudin, Muhammad Imran
Sign language recognition using You Only Look Once-Neural Architecture Search / Muhammad Imran Nasharudin
title Sign language recognition using You Only Look Once-Neural Architecture Search / Muhammad Imran Nasharudin
title_full Sign language recognition using You Only Look Once-Neural Architecture Search / Muhammad Imran Nasharudin
title_fullStr Sign language recognition using You Only Look Once-Neural Architecture Search / Muhammad Imran Nasharudin
title_full_unstemmed Sign language recognition using You Only Look Once-Neural Architecture Search / Muhammad Imran Nasharudin
title_short Sign language recognition using You Only Look Once-Neural Architecture Search / Muhammad Imran Nasharudin
title_sort sign language recognition using you only look once-neural architecture search / muhammad imran nasharudin
topic Applied psychology
url https://ir.uitm.edu.my/id/eprint/89043/1/89043.pdf
https://ir.uitm.edu.my/id/eprint/89043/
url_provider http://ir.uitm.edu.my/