Detection of Yawning and Eye Closure for Monitoring Driver’s Drowsiness

Bibliographic Details
Main Author: Qan, Khai Mun
Format: Final Year Project Report / IMRAD
Language: English
Published: Universiti Malaysia Sarawak (UNIMAS) 2020
Online Access:http://ir.unimas.my/id/eprint/34575/1/Qan%20Khai%20Mun%20-%2024%20pgs.pdf
http://ir.unimas.my/id/eprint/34575/6/Qan%20Khai%20Mun.pdf
http://ir.unimas.my/id/eprint/34575/
Description
Summary: Over the years, traffic accidents related to drowsiness have been steadily rising. There are experimental methods that rely on brain activity to determine whether the driver is drowsy, using equipment such as ECG or EEG, which requires extensive wiring and specialised hardware. Moreover, driver-assisting visual displays are becoming the norm in most modern vehicles, where assistive information is provided to aid the driver in decision making. However, such designs impose a distraction on the driver, as too much information may cause confusion when making decisions, especially during emergencies. This project proposes a framework that uses only a mobile device and cloud services to detect drowsiness in drivers and alert them through audio feedback. By utilising cloud services, the computational expense of executing the image-processing algorithms on the mobile device can be minimised. At the end of this project, a mobile application prototype is implemented, together with experimental results, as a proof of concept for the proposed framework. The classification accuracy and facial detection results are recorded and tabulated. Based on the results, the proposed framework yields 82.22% classification accuracy and a 96% facial detection rate, with an appropriate audio feedback mechanism in place. The project's conclusions, limitations, and future work are discussed at the end of this research.
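The abstract does not specify how eye closure is detected. A common technique in this problem area — shown here only as an illustrative sketch, not necessarily the method used in the report — is the eye aspect ratio (EAR): the ratio of the vertical to horizontal distances between six eye landmarks, which drops toward zero as the eye closes. The threshold and frame-count values below are hypothetical placeholders.

```python
import math

def eye_aspect_ratio(eye):
    """Compute the eye aspect ratio (EAR) from six (x, y) eye landmarks.

    Landmarks follow the common p1..p6 ordering: p1/p4 are the horizontal
    eye corners, p2/p3 the upper lid, and p6/p5 the lower lid.
    EAR falls toward zero as the eyelids close.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

# Illustrative heuristic: flag drowsiness when EAR stays below a threshold
# for several consecutive frames (both constants are assumptions).
EAR_THRESHOLD = 0.2
CLOSED_FRAMES_LIMIT = 15

def is_drowsy(ear_history):
    """Return True if EAR was below threshold for enough consecutive frames."""
    closed = 0
    for ear in ear_history:
        closed = closed + 1 if ear < EAR_THRESHOLD else 0
        if closed >= CLOSED_FRAMES_LIMIT:
            return True
    return False
```

In a mobile/cloud split like the one the abstract proposes, the landmark extraction (the expensive image-processing step) could run in the cloud, while a lightweight check such as `is_drowsy` and the audio alert run on the device.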