Classification of Grasp-and-Lift EEG using GoogLeNet


Bibliographic Details
Main Author: Ong, Zhong Yi
Format: Final Year Project
Language: English
Published: IRC 2019
Online Access:http://utpedia.utp.edu.my/20200/1/Final%20Dissertation.pdf
http://utpedia.utp.edu.my/20200/
Description
Summary: A Grasp-and-Lift (GAL) action is the hand movement of lifting an object, holding it for a few seconds, and completing the action by returning the object to its original position. EEG signals are one of the common means of studying the relationship between the brain and GAL actions. Because the relationship between human brain activity and the EEG signal is not yet fully understood, further research is needed. Given the lack of low-cost, practical prosthetic devices for patients suffering from neurological disease, and the low classification accuracy caused by the large number of events and the low Signal-to-Noise Ratio (SNR), GAL EEG signal processing can contribute substantially to prosthetic-device development by providing input to Brain-Computer Interface devices. This research therefore presents a Convolutional Neural Network (CNN)-based deep learning method to classify EEG signals into six GAL classes. The main objective of this research is to develop EEG GAL event classification based on a pretrained CNN, followed by performance evaluation in terms of accuracy, sensitivity and specificity. Six electrodes corresponding to motor movement (C3, CZ, C4, P3, PZ and P4) were selected during the pre-processing phase. A one-versus-rest scheme and a two-class Common Spatial Pattern (CSP) filter were used to maximize the variance difference between two classes. CSP features extracted from each electrode were converted into grayscale scalograms using a sliding-window method, and three grayscale scalograms were concatenated to form an RGB scalogram. One classifier was trained per class, and classification accuracy was computed by feeding test data into the trained network. Based on the results obtained, the average testing accuracy, specificity and sensitivity across the six classes are 93.85%, 96.5% and 91% respectively.
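The two-class CSP filtering described in the abstract can be sketched as follows. This is a minimal NumPy illustration of the standard CSP technique (whitening the composite covariance, then diagonalizing one class's whitened covariance), not the dissertation's actual code; the function names, synthetic data and filter count are illustrative assumptions.

```python
import numpy as np

def csp_filters(X_a, X_b, n_pairs=1):
    """Two-class Common Spatial Pattern filters.

    X_a, X_b: epochs of shape (trials, channels, samples) for each class.
    Returns (2 * n_pairs, channels) spatial filters: the first rows
    maximize class-B variance, the last rows maximize class-A variance.
    """
    def avg_cov(X):
        # average trace-normalized spatial covariance over trials
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)

    C_a, C_b = avg_cov(X_a), avg_cov(X_b)
    # whiten the composite covariance C_a + C_b
    d, U = np.linalg.eigh(C_a + C_b)
    P = np.diag(d ** -0.5) @ U.T
    # diagonalize the whitened class-A covariance (eigenvalues ascending)
    _, B = np.linalg.eigh(P @ C_a @ P.T)
    W = B.T @ P                      # full filter bank, one filter per row
    # keep the extreme filters, which are the most discriminative
    idx = list(range(n_pairs)) + list(range(W.shape[0] - n_pairs, W.shape[0]))
    return W[idx]

def csp_features(W, X):
    """Log of normalized variance of each spatially filtered trial."""
    Z = np.einsum('fc,tcs->tfs', W, X)      # (trials, filters, samples)
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Synthetic demo: class A is strong on channel 0, class B on channel 1.
rng = np.random.default_rng(0)
X_a = rng.standard_normal((20, 6, 200)); X_a[:, 0] *= 5
X_b = rng.standard_normal((20, 6, 200)); X_b[:, 1] *= 5

W = csp_filters(X_a, X_b)
F_a, F_b = csp_features(W, X_a), csp_features(W, X_b)
```

In the dissertation's pipeline these log-variance CSP features would then be rendered as grayscale scalograms via a sliding window and stacked into RGB images for the pretrained CNN; that imaging step is omitted here.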