Classification of facial part movement acquired from Kinect V1 and Kinect V2
Main Authors:
Format: Conference or Workshop Item
Language: English
Published: Springer, 2021
Online Access:
http://umpir.ump.edu.my/id/eprint/33563/1/Classification%20of%20facial%20part%20movement%20acquired%20from%20Kinect%20V1%20.pdf
http://umpir.ump.edu.my/id/eprint/33563/2/Classification%20of%20facial%20part%20movement%20acquired%20from%20Kinect%20V1_FULL.pdf
http://umpir.ump.edu.my/id/eprint/33563/
https://doi.org/10.1007/978-981-15-5281-6_65
Summary: The aim of this study is to determine which motion sensor, Kinect v1 or Kinect v2, performs better in recognising facial part movements. Several classification methods, namely neural network, complex decision tree, cubic SVM, fine Gaussian SVM, fine kNN and QDA, were applied to the datasets obtained from Kinect v1 and Kinect v2. Facial part movements were detected and extracted into 11 features and 15 classes, and the chosen classifiers were then trained and tested on each dataset. The Kinect sensor whose dataset yields the highest testing accuracy will be selected, on the basis of tracking performance and detection accuracy, for developing an assistive facial exercise application.
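The summary names six classifiers evaluated on 11-feature, 15-class datasets from the two sensors. The following minimal sketch, in Python with scikit-learn and synthetic placeholder data, only illustrates how such a comparison of testing accuracies could be set up; the actual Kinect feature extraction, class labels, train/test split, and classifier hyperparameters are not specified in this record and are assumptions here.

```python
# Hypothetical sketch: compare the classifiers named in the summary on two
# feature sets (one per Kinect sensor). The data below is synthetic; the real
# study uses 11 facial-movement features and 15 movement classes per sensor.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.metrics import accuracy_score


def placeholder_dataset(seed):
    # Stand-in for the Kinect feature extraction: 11 features, 15 classes.
    return make_classification(n_samples=1500, n_features=11, n_informative=8,
                               n_classes=15, n_clusters_per_class=1,
                               random_state=seed)


# Rough scikit-learn analogues of the classifier names used in the summary.
classifiers = {
    "neural network":        MLPClassifier(max_iter=1000, random_state=0),
    "complex decision tree": DecisionTreeClassifier(random_state=0),
    "cubic SVM":             SVC(kernel="poly", degree=3),
    "fine Gaussian SVM":     SVC(kernel="rbf", gamma="scale"),
    "fine kNN":              KNeighborsClassifier(n_neighbors=1),
    "QDA":                   QuadraticDiscriminantAnalysis(),
}

for sensor, seed in [("Kinect v1", 1), ("Kinect v2", 2)]:
    X, y = placeholder_dataset(seed)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
    for name, clf in classifiers.items():
        clf.fit(X_tr, y_tr)
        acc = accuracy_score(y_te, clf.predict(X_te))
        print(f"{sensor:10s} {name:22s} test accuracy = {acc:.3f}")
```

With the real feature tables in place of the synthetic data, the per-sensor testing accuracies printed at the end would support the kind of sensor selection the study describes.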