Extreme action recognition from real-time video using time-series deep learning model

The development of an extreme action recognition model to automate police surveillance can improve police deployment speed to crime scenes such as assault, robbery, kidnapping and other offences. However, existing extreme action recognition solutions cannot yet be deployed with high confidence. This study proposes a time-series deep learning model for extreme action recognition, built from an efficient dual-stream Convolutional Neural Network integrated with a Convolutional Long Short-Term Memory. Notably, it makes a novel attempt to employ background-subtracted pose keypoints as input for recognition. The proposed method demonstrated improved resistance to background noise when tested on the Hockey, Movies, Violent-Flows and RWF-2000 datasets. An ablation study shows that complementing the RGB frame difference with pose keypoints improves the framework's accuracy. The proposed framework is comparable to existing state-of-the-art methods, achieving 87.00% accuracy on the RWF-2000 dataset, 100% on the Movies dataset, 97.00% on the Hockey dataset and 92% on the Violent-Flows dataset. The findings of this study hold considerable potential to advance the current framework of extreme action recognition.
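The RGB frame-difference stream mentioned in the abstract can be illustrated with a minimal sketch: subtracting consecutive frames suppresses the static background and keeps moving regions, which is what makes it a useful complement to pose keypoints. The function below is an illustrative assumption, not the thesis's actual preprocessing code; frames are assumed to be NumPy arrays of shape (T, H, W, 3).

```python
import numpy as np

def frame_difference(frames: np.ndarray) -> np.ndarray:
    """Absolute RGB differences between consecutive frames.

    frames: (T, H, W, 3) uint8 video clip.
    Returns a (T-1, H, W, 3) float array; static background pixels
    go to zero while moving regions keep large values.
    """
    f = frames.astype(np.float32)  # avoid uint8 wrap-around on subtraction
    return np.abs(f[1:] - f[:-1])

# Tiny synthetic clip: a bright square moves one pixel right per frame.
clip = np.zeros((3, 8, 8, 3), dtype=np.uint8)
for t in range(3):
    clip[t, 2:5, 2 + t:5 + t, :] = 255

diff = frame_difference(clip)
print(diff.shape)        # (2, 8, 8, 3)
print((diff > 0).any())  # True: motion shows up at the square's edges
```

In a dual-stream setup of the kind the abstract describes, stacks of such difference maps would feed one CNN branch while pose keypoints feed the other, with the ConvLSTM modelling the temporal dimension.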

Bibliographic Details
Main Author: Goh, Qing Hao
Format: Final Year Project / Dissertation / Thesis
Published: 2021
Subjects: TJ Mechanical engineering and machinery
Online Access: http://eprints.utar.edu.my/5384/1/1700099_Final_%2D_QING_HAO_GOH.pdf
http://eprints.utar.edu.my/5384/
Institution: Universiti Tunku Abdul Rahman
Building: UTAR Library
Collection: Institutional Repository
Content Source: UTAR Institutional Repository
Country: Malaysia
Record ID: my-utar-eprints.5384
Review Status: Non peer reviewed
File Format: application/pdf
Citation: Goh, Qing Hao (2021) Extreme action recognition from real-time video using time-series deep learning model. Final Year Project, UTAR.