Incremental learning of deep neural network for robust vehicle classification
Main Authors: , , , ,
Format: Article
Language: English
Published: Penerbit Universiti Kebangsaan Malaysia, 2022
Online Access: http://journalarticle.ukm.my/20587/1/11.pdf http://journalarticle.ukm.my/20587/ https://www.ukm.my/jkukm/volume-3405-2022/
Summary: Existing single-lane free flow (SLFF) tolling systems either rely heavily on contact-based treadle sensors to detect the number of vehicle wheels or on manual operators to classify vehicles. While the former incurs high maintenance costs due to wear and tear, the latter is prone to human error. This paper proposes a vision-based solution to SLFF vehicle classification that adapts a state-of-the-art object detection model as the backbone of the proposed framework and uses an incremental training scheme to train our VehicleDetNet in a continual manner, addressing the challenging problem of continuously growing datasets in real-world environments. The evaluation involved four experimental set-ups, the first stage of which used the CUTe datasets. VehicleDetNet serves as the vehicle detection framework; it is an anchorless network, which eliminates the need for candidate anchor bounding boxes. Vehicles are classified by detecting each vehicle's location and inferring its class. We augment the model with a wheel detector and enumerator for added robustness, which yields improved performance. The proposed method was evaluated on a live dataset collected from the Gombak toll plaza on the Kuala Lumpur-Karak Expressway. Within two months of observation, the mean accuracy increased from 87.3% to 99.07%, demonstrating the efficacy of our proposed method.
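The abstract describes classification by detection (locating the vehicle and inferring its class) refined by a wheel detector and enumerator. The sketch below illustrates one plausible way such a refinement step could be wired together; the class labels, wheel-count rules, and data structures are illustrative assumptions, not the authors' published VehicleDetNet implementation.

```python
# Hypothetical sketch: combine a vehicle detector's class prediction with a
# wheel detector's count to refine the toll class. All names, thresholds,
# and structures here are assumptions made for illustration only.

from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in image coordinates


@dataclass
class VehicleDetection:
    box: Box
    predicted_class: str  # class inferred directly by the anchorless detector
    score: float


def box_center_inside(inner: Box, outer: Box) -> bool:
    """Check whether the centre of `inner` lies inside `outer`."""
    cx = (inner[0] + inner[2]) / 2.0
    cy = (inner[1] + inner[3]) / 2.0
    return outer[0] <= cx <= outer[2] and outer[1] <= cy <= outer[3]


def count_wheels(vehicle_box: Box, wheel_boxes: List[Box]) -> int:
    """Enumerate detected wheels whose centres fall inside the vehicle box."""
    return sum(box_center_inside(w, vehicle_box) for w in wheel_boxes)


def refine_class(det: VehicleDetection, wheel_boxes: List[Box]) -> str:
    """Merge the detector's class with the wheel count (assumed toll rules)."""
    wheels = count_wheels(det.box, wheel_boxes)
    # Hypothetical mapping from visible wheel count (one side) to a toll class.
    if wheels >= 4:
        return "class3_heavy"
    if wheels == 3:
        return "class2_medium"
    return det.predicted_class  # fall back to the detector's own prediction


if __name__ == "__main__":
    det = VehicleDetection(box=(100, 200, 700, 450),
                           predicted_class="class1_light", score=0.92)
    wheels = [(150, 400, 210, 460), (380, 400, 440, 460), (600, 400, 660, 460)]
    print(refine_class(det, wheels))  # -> class2_medium under these assumed rules
```

In this sketch the wheel count only overrides the detector's class when enough wheels are confidently localized inside the vehicle's bounding box, which is one simple way a wheel enumerator could add robustness without discarding the detector's prediction.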