Vision-Based Autonomous Navigation Approach for a Tracked Robot Using Deep Reinforcement Learning
Tracked robots need to achieve safe autonomous steering in varied, changing environments. In this article, a novel end-to-end network architecture is proposed that enables tracked robots to learn collision-free autonomous navigation through deep reinforcement learning. Specifically, this research reduced the learning time and improved the exploratory behaviour of the robot by normalizing the input data and injecting parametric noise into the network parameters. Features were extracted from four consecutive depth images by deep convolutional neural networks and used to drive the tracked robot. In addition, three Q-variant models were compared in terms of average reward, variance, and dispersion across episodes, and a detailed statistical analysis was performed to measure the reliability of each model. The proposed model, the layer normalisation dueling double deep Q-network (LND3QN), was superior in all environments and, after training in simulation, could be transferred directly to a real robot without any fine-tuning. It also demonstrated outstanding performance in several cluttered real-world environments with both static and dynamic obstacles.
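As a rough illustration of the dueling head the abstract describes, the sketch below shows how layer-normalised features (here standing in for the CNN features extracted from four stacked depth images) would be split into a state-value stream and a mean-centred advantage stream to produce Q-values for steering actions. The feature size, action count, and weight shapes are illustrative assumptions, not values from the paper.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalise activations across the feature dimension (the "LN" in LND3QN).
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def dueling_q(features, w_v, w_a):
    # Dueling head: scalar state value V plus mean-centred advantages A,
    # combined as Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a)).
    h = layer_norm(features)
    v = h @ w_v                # scalar state value
    a = h @ w_a                # one advantage per action
    return v + (a - a.mean())

rng = np.random.default_rng(0)
feat = rng.normal(size=8)      # stand-in for CNN features from 4 stacked depth frames
w_v = rng.normal(size=(8,))    # value-stream weights (hypothetical shape)
w_a = rng.normal(size=(8, 3))  # advantage-stream weights, 3 hypothetical steering actions
q = dueling_q(feat, w_v, w_a)
action = int(np.argmax(q))     # greedy steering command
```

Mean-centring the advantages makes the value/advantage decomposition identifiable; the double-DQN part of LND3QN would additionally select actions with the online network while evaluating them with a target network.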
Main Authors: Ejaz, M.M.; Tang, T.B.; Lu, C.-K.
Format: Article
Published: Institute of Electrical and Electronics Engineers Inc., 2021
Online Access: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85098142899&doi=10.1109%2fJSEN.2020.3016299&partnerID=40&md5=485df683ab0c1b1d1a6d2bb15a4d1b8a ; http://eprints.utp.edu.my/23687/
id: my.utp.eprints.23687
Citation: Ejaz, M.M., Tang, T.B. and Lu, C.-K. (2021) Vision-Based Autonomous Navigation Approach for a Tracked Robot Using Deep Reinforcement Learning. IEEE Sensors Journal, 21 (2), pp. 2230-2240.
institution: Universiti Teknologi Petronas
building: UTP Resource Centre
collection: Institutional Repository
continent: Asia
country: Malaysia
content_source: UTP Institutional Repository
url_provider: http://eprints.utp.edu.my/