Solving the optimal path planning of a mobile robot using improved Q-learning


Bibliographic Details
Main Authors: Low, Ee Soong, Ong, Pauline, Cheah, Kah Chun
Format: Article
Language: English
Published: Elsevier 2019
Subjects:
Online Access:http://eprints.uthm.edu.my/4217/1/AJ%202019%20%28253%29.pdf
http://eprints.uthm.edu.my/4217/
https://doi.org/10.1016/j.robot.2019.02.013
Description
Summary: Q-learning, a type of reinforcement learning, has recently gained increasing popularity in autonomous mobile robot path planning, due to its self-learning ability without requiring an a priori model of the environment. Yet, despite this advantage, Q-learning exhibits slow convergence to the optimal solution. To address this limitation, the concept of partially guided Q-learning is introduced, wherein the flower pollination algorithm (FPA) is used to improve the initialization of Q-learning. Experimental evaluation of the proposed improved Q-learning in challenging environments with different obstacle layouts shows that convergence can be accelerated when the Q-values are initialized appropriately using the FPA. Additionally, the effectiveness of the proposed algorithm is validated in a real-world experiment using a three-wheeled mobile robot.
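The core idea in the abstract is that tabular Q-learning converges faster when the Q-table starts from informed values rather than zeros. The following is a minimal illustrative sketch, not the paper's implementation: the FPA-derived initialization is replaced here by a hypothetical Manhattan-distance heuristic on a toy grid world, and the grid size, rewards, and hyperparameters are all assumptions for demonstration only.

```python
import random

N = 5                                  # toy N x N grid (assumption)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
GOAL = (N - 1, N - 1)

def heuristic_init():
    """Stand-in for the paper's FPA-based initialization: seed each Q-value
    with the negated Manhattan distance of the resulting cell to the goal,
    so goal-directed actions start with higher values."""
    q = {}
    for r in range(N):
        for c in range(N):
            for a, (dr, dc) in enumerate(ACTIONS):
                nr, nc = r + dr, c + dc
                if 0 <= nr < N and 0 <= nc < N:
                    q[(r, c, a)] = -(abs(GOAL[0] - nr) + abs(GOAL[1] - nc))
                else:
                    q[(r, c, a)] = -2 * N  # discourage leaving the grid
    return q

def step(state, a):
    """Deterministic transition with a small step cost and a goal reward."""
    dr, dc = ACTIONS[a]
    nr, nc = state[0] + dr, state[1] + dc
    if not (0 <= nr < N and 0 <= nc < N):
        return state, -1.0             # bumped into the boundary
    s2 = (nr, nc)
    return s2, (10.0 if s2 == GOAL else -0.1)

def train(q, episodes=300, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Standard epsilon-greedy tabular Q-learning update."""
    rng = random.Random(seed)
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(4 * N * N):
            if rng.random() < eps:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda b: q[(s[0], s[1], b)])
            s2, r = step(s, a)
            best_next = max(q[(s2[0], s2[1], b)] for b in range(4))
            q[(s[0], s[1], a)] += alpha * (r + gamma * best_next - q[(s[0], s[1], a)])
            s = s2
            if s == GOAL:
                break
    return q

def greedy_path(q, limit=50):
    """Follow the greedy policy from the start until the goal or a step limit."""
    s, path = (0, 0), [(0, 0)]
    while s != GOAL and len(path) < limit:
        a = max(range(4), key=lambda b: q[(s[0], s[1], b)])
        s, _ = step(s, a)
        path.append(s)
    return path
```

With an informed initialization, the greedy policy is already biased toward the goal, so fewer episodes are needed before the learned path becomes optimal; the same loop started from an all-zero Q-table typically needs more exploration to reach the same result.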