A Study on Abstract Policy for Acceleration of Reinforcement Learning

Bibliographic Details
Main Authors: Ahmad Afif Mohd Faudzi, Hirotaka Takano, Junichi Murata
Format: Conference or Workshop Item
Language: English
Published: 2014
Online Access: http://umpir.ump.edu.my/id/eprint/7452/1/A_Study_on_Abstract_Policy_for_Acceleration_of_Reinforcement_Learning.pdf
http://umpir.ump.edu.my/id/eprint/7452/
http://dx.doi.org/10.1109/SICE.2014.6935300
Description
Summary: Reinforcement learning (RL) is well known as one of the methods that can be applied to unknown problems. However, because optimization at every state requires trial and error, the learning time becomes large when the environment has many states. If solutions to similar problems exist and are used during exploration, some of the trial and error can be spared and the learning can take a shorter time. In this paper, the authors propose to reuse an abstract policy, a representative of a solution constructed by a learning vector quantization (LVQ) algorithm, to improve the initial performance of an RL learner in a similar but different problem. Furthermore, it is investigated whether or not the policy can adapt to a new environment while preserving its performance in the old environments. Simulations show good results in terms of learning acceleration and adaptation of the abstract policy.
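
The summary describes reusing an LVQ-based abstract policy to bias exploration in a new but similar task. The following Python sketch only illustrates that general idea and is not the authors' implementation; the names AbstractPolicy and epsilon_greedy_with_prior, the nearest-prototype action rule, and the epsilon-greedy integration are assumptions made for demonstration.

    import numpy as np

    class AbstractPolicy:
        """Hypothetical abstract policy: LVQ codebook vectors, each labelled with an action."""
        def __init__(self, codebook, labels):
            self.codebook = np.asarray(codebook, dtype=float)  # prototype state vectors, shape (K, D)
            self.labels = np.asarray(labels)                    # action index per prototype, shape (K,)

        def suggest(self, state):
            # 1-nearest-prototype rule: return the action of the closest codebook vector.
            distances = np.linalg.norm(self.codebook - np.asarray(state, dtype=float), axis=1)
            return int(self.labels[np.argmin(distances)])

    def epsilon_greedy_with_prior(q_values, state, abstract_policy, epsilon=0.1):
        # Exploration biased by the reused abstract policy: exploratory steps follow
        # the prototype's suggestion instead of a uniformly random action.
        if np.random.rand() < epsilon:
            return abstract_policy.suggest(state)
        return int(np.argmax(q_values))

    # Example usage (toy values): two prototype states, each labelled with an action index.
    policy = AbstractPolicy(codebook=[[0.0, 0.0], [1.0, 1.0]], labels=[0, 1])
    action = epsilon_greedy_with_prior(np.zeros(2), state=[0.9, 1.2], abstract_policy=policy)

In this sketch the abstract policy only shapes exploration early in learning; the RL learner's own value estimates still determine greedy actions, which is one plausible way to obtain the initial-performance improvement the abstract mentions.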