Emergence of discrete and abstract state representation through reinforcement learning in a continuous input task
Link to publisher's homepage at http://link.springer.com/
Main Authors: Sawatsubashi, Yoshito; Mohamad Faizal, Samsudin; Shibata, Katsunari
Format: Article
Language: English
Published: Springer-Verlag, 2014
Subjects: Action planning; Concept formation; Continuous input; Hidden neurons
Online Access: http://dspace.unimap.edu.my:80/dspace/handle/123456789/35395
Record ID: my.unimap-35395
Abstract: "Concept" is a kind of discrete and abstract state representation and is considered useful for efficient action planning. However, concepts are thought to emerge in the brain, a parallel processing and learning system, through learning from a variety of experiences, and so they are difficult to build by hand-coding. In this paper, as a step preceding "concept formation", it is investigated whether a discrete and abstract state representation is formed through learning in a task with multi-step state transitions, using the Actor-Q learning method and a recurrent neural network. After learning, the agent repeated a sequence twice in which it pushed a button to open a door and moved to the next room, finally arriving at the third room to receive a reward. In two hidden neurons, a discrete and abstract state representation that did not depend on the door-opening pattern was observed. A further learning run with two recurrent neural networks, one for the Q-values and one for the Actors, suggested that the state representation emerged in order to generate appropriate Q-values.

Authors: Sawatsubashi, Yoshito; Mohamad Faizal, Samsudin; Shibata, Katsunari
Author contacts: bashis8@yahoo.co.jp; ballack83@hotmail.co.jp; faizalsamsudin@unimap.edu.my
Citation: Advances in Intelligent Systems and Computing, vol. 208, 2013, pages 13-21
ISBN: 978-364237373-2
ISSN: 2194-5357
Publisher link: http://link.springer.com/chapter/10.1007%2F978-3-642-37374-9_2
Repository link: http://dspace.unimap.edu.my:80/dspace/handle/123456789/35395
Deposited: 2014-06-11
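The abstract describes an agent trained with the Actor-Q learning method on a recurrent neural network, with continuous input and multi-step state transitions. For orientation only, the following is a minimal sketch in Python/NumPy of Q-learning on top of an Elman-style recurrent network; it omits the Actor part and everything specific to the paper's rooms-and-buttons task, and the environment interface (`env.reset()`, `env.step()`), layer sizes, and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch only: Q-learning with an Elman-style recurrent network.
# Layer sizes, learning rates, and the `env` interface are assumptions for
# illustration; they are NOT taken from the paper, and the Actor part of
# Actor-Q is omitted.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, N_ACT = 4, 16, 3                  # assumed layer sizes
W_in  = rng.normal(0.0, 0.1, (N_HID, N_IN))    # input -> hidden
W_rec = rng.normal(0.0, 0.1, (N_HID, N_HID))   # hidden -> hidden (recurrence)
W_out = rng.normal(0.0, 0.1, (N_ACT, N_HID))   # hidden -> one Q-value per action

def forward(x, h_prev):
    """One time step: new hidden state and Q-values for each discrete action."""
    h = np.tanh(W_in @ x + W_rec @ h_prev)
    return h, W_out @ h

def td_update(x, h_prev, a, target, lr=0.01):
    """Single-step gradient update pushing Q(x, a) toward the TD target."""
    global W_in, W_rec, W_out
    h, q = forward(x, h_prev)
    err = target - q[a]
    dh = err * W_out[a] * (1.0 - h ** 2)        # backprop through tanh (pre-update weights)
    W_out[a] += lr * err * h                    # output weights of the chosen action
    W_in  += lr * np.outer(dh, x)
    W_rec += lr * np.outer(dh, h_prev)
    return h

def run_episode(env, eps=0.1, gamma=0.9):
    """Epsilon-greedy interaction with an assumed env.reset()/env.step() API."""
    x, done = env.reset(), False
    h_prev = np.zeros(N_HID)
    while not done:
        h, q = forward(x, h_prev)
        a = int(rng.integers(N_ACT)) if rng.random() < eps else int(np.argmax(q))
        x_next, r, done = env.step(a)           # assumed return signature
        _, q_next = forward(x_next, h)
        target = r if done else r + gamma * np.max(q_next)
        td_update(x, h_prev, a, target)
        h_prev, x = h, x_next
    return h_prev                               # final hidden state, inspectable after learning
```

In a setup like the paper's, one would inspect the hidden activations after learning to see whether they settle into a small number of discrete values regardless of the door-opening pattern, as the abstract reports for two hidden neurons.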
Institution: Universiti Malaysia Perlis
Building: UniMAP Library
Collection: Institutional Repository
Continent: Asia
Country: Malaysia
Content provider: Universiti Malaysia Perlis
Content source: UniMAP Library Digital Repository
Provider URL: http://dspace.unimap.edu.my/
Language: English
Topics: Action planning; Concept formation; Continuous input; Hidden neurons
Description: Link to publisher's homepage at http://link.springer.com/
Format: Article
Authors: Sawatsubashi, Yoshito; Mohamad Faizal, Samsudin; Shibata, Katsunari
Publisher: Springer-Verlag
Publish date: 2014
URL: http://dspace.unimap.edu.my:80/dspace/handle/123456789/35395