
Please use this identifier to cite or link to this item: http://hdl.handle.net/10525/851

Title: Local Goals Driven Hierarchical Reinforcement Learning
Authors: Pchelkin, Arthur
Keywords: Reinforcement Learning; Hierarchical Behaviour; Efficient Exploration; POMDPs; Non-Markov; Local Goals; Internal Reward; Subgoal Learning
Issue Date: 2004
Publisher: Institute of Information Theories and Applications FOI ITHEA
Abstract: Efficient exploration is of fundamental importance for autonomous agents that learn to act. Previous approaches to exploration in reinforcement learning usually address the case in which the environment is fully observable. In contrast, the current paper, like its predecessor [Pch2003], studies the case in which the environment is only partially observable, and considers an additional difficulty: complex temporal dependencies. To overcome this difficulty, a new hierarchical reinforcement learning algorithm is proposed. It uses a very simple learning principle, similar to Q-learning, except that the lookup table is indexed by one additional variable: the currently selected goal. The algorithm also uses internal reward for achieving hard-to-reach states [Pch2003]. The proposed algorithm is investigated experimentally in partially observable maze problems, where it shows a robust ability to learn a good policy.
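To make the abstract's learning rule concrete, the following is a minimal Python sketch of a goal-conditioned tabular update of the kind described: ordinary Q-learning, except that the table is indexed by the currently selected goal as well as the observation and action, with an internal reward added when that goal is reached. The goal-selection heuristic (low visit count as a proxy for "hard to reach"), the reward magnitude, and all names here are illustrative assumptions, not the paper's exact method; see the PDF below for the actual algorithm.

    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
    INTERNAL_REWARD = 1.0      # assumed bonus for achieving the selected goal
    ACTIONS = range(4)         # e.g. the four moves of a grid maze

    Q = defaultdict(float)     # lookup table: Q[(observation, goal, action)]
    visits = defaultdict(int)  # visit counts, used to pick hard-to-reach goals

    def select_action(obs, goal):
        """Epsilon-greedy over the goal-conditioned Q-values."""
        if random.random() < EPSILON:
            return random.choice(list(ACTIONS))
        return max(ACTIONS, key=lambda a: Q[(obs, goal, a)])

    def select_goal():
        """Pick a rarely visited observation as the local goal
        (assumption: 'hard to reach' ~ low visit count)."""
        return min(visits, key=visits.get) if visits else None

    def update(obs, goal, action, reward, next_obs, goal_reached):
        """One Q-learning step on the goal-indexed table; an internal
        reward is added when the currently selected goal is achieved."""
        r = reward + (INTERNAL_REWARD if goal_reached else 0.0)
        best_next = max(Q[(next_obs, goal, a)] for a in ACTIONS)
        key = (obs, goal, action)
        Q[key] += ALPHA * (r + GAMMA * best_next - Q[key])
        visits[next_obs] += 1

In a full agent loop, select_goal would be invoked whenever the current goal is achieved or abandoned, so that behaviour is organised hierarchically around a sequence of local goals.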
Description: * This research was partially supported by the Latvian Science Foundation under grant No. 02-86d.
URI: http://hdl.handle.net/10525/851
ISSN: 1313-0463
Appears in Collections: Volume 11 Number 1

Files in This Item:

File: ijita11-1-p17.pdf (132.2 kB, Adobe PDF)

 


