Title:
Transferring Task Models in Reinforcement Learning Agents
Author(s):
A. Fachantidis, I. Partalas, G. Tsoumakas, I. Vlahavas.

Availability:
Download the PDF (Acrobat Reader) file (21 pages).

Keywords:

Appeared in:
Neurocomputing, Elsevier, vol. 107, pp. 23-32, 2013.

Abstract:
The main objective of Transfer Learning is to reuse knowledge acquired in a previously learned
task in order to enhance the learning procedure in a new and more complex task. Transfer learning
thus constitutes a suitable approach for speeding up learning in Reinforcement Learning tasks.
In this work, we propose a novel method for transferring models to Reinforcement Learning agents:
the models of the transition and reward functions of a source task are transferred to a relevant but
different target task. The learning algorithm of the target task's agent takes a hybrid approach,
implementing both model-free and model-based learning, in order to fully exploit the presence of a
source task model. Moreover, a novel method is proposed for transferring models of potential-based
reward shaping functions.
The empirical evaluation of the proposed approaches demonstrates significant performance
improvements in the 3D Mountain Car and Server Job Scheduling tasks, achieved by successfully
using the models generated from their corresponding source tasks.
See also: