Abstract
This paper presents a novel incremental algorithm that combines Q-learning, a well-known dynamic-programming based reinforcement learning method, with the TD(λ) return estimation process, which is typically used in actor-critic learning, another well-known dynamic-programming based reinforcement learning method. The parameter λ is used to distribute credit throughout sequences of actions, leading to faster learning and also helping to alleviate the non-Markovian effect of coarse state-space quantization. The resulting algorithm, Q(λ)-learning, thus combines some of the best features of the Q-learning and actor-critic learning paradigms. The behavior of this algorithm has been demonstrated through computer simulations.
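To make the idea concrete, below is a minimal tabular sketch of Q-learning with eligibility traces in the Watkins style, where the trace parameter λ spreads the one-step Q-learning error back over recently visited state-action pairs. The paper's own variant (Peng's Q(λ)) handles traces somewhat differently; this is an illustration of the general mechanism, not the paper's exact algorithm. The `env` interface, function name, and hyperparameter values are assumptions for the example.

```python
import numpy as np

def q_lambda(env, n_states, n_actions, episodes=500,
             alpha=0.1, gamma=0.95, lam=0.9, epsilon=0.1):
    """Tabular Q(lambda) sketch (Watkins style). Assumes a hypothetical
    env with integer states/actions, env.reset() -> state, and
    env.step(a) -> (next_state, reward, done)."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        e = np.zeros_like(Q)          # eligibility traces, reset per episode
        s = env.reset()
        done = False
        while not done:
            # epsilon-greedy action selection
            greedy_a = int(np.argmax(Q[s]))
            if np.random.rand() < epsilon:
                a = np.random.randint(n_actions)
            else:
                a = greedy_a
            s2, r, done = env.step(a)
            # one-step Q-learning error (off-policy target: max over actions)
            delta = r + (0.0 if done else gamma * Q[s2].max()) - Q[s, a]
            e[s, a] += 1.0            # accumulate trace for the visited pair
            Q += alpha * delta * e    # lambda distributes credit along the trace
            if a == greedy_a:
                e *= gamma * lam      # decay traces after a greedy step
            else:
                e[:] = 0.0            # Watkins: cut traces after exploration
            s = s2
    return Q
```

With λ = 0 this reduces to ordinary one-step Q-learning; larger λ assigns credit further back along the action sequence, which is the source of the faster learning the abstract describes.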
| Original language | English |
| --- | --- |
| Pages (from-to) | 283-290 |
| Number of pages | 8 |
| Journal | Machine Learning |
| Volume | 22 |
| Issue number | 1-3 |
| DOIs | |
| State | Published - 1996 |
Keywords
- Reinforcement learning
- Temporal difference learning