Incremental multi-step Q-learning

Jing Peng, Ronald J. Williams

Research output: Contribution to journal › Article › peer-review

197 Scopus citations

Abstract

This paper presents a novel incremental algorithm that combines Q-learning, a well-known dynamic-programming based reinforcement learning method, with the TD(λ) return estimation process, which is typically used in actor-critic learning, another well-known dynamic-programming based reinforcement learning method. The parameter λ is used to distribute credit throughout sequences of actions, leading to faster learning and also helping to alleviate the non-Markovian effect of coarse state-space quantization. The resulting algorithm, Q(λ)-learning, thus combines some of the best features of the Q-learning and actor-critic learning paradigms. The behavior of this algorithm has been demonstrated through computer simulations.
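To make the credit-distribution idea concrete, below is a minimal tabular sketch of Q(λ)-learning with eligibility traces in Python/NumPy. It follows the common Watkins-style variant (which cuts traces after exploratory actions) rather than reproducing Peng and Williams's exact update, which handles non-greedy actions differently; the `reset`/`step` environment interface, function names, and parameter defaults are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def epsilon_greedy(Q, s, n_actions, eps, rng):
    """Pick a random action with probability eps, else the greedy one."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[s]))

def q_lambda_episode(Q, reset, step, n_actions,
                     alpha=0.1, gamma=0.95, lam=0.9, eps=0.1, rng=None):
    """One episode of tabular Watkins-style Q(lambda).

    Q      : (n_states, n_actions) value table, updated in place.
    reset  : reset() -> initial state (hypothetical env interface).
    step   : step(s, a) -> (next_state, reward, done).
    """
    rng = rng or np.random.default_rng()
    E = np.zeros_like(Q)               # eligibility trace per (state, action)
    s = reset()
    a = epsilon_greedy(Q, s, n_actions, eps, rng)
    done = False
    while not done:
        s2, r, done = step(s, a)
        a2 = epsilon_greedy(Q, s2, n_actions, eps, rng)
        greedy = int(np.argmax(Q[s2]))
        # One-step TD error toward the greedy successor value.
        delta = r + (0.0 if done else gamma * Q[s2, greedy]) - Q[s, a]
        E[s, a] = 1.0                  # replacing trace for the current pair
        Q += alpha * delta * E         # lambda spreads the error along traces
        if a2 == greedy:
            E *= gamma * lam           # decay traces while acting greedily
        else:
            E[:] = 0.0                 # cut traces after an exploratory action
        s, a = s2, a2
    return Q
```

With λ = 0 this reduces to ordinary one-step Q-learning; larger λ propagates each TD error further back along recently visited state-action pairs, which is the faster credit assignment described in the abstract.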

Original language: English
Pages (from-to): 283-290
Number of pages: 8
Journal: Machine Learning
Volume: 22
Issue number: 1-3
State: Published - 1996

Keywords

  • Reinforcement learning
  • Temporal difference learning
