Adaptive Policy Gradient in Multiagent Learning

Bikramjit Banerjee, Jing Peng

Research output: Contribution to conference › Paper › peer-review

44 Scopus citations

Abstract

Inspired by recent results on policy gradient learning in general-sum games, embodied in two algorithms, IGA and WoLF-IGA, we explore an alternative version of WoLF. We show that our new WoLF criterion (PDWoLF) is also accurate in 2 × 2 games, while remaining accurately computable even in games with more than 2 actions, unlike WoLF, which relies on estimation. In particular, we show that this difference in accuracy in games with more than 2 actions translates into faster convergence (to Nash equilibrium policies in self-play) for PDWoLF in conjunction with the general Policy Hill Climbing algorithm. Interestingly, this speedup becomes more pronounced with an increasing learning rate ratio, for which we also offer an explanation. We also show experimentally that learning faster with PDWoLF can entail learning better policies earlier in self-play. Finally, we present a scalable version of PDWoLF and show that even in domains requiring generalization and approximation, PDWoLF can dominate WoLF in performance.
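
To make the setting concrete, the sketch below shows a generic Policy Hill Climbing (PHC) learner for a repeated matrix game whose policy step size switches between a small "winning" rate and a large "losing" rate; this is the win-or-learn-fast mechanism that both WoLF and PDWoLF plug into. The winning test used here, based on the policy's own first and second differences, only gestures at the policy-dynamics idea behind PDWoLF: the class name, the parameters, and the exact test are illustrative assumptions, not the algorithm from the paper.

import numpy as np

# Illustrative sketch only: a Policy Hill Climbing (PHC) learner for a
# single-state (repeated matrix) game with a variable policy learning rate.
# The "winning" test is a loose stand-in for a policy-dynamics criterion;
# all names, parameters, and the test itself are assumptions for illustration.

class VariableRatePHC:
    def __init__(self, n_actions, alpha=0.1, delta_win=0.01, delta_lose=0.04):
        self.n = n_actions
        self.alpha = alpha               # Q-value learning rate
        self.delta_win = delta_win       # small policy step when "winning"
        self.delta_lose = delta_lose     # large policy step when "losing"
        self.Q = np.zeros(n_actions)
        self.pi = np.full(n_actions, 1.0 / n_actions)
        self.first_diff = np.zeros(n_actions)    # last policy change
        self.second_diff = np.zeros(n_actions)   # change of that change

    def act(self, rng):
        # Sample an action from the current mixed policy.
        return int(rng.choice(self.n, p=self.pi))

    def update(self, action, reward):
        # Single-state Q update (a repeated matrix game has no next state).
        self.Q[action] += self.alpha * (reward - self.Q[action])

        # Assumed policy-dynamics test: call the learner "winning" when the
        # stored first and second differences of the policy oppose each other.
        winning = float(np.dot(self.first_diff, self.second_diff)) < 0.0
        step_size = self.delta_win if winning else self.delta_lose

        # Hill-climb toward the currently greedy action, then renormalize.
        target = np.eye(self.n)[int(np.argmax(self.Q))]
        step = step_size * (target - self.pi)
        self.pi = np.clip(self.pi + step, 1e-6, None)
        self.pi /= self.pi.sum()

        # Record the policy dynamics for the next winning/losing decision.
        self.second_diff = step - self.first_diff
        self.first_diff = step

In self-play, two such learners would repeatedly call act() against a shared payoff matrix and feed their own rewards back through update(); the larger losing rate speeds escape from exploitable policies, while the smaller winning rate stabilizes convergence.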

Original language: English
Pages: 686-692
Number of pages: 7
DOIs
State: Published - 2003
Event: Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS 03 - Melbourne, Vic., Australia
Duration: 14 Jul 2003 to 18 Jul 2003

Other

Other: Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS 03
Country/Territory: Australia
City: Melbourne, Vic.
Period: 14/07/03 to 18/07/03

Keywords

  • Game Theory
  • Gradient Ascent Learning
  • Nash Equilibria
