Abstract
Inspired by recent results on policy gradient learning in general-sum games, in the form of two algorithms, IGA and WoLF-IGA, we explore an alternative version of WoLF. We show that our new WoLF criterion (PDWoLF) is also accurate in 2 × 2 games, while remaining accurately computable even in games with more than two actions, unlike WoLF, which relies on estimation. In particular, we show that this difference in accuracy in games with more than two actions translates to faster convergence (to Nash equilibrium policies in self-play) for PDWoLF in conjunction with the general Policy Hill Climbing algorithm. Interestingly, this speedup becomes more pronounced with an increasing learning rate ratio, for which we also offer an explanation. We also show experimentally that learning faster with PDWoLF can entail learning better policies earlier in self-play. Finally, we present a scalable version of PDWoLF and show that even in domains requiring generalization and approximation, PDWoLF can dominate WoLF in performance.
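As a rough illustration of how a PDWoLF-style criterion might plug into Policy Hill Climbing, the sketch below switches between two learning rates, delta_win and delta_lose, based on the signs of the first- and second-order changes in the policy. The class name, parameter values, and the exact form of the winning test are illustrative assumptions, not the paper's implementation; the original WoLF-PHC rule instead compares the current policy's expected value against an average policy, which is where the estimation mentioned in the abstract comes in.

```python
# Minimal sketch of a PHC-style learner with a variable learning rate.
# Assumption: the agent is treated as "winning" in (s, a) when the last
# policy change and its second-order change have opposite signs.
import random
from collections import defaultdict


class PDWoLFPHC:
    def __init__(self, n_actions, alpha=0.1, gamma=0.9,
                 delta_win=0.01, delta_lose=0.04, epsilon=0.1):
        self.n = n_actions
        self.alpha, self.gamma = alpha, gamma
        self.dw, self.dl = delta_win, delta_lose      # delta_lose > delta_win
        self.eps = epsilon
        self.Q = defaultdict(lambda: [0.0] * n_actions)
        self.pi = defaultdict(lambda: [1.0 / n_actions] * n_actions)
        self.d1 = defaultdict(lambda: [0.0] * n_actions)  # last policy change
        self.d2 = defaultdict(lambda: [0.0] * n_actions)  # change of that change

    def act(self, state):
        # Epsilon-greedy exploration over the mixed policy.
        if random.random() < self.eps:
            return random.randrange(self.n)
        return random.choices(range(self.n), weights=self.pi[state])[0]

    def update(self, s, a, r, s_next):
        # Standard Q-learning backup.
        self.Q[s][a] += self.alpha * (r + self.gamma * max(self.Q[s_next]) - self.Q[s][a])

        # PDWoLF-style win/lose test: opposite signs => winning => cautious step.
        winning = self.d1[s][a] * self.d2[s][a] < 0
        delta = self.dw if winning else self.dl

        # Hill-climb toward the greedy action, keeping the policy on the simplex.
        greedy = max(range(self.n), key=lambda b: self.Q[s][b])
        new_pi = list(self.pi[s])
        for b in range(self.n):
            step = delta if b == greedy else -delta / max(1, self.n - 1)
            new_pi[b] = min(1.0, max(0.0, new_pi[b] + step))
        total = sum(new_pi)
        new_pi = [p / total for p in new_pi]

        # Track first- and second-order policy changes for the next update.
        for b in range(self.n):
            change = new_pi[b] - self.pi[s][b]
            self.d2[s][b] = change - self.d1[s][b]
            self.d1[s][b] = change
        self.pi[s] = new_pi
```

Because the test uses only quantities the learner already maintains (its own policy updates), it stays exactly computable in games with more than two actions, which is the property the abstract contrasts with WoLF's estimated criterion.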
| Original language | English |
| --- | --- |
| Pages | 686-692 |
| Number of pages | 7 |
| DOIs | |
| State | Published - 2003 |
| Event | Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS 03 - Melbourne, Vic., Australia. Duration: 14 Jul 2003 → 18 Jul 2003 |
Other
| Other | Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS 03 |
| --- | --- |
| Country/Territory | Australia |
| City | Melbourne, Vic. |
| Period | 14/07/03 → 18/07/03 |
Keywords
- Game Theory
- Gradient Ascent Learning
- Nash Equilibria