Abstract
We provide a uniform framework for learning against a recent-history adversary in arbitrary repeated bimatrix games by modeling such an opponent as a Markov Decision Process (MDP). We focus on learning an optimal non-stationary policy in such an MDP over a finite horizon, and we adapt an existing efficient Monte Carlo-based algorithm for learning optimal policies in such MDPs. We show that this new efficient algorithm can obtain higher average rewards than a previously known efficient algorithm against some opponents in the contract game. Although this improvement comes at the cost of requiring additional domain knowledge, a simple experiment in the Prisoner's Dilemma game shows that even when no extra domain knowledge is assumed (beyond knowing the opponent's memory size), the error can still be small.
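The sketch below is not the paper's Monte Carlo algorithm; it is a minimal illustration, under assumed payoffs and an assumed memory-1 opponent (Tit-for-Tat), of how a recent-history opponent in the repeated Prisoner's Dilemma induces an MDP whose states are the last joint action, and of how an optimal non-stationary finite-horizon policy can be computed by backward induction when that opponent model is fully known.

```python
# Minimal sketch (not the paper's algorithm): a memory-1 opponent in the
# repeated Prisoner's Dilemma induces an MDP whose states are the last joint
# action. With a known, deterministic opponent model, an optimal
# non-stationary policy over a finite horizon follows by backward induction.
# Payoffs and the opponent model here are illustrative assumptions.

C, D = 0, 1  # cooperate, defect
PAYOFF = {  # (my action, opponent action) -> my reward
    (C, C): 3, (C, D): 0, (D, C): 5, (D, D): 1,
}

def opponent_action(state):
    """Hypothetical memory-1 opponent (Tit-for-Tat): repeat my last action."""
    my_last, _opp_last = state
    return my_last

def optimal_nonstationary_policy(horizon, states):
    """Backward induction over the induced MDP."""
    V = {s: 0.0 for s in states}  # value after the final round
    policy = []                   # policy[t][state] -> my action at step t
    for _t in reversed(range(horizon)):
        step_policy, V_new = {}, {}
        for s in states:
            o = opponent_action(s)  # opponent reacts to the previous round
            best = None
            for a in (C, D):
                q = PAYOFF[(a, o)] + V[(a, o)]  # immediate reward + future value
                if best is None or q > best[0]:
                    best = (q, a)
            V_new[s], step_policy[s] = best
        V, policy = V_new, [step_policy] + policy
    return policy

states = [(i, j) for i in (C, D) for j in (C, D)]
pi = optimal_nonstationary_policy(horizon=5, states=states)
print(pi[0][(C, C)])  # optimal first action when the last joint action was (C, C)
```

Against a deterministic memory-bounded opponent the induced MDP is small (states are the possible recent histories), which is what makes exact backward induction feasible in this toy setting; the paper's Monte Carlo-based approach instead targets the general case where such a model must be learned from play.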
| Original language | English |
| --- | --- |
| Pages | 209-215 |
| Number of pages | 7 |
| State | Published - 2005 |
| Event | 4th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 05 - Utrecht, Netherlands. Duration: 25 Jul 2005 → 29 Jul 2005 |
Other
| Other | 4th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 05 |
| --- | --- |
| Country/Territory | Netherlands |
| City | Utrecht |
| Period | 25/07/05 → 29/07/05 |
Keywords
- Efficient Learning
- Game Theory
- Multiagent Learning