Abstract
When several agents learn concurrently, the payoff received by an agent depends on the behavior of the other agents. As the other agents learn, the reward of any one agent becomes non-stationary, which makes learning in multiagent systems more difficult than single-agent learning. A few methods, however, are known to guarantee convergence to equilibrium in the limit in such systems. In this paper we experimentally study one such technique, minimax-Q, in a competitive domain and prove its equivalence with another well-known method for competitive domains. We study the rate of convergence of minimax-Q and investigate possible ways to increase it. We also present a variant of the algorithm, minimax-SARSA, and prove its convergence to minimax-Q values under appropriate conditions. Finally, we show that this new algorithm performs better than simple minimax-Q in a general-sum domain as well.
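For readers unfamiliar with the algorithm the abstract refers to, the core of Littman-style minimax-Q is a tabular Q-learning backup in which the usual max over actions is replaced by the value of the stage game at the next state, obtained from a small linear program. The following is a minimal illustrative sketch of that update; the function names, Q-table shape, and hyperparameters are assumptions made for this example and are not taken from the paper.

```python
# Illustrative sketch of a tabular minimax-Q backup (not the paper's code).
import numpy as np
from scipy.optimize import linprog


def minimax_value(Q_s):
    """Solve max_pi min_o sum_a pi(a) * Q_s[a, o] via a linear program.

    Q_s: (n_actions, n_opponent_actions) payoff matrix for one state.
    Returns (game value, mixed policy pi) for the learning agent.
    """
    n_a, n_o = Q_s.shape
    # Decision variables: pi(a) for each action, plus the scalar value v.
    c = np.zeros(n_a + 1)
    c[-1] = -1.0                       # linprog minimizes, so minimize -v
    # For every opponent action o: v - sum_a pi(a) * Q_s[a, o] <= 0
    A_ub = np.hstack([-Q_s.T, np.ones((n_o, 1))])
    b_ub = np.zeros(n_o)
    # Probabilities must sum to 1.
    A_eq = np.ones((1, n_a + 1))
    A_eq[0, -1] = 0.0
    b_eq = np.ones(1)
    bounds = [(0.0, 1.0)] * n_a + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:n_a]


def minimax_q_update(Q, s, a, o, r, s_next, alpha=0.1, gamma=0.9):
    """One backup: Q[s,a,o] <- (1-alpha) Q[s,a,o] + alpha (r + gamma V(s'))."""
    v_next, _ = minimax_value(Q[s_next])
    Q[s, a, o] = (1 - alpha) * Q[s, a, o] + alpha * (r + gamma * v_next)
    return Q
```

The minimax-SARSA variant discussed in the paper differs in how the next-state value is estimated; the linear-programming step above is what distinguishes both from ordinary single-agent Q-learning.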
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 825-830 |
| Number of pages | 6 |
| Journal | IJCAI International Joint Conference on Artificial Intelligence |
| State | Published - 2001 |
| Event | 17th International Joint Conference on Artificial Intelligence, IJCAI 2001, Seattle, WA, United States, 4 Aug 2001 → 10 Aug 2001 |