Fast concurrent reinforcement learners

Bikramjit Banerjee, Sandip Sen, Jing Peng

Research output: Contribution to journal › Conference article › peer-review

Abstract

When several agents learn concurrently, the payoff received by an agent depends on the behavior of the other agents. As the other agents learn, the reward signal seen by any one agent becomes non-stationary, which makes learning in multiagent systems harder than single-agent learning. A few methods, however, are known to guarantee convergence to equilibrium in the limit in such systems. In this paper we experimentally study one such technique, minimax-Q, in a competitive domain and prove its equivalence with another well-known method for competitive domains. We study the rate of convergence of minimax-Q and investigate possible ways of increasing it. We also present a variant of the algorithm, minimax-SARSA, and prove its convergence to minimax-Q values under appropriate conditions. Finally, we show that this new algorithm also outperforms simple minimax-Q in a general-sum domain.
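The minimax-Q algorithm at the heart of the paper is Littman's (1994) extension of Q-learning to two-player zero-sum Markov games: the max over actions in the ordinary Q-learning backup is replaced by the value of the stage game at the next state, obtained by linear programming. The sketch below illustrates that standard backup only; it is not code from the paper, and the function names, data layout, and hyperparameter values (solve_maximin, alpha, gamma) are illustrative assumptions.

    import numpy as np
    from scipy.optimize import linprog

    def solve_maximin(q):
        # q: |A| x |O| payoff matrix for the learner in a zero-sum stage game.
        # Solve max_pi min_o sum_a pi[a] * q[a, o] as a linear program.
        n_a, n_o = q.shape
        # Decision variables: [v, pi_0, ..., pi_{n_a-1}]; maximize v = minimize -v.
        c = np.concatenate(([-1.0], np.zeros(n_a)))
        # For every opponent action o: v - sum_a pi[a] * q[a, o] <= 0.
        A_ub = np.hstack([np.ones((n_o, 1)), -q.T])
        b_ub = np.zeros(n_o)
        # The policy pi must be a probability distribution.
        A_eq = np.concatenate(([0.0], np.ones(n_a))).reshape(1, -1)
        b_eq = np.array([1.0])
        bounds = [(None, None)] + [(0.0, 1.0)] * n_a  # v free, each pi[a] in [0, 1]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return res.x[0], res.x[1:]  # game value v and maximin policy pi

    def minimax_q_update(Q, V, s, a, o, r, s_next, alpha=0.1, gamma=0.9):
        # One minimax-Q backup after observing (s, a, o, r, s_next):
        # Q(s,a,o) <- (1-alpha) * Q(s,a,o) + alpha * (r + gamma * V(s')).
        Q[s][a, o] += alpha * (r + gamma * V[s_next] - Q[s][a, o])
        # Re-solve the stage game at s to refresh V(s) and the policy there.
        V[s], pi_s = solve_maximin(Q[s])
        return pi_s

    # Example: matching pennies has value 0 and a uniform maximin policy.
    v, pi = solve_maximin(np.array([[1.0, -1.0], [-1.0, 1.0]]))  # v ~ 0, pi ~ [0.5, 0.5]

The minimax-SARSA variant introduced in the paper replaces this off-policy backup with a SARSA-style on-policy one; per the abstract, it converges to the minimax-Q values under appropriate conditions.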

Original language: English
Pages (from-to): 825-830
Number of pages: 6
Journal: IJCAI International Joint Conference on Artificial Intelligence
State: Published - 2001
Event: 17th International Joint Conference on Artificial Intelligence, IJCAI 2001, Seattle, WA, United States
Duration: 4 Aug 2001 to 10 Aug 2001
