Gal A. Kaminka,
Dan Erusalimchik, and
Sarit Kraus.
Adaptive Multi-Robot Coordination: A Game-Theoretic Perspective.
In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA-10), 2010.
A slightly different version of this paper also appears in Proceedings of
the Adaptive Agents Workshop of the AAMAS 2009 Conference.
Multi-robot systems researchers have been investigating adaptive coordination methods for improving spatial coordination in teams. Such methods adapt the coordination method to the dynamic changes in density of the robots. Unfortunately, while their empirical success is evident, none of these methods has been understood in the context of existing formal work on multi-robot learning. This paper presents a reinforcement-learning approach to coordination algorithm selection, which is not only shown to work well in experiments, but is also analytically grounded. We present a reward function (Effectiveness Index, EI) that reduces the time and resources spent coordinating and maximizes the time between conflicts that require coordination. It does this by measuring the resource spending velocity. We empirically show its success in several domains, including robots in virtual worlds, simulated robots, and physical AIBO robots executing foraging. In addition, we analytically explore the reasons that EI works well. We show that under some assumptions, spatial coordination opportunities can be modeled as matrix games in which the payoffs to the robots are unknown but are directly a function of EI estimates. The use of reinforcement learning leads to robots maximizing their EI rewards in equilibrium. This work is a step towards bridging the gap between the theoretical study of interactions and their use in multi-robot coordination.
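As a rough illustration of the idea summarized above (not the authors' implementation), the sketch below shows an EI-style reward driving epsilon-greedy selection among a handful of coordination methods: the reward is the negated rate of coordination-resource spending over each conflict-to-conflict interval. The method names, constants, and function signatures are assumptions made for this example only.

```python
import random
from collections import defaultdict

# Hypothetical coordination methods a robot could select among;
# the names are illustrative placeholders, not taken from the paper.
METHODS = ["noise", "aggression", "repel"]

ALPHA = 0.2    # learning rate (assumed value)
EPSILON = 0.1  # exploration rate (assumed value)

# One Q-value per coordination method, learned from EI-based rewards.
q_values = defaultdict(float)

def effectiveness_index(coord_cost, coord_time, task_time):
    """Sketch of an EI-style quantity: coordination resources spent,
    averaged over the whole conflict-to-conflict interval (a
    'resource spending velocity'). Lower is better."""
    total_time = coord_time + task_time
    if total_time <= 0:
        return 0.0
    return coord_cost / total_time

def select_method():
    """Epsilon-greedy selection of a coordination method."""
    if random.random() < EPSILON:
        return random.choice(METHODS)
    return max(METHODS, key=lambda m: q_values[m])

def update(method, coord_cost, coord_time, task_time):
    """Stateless Q-learning update using reward = -EI, so that
    minimizing resource spending velocity maximizes reward."""
    reward = -effectiveness_index(coord_cost, coord_time, task_time)
    q_values[method] += ALPHA * (reward - q_values[method])
```

In use, a robot would call select_method() when a spatial conflict arises, execute the chosen coordination behavior, and then call update() with the measured coordination cost, coordination time, and subsequent conflict-free task time for that interval.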
@InProceedings{icra10dan,
  author    = {Gal A. Kaminka and Dan Erusalimchik and Sarit Kraus},
  title     = {Adaptive Multi-Robot Coordination: A Game-Theoretic Perspective},
  booktitle = ICRA-10,
  year      = {2010},
  abstract  = {Multi-robot systems researchers have been investigating adaptive coordination methods for improving spatial coordination in teams. Such methods adapt the coordination method to the dynamic changes in density of the robots. Unfortunately, while their empirical success is evident, none of these methods has been understood in the context of existing formal work on multi-robot learning. This paper presents a reinforcement-learning approach to coordination algorithm selection, which is not only shown to work well in experiments, but is also analytically grounded. We present a reward function (\emph{Effectiveness Index}, EI), that reduces time and resources spent coordinating, and maximizes the time between conflicts that require coordination. It does this by measuring \emph{the resource spending velocity}. We empirically show its success in several domains, including robots in virtual worlds, simulated robots, and physical AIBO robots executing foraging. In addition, we analytically explore the reasons that EI works well. We show that under some assumptions, spatial coordination opportunities can be modeled as matrix games in which the payoffs to the robots are unknown, but are directly a function of EI estimates. The use of reinforcement learning leads to robots maximizing their EI rewards in equilibrium. This work is a step towards bridging the gap between the theoretical study of interactions, and their use in multi-robot coordination.},
  wwwnote   = {A slightly different version of this paper also appears in Proceedings of the Adaptive Agents Workshop of the AAMAS 2009 Conference.},
}