Gal A. Kaminka's Publications


Reinforcement Learning of Multi-Robot Coordination Based on Resource Spending Velocity

Dan Erusalimchik. Reinforcement Learning of Multi-Robot Coordination Based on Resource Spending Velocity. Master's Thesis, Bar Ilan University, 2009.

Download

[PDF] 781.7 kB

Abstract

Multi-robot systems researchers have been investigating adaptive coordination methods for improving spatial coordination in teams. Such methods utilize learning to improve selection of the coordination method, given the dynamic changes in density of the robots. Unfortunately, while their empirical success is evident, none of these methods has been understood in the context of existing formal work on multi-robot learning. This paper presents a reinforcement-learning approach to coordination algorithm selection, which is not only shown to work well in experiments, but is also analytically grounded. We present a reward function, the Effectiveness Index (EI), that reduces the time and resources spent coordinating and maximizes the time between conflicts that require coordination. It does this by measuring the resource-spending velocity. We empirically show its successful use in stateless reinforcement learning in several domains, including robots in virtual worlds, simulated robots, and physical AIBO robots executing foraging. In addition, we analytically explore the reasons that EI works well. We show that under some assumptions, spatial coordination opportunities can be modeled as matrix games in which the payoffs to the robots are unknown, but are directly a function of EI estimates. The use of reinforcement learning leads to robots maximizing their EI rewards in equilibrium. We then apply the EI reward function in full multi-state reinforcement learning, and demonstrate that it can be used in settings requiring tight coordination between the robots. This work is a step towards bridging the gap between the theoretical study of interactions and their use in multi-robot coordination.
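The core idea in the abstract can be sketched in a few lines of code. The following is a minimal, illustrative sketch only, not the thesis's implementation: it assumes a simplified EI-style reward (the negated fraction of each conflict-to-conflict interval spent coordinating, as a stand-in for resource-spending velocity) and uses stateless epsilon-greedy Q-learning to select among some hypothetical coordination methods. The method names and the `StatelessEISelector` class are invented for illustration.

```python
import random
from collections import defaultdict

def effectiveness_index_reward(coord_time, interval_time):
    # Simplified EI-style reward: cost of coordination per unit of time
    # between conflicts, negated so that maximizing reward minimizes
    # the resource-spending velocity. (Assumption, not the exact EI.)
    return -coord_time / interval_time

class StatelessEISelector:
    """Stateless (single-state) epsilon-greedy Q-learning over
    coordination methods, rewarded by the simplified EI above."""

    def __init__(self, methods, alpha=0.1, epsilon=0.1):
        self.methods = list(methods)
        self.alpha = alpha            # learning rate
        self.epsilon = epsilon        # exploration rate
        self.q = defaultdict(float)   # one Q-value per coordination method

    def select(self):
        # Explore with probability epsilon, otherwise exploit.
        if random.random() < self.epsilon:
            return random.choice(self.methods)
        return max(self.methods, key=lambda m: self.q[m])

    def update(self, method, coord_time, interval_time):
        # Standard stateless Q-update toward the observed EI reward.
        r = effectiveness_index_reward(coord_time, interval_time)
        self.q[method] += self.alpha * (r - self.q[method])

# Usage: after each conflict is resolved, report how long coordination
# took and how long the whole conflict-to-conflict interval lasted.
sel = StatelessEISelector(["noise", "aggression", "repel"])  # hypothetical methods
m = sel.select()
sel.update(m, coord_time=2.0, interval_time=10.0)
```

Over many conflicts, methods whose coordination cost per interval is lower accumulate higher Q-values and are selected more often, which is the bandit-style view of coordination-method selection the abstract describes.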

Additional Information

BibTeX

@MastersThesis{danerusalimchik-msc,
author = {Dan Erusalimchik},
title = {Reinforcement Learning of Multi-Robot Coordination Based on Resource Spending Velocity},
school = {{B}ar {I}lan {U}niversity},
year = {2009},
OPTkey = {},
OPTtype = {},
OPTaddress = {},
OPTmonth = {},
OPTnote = {},
OPTannote = {},
  wwwnote = {}, 
  abstract = { Multi-robot systems researchers have been investigating adaptive coordination methods for improving spatial coordination in teams. Such methods utilize learning to improve selection of the coordination method, given the dynamic changes in density of the robots. Unfortunately, while their empirical success is evident, none of these methods has been understood in the context of existing formal work on multi-robot learning.
This paper presents a reinforcement-learning approach to coordination algorithm selection, which 
is not only shown to work well in experiments, but is also analytically grounded. We present a  reward function (\emph{Effectiveness Index}, EI), that reduces time and resources spent coordinating, and maximizes the time between conflicts that require coordination. It does this by measuring \emph{the resource-spending velocity}.  We empirically show its successful use in stateless reinforcement learning, in several domains, including robots in virtual worlds, simulated robots, and physical AIBO robots executing foraging. In addition, we analytically explore the reasons that EI works well.  We show that under some assumptions,
spatial coordination opportunities can be modeled as matrix games in which the payoffs to the robots are unknown, but 
are directly a function of EI estimates. The use of reinforcement learning leads to robots maximizing their EI rewards in equilibrium. We then apply the EI reward function in full multi-state reinforcement learning, and demonstrate
that it can be used in settings requiring tight coordination between the robots.
This work is a step towards bridging the gap between the theoretical study
of interactions, and their use in multi-robot coordination.}
}

Generated by bib2html.pl (written by Patrick Riley) on Sun Jul 23, 2017 22:08:49