# Gal A. Kaminka's Publications


## Towards Adaptive Multi-Robot Coordination Based on Resource Expenditure Velocity

Dan Erusalimchik and Gal A. Kaminka. Towards Adaptive Multi-Robot Coordination Based on Resource Expenditure Velocity. In Proceedings of the Tenth Conference on Intelligent Autonomous Systems (IAS-10), IOS Press, 2008.

### Abstract

In the research area of multi-robot systems, several researchers have reported consistent success in using heuristic measures to improve loose coordination in teams, by minimizing coordination costs using various heuristic techniques. While these heuristic methods have proven successful in several domains, they have never been formalized, nor have they been put in the context of existing work on adaptation and learning. As a result, the conditions for their use remain unknown. We posit that in fact all of these different heuristic methods are instances of reinforcement learning in a one-stage MDP game, with the specific heuristic functions used as rewards. We show that a specific reward function---which we call Effectiveness Index (EI)---is an appropriate reward function for learning to select between coordination methods. EI estimates the resource-spending velocity of a coordination algorithm, and allows minimization of this velocity using familiar reinforcement learning algorithms (in our case, Q-learning in a one-stage MDP). The paper analytically and empirically argues for the use of EI by proving that under certain conditions, maximizing this reward leads to greater utility in the task. We report on initial experiments demonstrating that EI indeed overcomes limitations in previous work, and outperforms it in different cases.
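The learning setup the abstract describes can be sketched in code: a one-stage MDP (i.e., a bandit-style problem) where actions are coordination methods and the reward is the negated Effectiveness Index. The sketch below is an illustration, not the paper's implementation; the EI formula here is a simplified reading of "resource-spending velocity" (resources spent per unit of coordination time), and the class and function names are hypothetical.

```python
import random

def effectiveness_index(resources_spent: float, coord_time: float) -> float:
    """Simplified EI: resource expenditure per unit of coordination time.

    Assumption: this captures only the 'resource-spending velocity' idea
    from the abstract; the paper's exact definition may include more terms.
    """
    return resources_spent / coord_time if coord_time > 0 else float("inf")

class CoordinationSelector:
    """Q-learning in a one-stage MDP over a set of coordination methods.

    With a single state, the Q-update reduces to an exponential
    moving average of the per-method rewards (a multi-armed bandit).
    """

    def __init__(self, methods, alpha=0.1, epsilon=0.1):
        self.q = {m: 0.0 for m in methods}  # one Q-value per method
        self.alpha = alpha                  # learning rate
        self.epsilon = epsilon              # exploration probability

    def select(self) -> str:
        # Epsilon-greedy selection of a coordination method.
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, method: str, resources_spent: float, coord_time: float):
        # Reward is negative EI, so minimizing spending velocity
        # corresponds to maximizing reward.
        reward = -effectiveness_index(resources_spent, coord_time)
        self.q[method] += self.alpha * (reward - self.q[method])
```

For example, a robot that tried two methods and observed that one spends resources more slowly would, over repeated updates, come to prefer it when exploiting (epsilon set to 0 for determinism):

```python
sel = CoordinationSelector(["back-off", "repel"], alpha=0.5, epsilon=0.0)
sel.update("back-off", resources_spent=2.0, coord_time=4.0)  # EI = 0.5
sel.update("repel", resources_spent=8.0, coord_time=4.0)     # EI = 2.0
sel.select()  # -> "back-off"
```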

### BibTeX

@InCollection{ias08dan,
author = {Dan Erusalimchik and Gal A. Kaminka},
title = {Towards Adaptive Multi-Robot Coordination Based on Resource Expenditure Velocity},
booktitle = IAS-10,
abstract = {In the research area of multi-robot systems, several researchers have reported consistent success in using heuristic measures to improve loose coordination in teams, by minimizing coordination costs using various heuristic techniques. While these heuristic methods have proven successful in several domains, they have never been formalized, nor have they been put in the context of existing work on adaptation and learning. As a result, the conditions for their use remain unknown.
We posit that in fact all of these different heuristic methods are instances of reinforcement learning in a one-stage MDP game, with the specific heuristic functions used as rewards. We show that a specific reward function---which we call \emph{Effectiveness Index} (EI)---is an appropriate reward function for learning to select between coordination methods. EI estimates the \emph{resource-spending velocity} by a coordination algorithm, and allows minimization of this velocity using familiar reinforcement learning algorithms (in our case, Q-learning in one-stage MDP).
The paper analytically and empirically argues for the use of EI by proving that under certain conditions, maximizing this reward leads to greater utility in the task. We report on initial experiments that demonstrate that EI indeed overcomes limitations in previous work, and outperforms it in different cases.},
wwwnote = {},
OPTcrossref = {},
OPTkey = {},
OPTpages = {},
publisher = {{IOS} {P}ress},
year = {2008},
OPTeditor = {},
OPTvolume = {},
OPTnumber = {},
OPTseries = {},
OPTtype = {},
OPTchapter = {},