Gal A. Kaminka and Yinon Douchan. Heterogeneous Foraging Swarms Can be Better. Frontiers in Robotics and AI, 2024. In Press
Inspired by natural phenomena, generations of researchers have investigated how a swarm of robots can act coherently and purposefully when individual robots can only sense and communicate with nearby peers, with no means for global communication and coordination. In this paper, we show how swarms can perform better when they self-adapt to admit heterogeneous behavior roles. We begin by modeling a foraging swarm task as an extensive-form cooperative game, in which the swarm goal is an additive function of individual contributions (the sum of collected items). If robots could predict future collisions and their outcomes, they could in principle use the model to make optimal collision-avoidance decisions. In practice, swarm robots cannot make such predictions, as they lack global (swarm-wide) information and can only sense locally. Previous work proposed that each robot adapt its own collision-avoidance decisions based on the Effectiveness Index (EI) reward, which uses information about the time between its own collisions (information readily available even to simple physical robots). While promising, the use of EI proves brittle (as we show), since robots that selfishly seek to optimize their own EI (minimizing time spent on collisions) can actually cause swarm-wide performance to degrade. To address this, we derive a reward function from a game-theoretic view of swarm foraging as a fully-cooperative repeating game with an unknown horizon. We demonstrate analytically that the total coordination overhead of the swarm (the total time spent on collision avoidance, rather than on foraging per se) is directly tied to the total utility of the swarm: the less overhead, the more items collected. Treating every collision as a stage in the repeating game, the overhead is bounded by the total EI of all robots. We then use a marginal-contribution (difference-reward) formulation to derive individual rewards from the total EI. The resulting Aligned Effectiveness Index ($ÆI$) reward has the property that each individual can estimate the impact of its decisions on the swarm: individual improvements translate to swarm improvements. We show that $ÆI$ provably generalizes previous work, adding a component that computes the effect of counterfactual robot absence. Different assumptions on this counterfactual lead to bounds on $ÆI$ from above and below. While the theoretical analysis clarifies both the assumptions and the gaps with respect to the reality of robots, experiments with real and simulated robots empirically demonstrate the efficacy of the approach in practice, and the importance of behavioral (decision-making) diversity in optimizing swarm goals.
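For orientation, the marginal-contribution (difference-reward) formulation mentioned in the abstract has the standard generic form below. This is a sketch only: the symbols $D_i$, $G$, $z$, and $z_{-i}$ are conventional difference-reward notation, not the paper's, and the exact instantiation of $ÆI$ in terms of EI and the counterfactual-absence term is given in the article itself.

\[
D_i(z) \;=\; G(z) \;-\; G(z_{-i}),
\]

where $G$ is the swarm-wide utility (here tied to the total coordination overhead via the total EI), $z$ is the joint behavior of all robots, and $z_{-i}$ is the counterfactual in which robot $i$ is absent. Because $G(z_{-i})$ does not depend on robot $i$'s own choices, any choice that improves $D_i$ also improves $G$; this is the alignment property the abstract attributes to $ÆI$.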
@article{frontiers24,
  author  = {Gal A. Kaminka and Yinon Douchan},
  title   = {Heterogeneous Foraging Swarms Can be Better},
  journal = {Frontiers in Robotics and AI},
  year    = {2024},
  note    = {In Press},
}
Generated by bib2html.pl (written by Patrick Riley) on Mon Dec 23, 2024 22:35:41