Gal A. Kaminka: Publications


The Effectiveness Index Intrinsic Reward for Coordinating Service Robots

Yinon Douchan and Gal A. Kaminka. The Effectiveness Index Intrinsic Reward for Coordinating Service Robots. In Spring Berman, Melvin Gauci, Emilio Frazzoli, Andreas Kolling, Roderich Gross, Alcherio Martinoli, and Fumitoshi Matsuno, editors, 13th International Symposium on Distributed Autonomous Robotic Systems (DARS-2016), Springer, November 2016.

Download

[PDF] (137.8 kB)

Abstract

Modern multi-robot service robotics applications often rely on coordination capabilities at multiple levels, from global (system-wide) task allocation and selection, to local (nearby) spatial coordination to avoid collisions. Often, the global methods are considered to be the heart of the multi-robot system, while local methods are tacked on to overcome intermittent, spatially-limited hindrances. We tackle this general assumption. Utilizing the alphabet soup simulator (simulating order picking, made famous by Kiva Systems), we experiment with a set of myopic, local methods for obstacle avoidance. We report on a series of experiments with a reinforcement-learning approach, using the effectiveness-index intrinsic reward, to allow robots to learn to select between methods to use when avoiding collisions. We show that allowing the learner to explore the space of parameterized methods results in significant improvements, even compared to the original methods provided by the simulator.
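
The abstract describes the learning setup only at a high level, so the following is a minimal, hypothetical sketch (in Python) of how an effectiveness-index style intrinsic reward could drive selection among local collision-avoidance methods. The method names, the epsilon-greedy value update, and the exact reward formula (coordination resources divided by the length of the whole conflict-to-conflict interval) are assumptions made for illustration; they are not taken from the paper, and the alphabet soup simulator interface is not shown.

import random
from collections import defaultdict

# Hypothetical names for the local collision-avoidance methods the learner
# selects between; the actual parameterized methods come from the simulator.
METHODS = ["stop_and_wait", "replan_path", "random_detour"]

class EISelector:
    # Epsilon-greedy selector over collision-avoidance methods, driven by an
    # effectiveness-index style intrinsic reward (formula assumed, see above).
    def __init__(self, epsilon=0.1, alpha=0.2):
        self.epsilon = epsilon        # exploration rate
        self.alpha = alpha            # learning rate
        self.q = defaultdict(float)   # estimated value of each method

    def choose(self):
        # Explore with probability epsilon, otherwise pick the best estimate.
        if random.random() < self.epsilon:
            return random.choice(METHODS)
        return max(METHODS, key=lambda m: self.q[m])

    def intrinsic_reward(self, coord_time, coord_cost, task_time):
        # Assumed effectiveness-index form: penalize the share of resources
        # spent on coordination out of the whole conflict-to-conflict interval.
        total = coord_time + task_time
        if total <= 0.0:
            return 0.0
        return -(coord_time + coord_cost) / total

    def update(self, method, coord_time, coord_cost, task_time):
        # Incremental (bandit-style) value update from the intrinsic reward.
        r = self.intrinsic_reward(coord_time, coord_cost, task_time)
        self.q[method] += self.alpha * (r - self.q[method])

# Usage sketch: after each collision-avoidance episode, the robot reports how
# long it spent avoiding (coord_time), any extra cost incurred (coord_cost),
# and how long it then worked undisturbed before the next conflict (task_time).
selector = EISelector()
method = selector.choose()
selector.update(method, coord_time=2.5, coord_cost=0.0, task_time=30.0)

Under these assumptions, each robot learns from quantities it can observe locally, which is consistent with the myopic, local character of the methods discussed in the abstract.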

BibTeX

@InCollection{dars16yinon,
  author    = {Yinon Douchan and Gal A. Kaminka},
  title     = {The Effectiveness Index Intrinsic Reward for Coordinating Service Robots},
  booktitle = {13th International Symposium on Distributed Autonomous Robotic Systems (DARS-2016)},
  editor    = {Spring Berman and Melvin Gauci and Emilio Frazzoli and Andreas Kolling and Roderich Gross and Alcherio Martinoli and Fumitoshi Matsuno},
  publisher = {Springer},
  month     = {November},
  year      = {2016},
  abstract  = {Modern multi-robot service robotics applications often rely on coordination capabilities at multiple levels, from global (system-wide) task allocation and selection, to local (nearby) spatial coordination to avoid collisions. Often, the global methods are considered to be the heart of the multi-robot system, while local methods are tacked on to overcome intermittent, spatially-limited hindrances. We tackle this general assumption. Utilizing the alphabet soup simulator (simulating order picking, made famous by Kiva Systems), we experiment with a set of myopic, local methods for obstacle avoidance. We report on a series of experiments with a reinforcement-learning approach, using the effectiveness-index intrinsic reward, to allow robots to learn to select between methods to use when avoiding collisions. We show that allowing the learner to explore the space of parameterized methods results in significant improvements, even compared to the original methods provided by the simulator.},
}

Generated by bib2html.pl (written by Patrick Riley) on Fri Apr 19, 2024 19:01:33