Videos & Image Gallery
Experimental Environments
There are two sets of movies and media in this gallery. The first shows our research on rational swarms in action. The second documents our work on vision-based collective motion.
Rational swarms movies
We have applied the rational swarms multi-agent reinforcement learning method in many different environments, both with physical robots and in simulations. Repeatedly, we find that different robots learn to respond differently to collisions and inter-agent conflicts that require coordination. They become heterogeneous in their decision-making.
Rational foraging in the Alphabet Soup material-handling simulator [1]
- The material-handling simulator Alphabet Soup[^A] was developed by Kiva Systems (which was acquired by Amazon and became the basis for Amazon Robotics). Here, orders (words) are continuously put into a queue. The shelves (purple circles) contain letters. Robots (orange rectangles) are assigned by a task-allocation algorithm to fetch letters and bring them to the order-completion stations (green circles, right side). Occasionally, new letters have to be brought in from the supply stations (blue circles, left side). The task is to maximize the number of completed words.
While the assignment of robots to letters is centrally determined, each robot plans and manages its own path. Robots may therefore collide, and must handle collisions and navigate around obstacles.
The experiments described in the paper show that by using the generalized (aligned; see the paper) EI reward and a continuous-time reinforcement learning algorithm, different robots learn to use different collision-avoidance methods. This results in a heterogeneous swarm (different robots respond differently to the same state), in which the robots complete many more words in a given time than with the default methods.
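To give a flavor of the idea of per-robot method selection, here is a toy epsilon-greedy sketch in which each robot independently learns which collision-avoidance method pays off for it. This is only an illustration of why robots can end up heterogeneous; the actual algorithm and the aligned EI reward are defined in the paper, and the method names here are made up.

```python
import random

class MethodSelector:
    """Toy epsilon-greedy selector over collision-avoidance methods.

    Each robot runs its own selector, so robots that experience
    different rewards converge to different methods, which is how
    heterogeneity can emerge from identical learners.
    """

    def __init__(self, methods, epsilon=0.1):
        self.methods = list(methods)
        self.epsilon = epsilon
        self.value = {m: 0.0 for m in self.methods}   # running mean reward
        self.count = {m: 0 for m in self.methods}

    def choose(self):
        # With probability epsilon, explore; otherwise exploit the best method.
        if random.random() < self.epsilon:
            return random.choice(self.methods)
        return max(self.methods, key=self.value.get)

    def update(self, method, reward):
        self.count[method] += 1
        # Incremental running average of the reward for this method.
        self.value[method] += (reward - self.value[method]) / self.count[method]
```

With `epsilon=0`, a selector that has seen higher rewards for one method will deterministically keep choosing it; with exploration enabled, different robots can settle on different methods.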
Rational repeated search (foraging variant) with Krembot robots [1]
In this task, Krembot robots carry out a repeated search task, a variant of foraging. Due to limitations of the robots, we could not get them to push pucks (collected items) reliably. We therefore fixed the pucks in place (wooden circles), while the robots had to repeatedly find them, and then find their nest (lit, bottom-left corner). The robots do not know where the pucks are and cannot localize, so they have to search from scratch every time. They also do not know where the nest is, and have to search for it by looking for brightly lit areas (the nest is lit in green, though this is difficult to see in the movie).
The robots' colors encode their state. While searching for pucks, the robots light up in red. When a robot determines it has found a puck, it changes its light to blue and starts searching for the nest. When it reaches the nest, it switches back to red and returns to searching for pucks.
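The alternating behaviour above is essentially a two-state controller. The sketch below captures it; the sensing predicates (`found_puck`, `in_nest`) stand in for whatever detection the real robots use and are illustrative, not the Krembot API.

```python
from enum import Enum, auto

class Mode(Enum):
    SEEK_PUCK = auto()  # robot lights up red while searching for a puck
    SEEK_NEST = auto()  # robot lights up blue while searching for the nest

class RepeatedSearchController:
    """Two-state controller sketch for the repeated search task."""

    def __init__(self):
        self.mode = Mode.SEEK_PUCK

    def step(self, found_puck: bool, in_nest: bool) -> str:
        """Advance one control step; return the robot's light color."""
        if self.mode is Mode.SEEK_PUCK and found_puck:
            self.mode = Mode.SEEK_NEST   # switch to blue, head for the nest
        elif self.mode is Mode.SEEK_NEST and in_nest:
            self.mode = Mode.SEEK_PUCK   # back to red, search for the next puck
        return "red" if self.mode is Mode.SEEK_PUCK else "blue"
```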
Here again, applying the generalized aligned EI reward leads to a significant and clear improvement in the number of items found, compared to any homogeneous swarm using a single method.
Rational foraging with Sony AIBO robots [2]
Intermittent locomotion in swarm collective motion
The following movies were recorded using the Nymbot simulator, which uses ARGoS3 as its simulation infrastructure.
Here is what an avoid-attract model looks like when using robust vision-based estimation (the AA-V model, described in the article [3]). The first movie shows it functioning with 0 faulty robots (all the robots move as they should); there are 40 robots in all. Within 25 seconds, the robots begin to move together (flocking).
The second movie shows a different run, using the same model, when 20% of the robots are stuck in place (8 robots out of 40; they are marked by a gray label rather than a green one). Here, even after 25 seconds, the swarm remains in place.
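For readers unfamiliar with avoid-attract models, the basic idea is that each robot is repelled by neighbours that are too close and attracted toward those farther away. The sketch below shows a generic version of that rule; the radii and gains are illustrative, and this is not the AA-V implementation from the article, which additionally uses robust vision-based estimation of neighbours.

```python
import numpy as np

def avoid_attract_step(pos, avoid_radius=1.0, sense_radius=5.0, dt=0.1):
    """One update of a generic avoid-attract model.

    pos: (N, 2) array of robot positions. Each robot moves away from
    neighbours closer than avoid_radius, and toward neighbours within
    sense_radius, mimicking the repulsion and attraction zones.
    """
    n = len(pos)
    vel = np.zeros_like(pos)
    for i in range(n):
        offsets = pos - pos[i]                       # vectors to all robots
        dists = np.linalg.norm(offsets, axis=1)
        for j in range(n):
            if j == i or dists[j] == 0 or dists[j] > sense_radius:
                continue
            direction = offsets[j] / dists[j]        # unit vector toward j
            if dists[j] < avoid_radius:
                vel[i] -= direction                  # repulsion zone
            else:
                vel[i] += direction                  # attraction zone
    return pos + dt * vel
```

Under this rule, robots that are well separated drift together, while robots that are too close push apart; flocking emerges from iterating the step.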
Now, let us see the intermittent model (pause-and-go, the AAPG-V model [4, 3]). Here again with 0 faulty robots:
… and now with 8 faulty robots (20%). The intermittent motion slows the swarm down, but allows the healthy robots to detect that some of their neighbours are stuck in place, and to avoid them.
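Two ingredients of the intermittent idea can be sketched simply: a pause-and-go duty cycle, and a detector that flags a neighbour whose position barely changes over a window of observations. Both pieces below are illustrative sketches under assumed parameters (durations, window size, and threshold are made up, not the tuned values from the papers).

```python
from collections import deque

def pause_and_go(t: float, go: float = 2.0, pause: float = 1.0) -> bool:
    """True during the 'go' phase of an intermittent duty cycle."""
    return (t % (go + pause)) < go

class StuckDetector:
    """Flags a neighbour whose displacement over a sliding window
    of observed positions stays below a threshold."""

    def __init__(self, window: int = 10, min_travel: float = 0.5):
        self.history = deque(maxlen=window)
        self.min_travel = min_travel

    def observe(self, x: float, y: float) -> bool:
        """Record a neighbour position; return True if it looks stuck."""
        self.history.append((x, y))
        if len(self.history) < self.history.maxlen:
            return False   # not enough evidence yet
        x0, y0 = self.history[0]
        travel = ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5
        return travel < self.min_travel
```

The pauses matter because a healthy intermittent robot is sometimes stationary by design, while a faulty one never advances during any go phase; observing displacement across whole duty cycles separates the two.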
References

1. Yinon Douchan, Ran Wolf, and Gal A. Kaminka. Swarms Can be Rational. In Proceedings of the International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), 2019.
2. Gal A. Kaminka, Dan Erusalimchik, and Sarit Kraus. Adaptive Multi-Robot Coordination: A Game-Theoretic Perspective. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2010.
3. Peleg Shefi, Amir Ayali, and Gal A. Kaminka. Bugs with Features: Resilient Collective Motion Inspired by Nature. Under review, 2025.
4. Peleg Shefi, Amir Ayali, and Gal A. Kaminka. Pausing Makes Perfect: Intermittent Pauses for Resilient Swarming. In International Symposium on Distributed Autonomous Robotic Systems (DARS), Springer, 2024.