The following video shows a multi-robot SLAM experiment with two Pioneer 3-AT robots equipped with laser rangefinders. Initially, each robot explores a different part of the environment. When the two robots first meet, they measure their relative pose, exchange their current maps, and merge them into a single, global map.
- D. Benedettelli, A. Garulli, A. Giannitrapani. Cooperative SLAM using M-Space representation of linear features. Robotics and Autonomous Systems, vol. 60, no. 10, pp. 1267-1278, October 2012 (Regular paper).
- D. Benedettelli, A. Garulli, A. Giannitrapani. Multi-Robot SLAM using M-Space feature representation. In Proceedings of the IEEE Conference on Decision and Control, pp. 3826-3831, Atlanta (USA), December 15-17, 2010.
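The map-merging step described above boils down to expressing one robot's features in the other robot's frame, using the measured relative pose. A minimal sketch of this idea (the function names and the plain endpoint representation of line segments are illustrative assumptions, not the M-Space representation used in the papers):

```python
import math

def se2_transform(pose, points):
    """Transform 2D points from frame B into frame A, given the pose
    (x, y, theta) of frame B expressed in frame A."""
    x, y, th = pose
    c, s = math.cos(th), math.sin(th)
    return [(x + c * px - s * py, y + s * px + c * py) for (px, py) in points]

def merge_maps(map_a, map_b, pose_b_in_a):
    """Merge two line maps (lists of segments, each a pair of endpoints)
    into a single map expressed in robot A's frame."""
    merged = list(map_a)
    for p0, p1 in map_b:
        q0, q1 = se2_transform(pose_b_in_a, [p0, p1])
        merged.append((q0, q1))
    return merged

# Example: robot B is 2 m ahead of robot A and rotated by 90 degrees.
map_a = [((0.0, 0.0), (1.0, 0.0))]
map_b = [((0.0, 0.0), (1.0, 0.0))]  # expressed in B's frame
merged = merge_maps(map_a, map_b, (2.0, 0.0, math.pi / 2))
```

A full implementation would also associate and fuse duplicate features seen by both robots; here they are simply concatenated.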
The following videos refer to real experiments made with a Pioneer 3-AT equipped with a SICK LMS laser range finder. The robot explores our lab, previously located in a historical building dating back to the XV century (don't miss the fresco by Bernardino Fungai at the beginning of the movie!). The robot traveled some 170 m at an average speed of 0.4 m/s, taking 4 laser scans per second. The final map is made up of 66 lines, resulting in a final localization error of 0.07 m and 0.5 degrees. Further details can be found in:
- A. Garulli, A. Giannitrapani, A. Rossi, A. Vicino. Simultaneous localization and map building using linear features. In Proceedings of the 2nd European Conference on Mobile Robots, Ancona (Italy), September.
- A. Garulli, A. Giannitrapani, A. Rossi, A. Vicino. Mobile robot SLAM environment representation. In Proceedings of the IEEE Conference on Decision and Control, (Spain), December 12-15, 2005.
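A line-based map such as the 66-line map above is built by fitting lines to laser scan points. As a rough illustration of that step (a plain total-least-squares fit, not the estimator from the papers above), each scan segment can be summarized by line parameters (alpha, r), where the line is the set of points satisfying x*cos(alpha) + y*sin(alpha) = r:

```python
import math

def fit_line(points):
    """Total-least-squares fit of a line to 2D points, returned in
    (alpha, r) form: alpha is the direction of the line's normal and
    r >= 0 is its distance from the origin."""
    n = len(points)
    xm = sum(p[0] for p in points) / n
    ym = sum(p[1] for p in points) / n
    sxx = sum((p[0] - xm) ** 2 for p in points)
    syy = sum((p[1] - ym) ** 2 for p in points)
    sxy = sum((p[0] - xm) * (p[1] - ym) for p in points)
    # Normal direction that minimizes the sum of squared
    # point-to-line distances (classical closed form).
    alpha = 0.5 * math.atan2(-2.0 * sxy, syy - sxx)
    r = xm * math.cos(alpha) + ym * math.sin(alpha)
    if r < 0:  # normalize so that r is nonnegative
        r, alpha = -r, alpha + math.pi
    return alpha, r

# Three collinear scan points on the wall y = 1.
alpha, r = fit_line([(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)])
```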
The following video (based on simulation data) shows the effects of a loop closure when performing SLAM with linear features. Thanks to the correlation among elements of the map, when the robot closes a loop its uncertainty shrinks and the estimation error of the lines is reduced.
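The mechanism behind this effect can be seen in a toy Kalman update: because map elements are correlated, re-observing one of them also shrinks the uncertainty of the others. A minimal numeric sketch (the 1D two-landmark setup and all numbers are illustrative assumptions, not the simulation in the video):

```python
import numpy as np

# Toy map: two correlated 1D landmark estimates.
mean = np.array([1.0, 5.0])
P = np.array([[0.5, 0.4],
              [0.4, 0.6]])   # strong cross-correlation between landmarks

# Loop closure: re-observe landmark 0 with a precise measurement.
H = np.array([[1.0, 0.0]])   # only landmark 0 is measured
R = np.array([[0.01]])       # measurement noise variance
z = np.array([1.1])

# Standard Kalman filter measurement update.
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
mean = mean + K @ (z - H @ mean)
P = (np.eye(2) - K @ H) @ P
```

After the update, the variance of landmark 1 also drops well below its prior value of 0.6, even though only landmark 0 was re-observed; this is exactly the correlation-driven shrinkage visible in the video.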
This video gallery shows a set of real experiments on collective circular motion with a team of nonholonomic robots built with the LEGO Mindstorms technology. In the first four videos, the objective of the team, composed of four vehicles, is to rotate about a static virtual reference beacon (approximately placed at the center of the picture) while at the same time avoiding collisions. The implemented control law is completely decentralized, and relies only on range and bearing measurements taken with respect to nearby vehicles (sensors with limited field of view are considered). In the fifth video, the same control law is tested for tracking purposes: a two-vehicle team tracks a slowly moving virtual reference beacon (moving approximately along the vertical direction of the picture) while rotating about it. In the sixth video, the reference beacon instantaneously "jumps" from the top of the picture to the bottom. As a result, the two vehicles first head straight towards the new beacon position in linear motion and then end up rotating about it. Notice that this behavior is achieved with exactly the same control law (no switching is required). Further details can be found in:
- N. Ceccarelli, M. Di Marco, A. Garulli, A. Giannitrapani. Collective circular motion of multi-vehicle systems. Automatica, vol. 44, no. 12, pp. 3025-3035, December 2008 (Regular paper).
- D. Benedettelli, N. Ceccarelli, A. Garulli, A. Giannitrapani. Experimental validation of collective circular motion for nonholonomic multi-vehicle systems. Robotics and Autonomous Systems, vol. 58, no. 8, pp. 1028-1036, August 2010.
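To give a feel for how range-and-bearing feedback can produce circling behavior with a unicycle-type vehicle, here is a single-vehicle sketch: the vehicle steers so as to hold the beacon at a reference bearing that equals pi/2 on the desired orbit and leans toward the beacon when the vehicle is too far. This illustrative law and all gains are our own assumptions for the sketch; it is not the multi-vehicle control law of the papers above (which also handles neighbors and collision avoidance):

```python
import math

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def beacon_circling_step(pose, beacon, v, radius, k1, k2, dt):
    """One Euler step of a unicycle steered by range and bearing to a
    beacon, converging to a counterclockwise orbit of roughly the
    desired radius (illustrative law, hypothetical gains k1, k2)."""
    x, y, th = pose
    dx, dy = beacon[0] - x, beacon[1] - y
    d = math.hypot(dx, dy)                   # range to the beacon
    bearing = wrap(math.atan2(dy, dx) - th)  # bearing to the beacon
    # On the orbit the beacon sits at bearing pi/2 (to the left);
    # when too far, the reference bearing leans toward the beacon.
    beta_ref = math.pi / 2 - math.atan(k2 * (d - radius))
    omega = k1 * wrap(bearing - beta_ref)    # unicycle turn rate
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + omega * dt)

# Start 3 m from the beacon; the vehicle spirals onto a ~1 m orbit.
pose = (3.0, 0.0, math.pi / 2)
for _ in range(8000):
    pose = beacon_circling_step(pose, (0.0, 0.0), v=0.5, radius=1.0,
                                k1=2.0, k2=2.0, dt=0.01)
```

Because forward speed is constant and only the turn rate is commanded, the same feedback also yields the behaviors seen in the videos: if the beacon moves or jumps, the vehicle first heads roughly toward it and then settles back into circling, with no mode switching.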
The first video shows how to perform multi-agent experiments with real mobile robots through the Automatic , developed at the University of Siena (freely accessible soon...). The second video shows some snapshots taken during real experiments.