This page contains video material related to my research and teaching activities.

Cooperative SLAM using M-Space representation of linear features

Pioneer 3AT

The following video shows a multi-robot SLAM experiment with two Pioneer 3-AT robots, equipped with laser rangefinders. Initially, each robot explores a different part of the environment. When the two robots first meet, they measure their relative pose, exchange their current maps and merge them into a single, global map.
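The core of the merging step is a rigid-body transformation: once robot A has measured robot B's pose relative to its own frame, every feature in B's map can be expressed in A's frame. The following is a minimal illustrative sketch (the function name and the use of point features rather than M-Space linear features are assumptions for simplicity):

```python
import math

def merge_map(points_b, rel_pose):
    """Transform map points from robot B's frame into robot A's frame.

    points_b : list of (x, y) features expressed in B's frame
    rel_pose : (tx, ty, theta), B's pose measured in A's frame
    """
    tx, ty, theta = rel_pose
    c, s = math.cos(theta), math.sin(theta)
    # Rotate each point by theta, then translate by (tx, ty):
    # p_A = R(theta) @ p_B + t
    return [(c * x - s * y + tx, s * x + c * y + ty) for (x, y) in points_b]

# Example: B is 1 m ahead of A and rotated 90 degrees.
merged = merge_map([(1.0, 0.0)], (1.0, 0.0, math.pi / 2))
```

After this change of frame, the two maps live in a common coordinate system and overlapping features can be associated and fused.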


EKF-based SLAM with linear features

The following videos refer to real experiments made with a Pioneer 3AT mobile robot, equipped with a SICK LMS laser range finder. The robot explores our Mobile Robotics Lab, previously located in a historical building dating back to the 15th century (don't miss the fresco by Bernardino Fungai at the beginning of the movie!). The robot traveled some 170 m, at an average speed of 0.4 m/s, taking 4 laser scans per second. The final map is made up of 66 lines, resulting in a final localization error of 0.07 m and 0.5 degrees. Further details can be found in:

The following video (based on simulation data) shows the effect of a loop closure when performing SLAM with linear features. Thanks to the correlations among the elements of the map, when the robot closes a loop its uncertainty shrinks and the estimation error of the lines is markedly reduced.
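The mechanism behind this is the cross-covariance maintained by the EKF: re-observing one feature at loop closure reduces the uncertainty of every feature correlated with it. A toy two-feature sketch (the numbers are illustrative, not from the experiment):

```python
# EKF update on a 2-feature state: observe only feature 0 and watch the
# variance of the correlated, unobserved feature 1 shrink as well.
P = [[1.0, 0.8],
     [0.8, 1.0]]   # prior covariance; 0.8 is the cross-covariance
R = 0.1            # measurement noise variance on feature 0

# H = [1, 0]: the measurement sees feature 0 directly.
S = P[0][0] + R                      # innovation covariance H P H' + R
K = [P[0][0] / S, P[1][0] / S]       # Kalman gain P H' S^-1

# Posterior covariance P' = (I - K H) P
P_post = [[P[0][0] - K[0] * P[0][0], P[0][1] - K[0] * P[0][1]],
          [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]

print(P_post[1][1])  # variance of feature 1 drops from 1.0 to about 0.42
```

Without the off-diagonal terms (cross-covariance zero), the update would leave feature 1's variance untouched, which is exactly why correlations are essential to the loop-closure effect shown in the video.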


Collective motion for multi-agent systems

This video gallery shows a set of real experiments on collective circular motion with a team of nonholonomic robots built with the LEGO Mindstorms technology. In the first four videos, the objective of the four-robot team is to rotate about a static virtual reference beacon (approximately placed at the center of the picture) while avoiding collisions. The implemented control law is completely decentralized, and relies only on range and bearing measurements taken with respect to nearby vehicles (sensors with a limited field of view are considered). In the fifth video, the same control law is tested for tracking purposes: a team of two vehicles tracks a virtual reference beacon moving slowly along the vertical direction of the picture, while rotating about it. In the sixth video the virtual reference beacon instantaneously "jumps" from the top of the picture to the bottom. As a result, the two vehicles head straight towards the new beacon position and then end up rotating about it. Notice that this behavior is achieved with exactly the same control law (no switching is required). Further details can be found in:


Remote lab for multi-robot experiments

The first video shows how to perform multi-agent experiments with real mobile robots through the Automatic Control Telelab, developed at the University of Siena (freely accessible soon...). The second video shows some snapshots taken during real experiments.


