The video presents Robust Depth, a deep learning-based model that estimates dense depth maps from single monocular images (a minimal sketch of the task is shown below).
PhD student: Kieran Saunders.
October 2023
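As a rough illustration of what such a model does (a sketch only; the toy network and function names below are assumptions for illustration, not Robust Depth's actual interface), the input/output contract is one RGB frame in, one dense depth map out:

```python
import numpy as np

def estimate_depth(model, rgb):
    """Monocular depth estimation contract: one RGB frame in, one
    dense depth map out. `model` stands in for any trained network
    such as Robust Depth; it is assumed to be a callable mapping an
    (H, W, 3) image to an (H, W) array of per-pixel depths."""
    assert rgb.ndim == 3 and rgb.shape[2] == 3, "expects a single RGB image"
    depth = model(rgb.astype(np.float32) / 255.0)  # normalize to [0, 1]
    assert depth.shape == rgb.shape[:2]
    return depth

# Toy stand-in network: depth simply grows towards the top of the frame.
def toy_model(img):
    column = np.linspace(10.0, 1.0, img.shape[0])[:, None]
    return np.repeat(column, img.shape[1], axis=1)

depth = estimate_depth(toy_model, np.zeros((4, 6, 3), dtype=np.uint8))
print(depth.shape)  # (4, 6)
```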
The video shows the output of our Graph Neural Network estimating the level of social compliance while the robot is moving in a simulated scenario.
September 2019
Like the video on the right-hand side, this video shows our robot Shelly in a fetch-and-deliver task. The two arms previously mounted on Shelly have been replaced by a single arm with zero backlash and higher precision. Note that in this case the video plays in real time: without backlash in the joints, visual servoing is unnecessary.
October 2016
This video shows our robot Shelly fetching an object from a table and delivering it to a human. It was recorded for the oral defense of the Ph.D. thesis of our former student and colleague Luis V. Calderita. Besides maintaining, thanks to AGM, a model of the environment and of the objects and humans in it, one of the biggest challenges was achieving the task without a fine kinematic calibration, relying on visual feedback instead. Although a calibrated robot would perform the task faster, avoiding the need for a precise calibration let us demonstrate how robust a robot can be when using visual servoing (see the sketch below).
February 2016
This is a demonstration of cognitive subtraction using AGM and AGGL. We use cognitive subtraction to actively model the environment in a structured way: first the room in which the robot is located, then the objects inside it (a minimal sketch of the idea follows below). The video includes comments on the steps the simulated robot takes to achieve the task.
November 2013
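As a rough Python sketch of the idea (the triple representation and function names are assumptions for illustration, not the actual AGM/AGGL API): cognitive subtraction takes the difference between the world model the task requires and the model the robot currently holds, and the missing pieces become the next modeling goals.

```python
def cognitive_subtraction(target_model, current_model):
    """What the robot still has to model: the part of the required
    world model that is absent from the current one."""
    return target_model - current_model

# World models as sets of (subject, predicate, object) triples.
current = {("robot", "in", "room"),
           ("room", "type", "kitchen")}
target = current | {("table", "in", "room"),
                    ("mug", "on", "table")}

for goal in sorted(cognitive_subtraction(target, current)):
    print("pending modeling goal:", goal)
# The robot must model the table before it can find the mug on it,
# mirroring the room-then-objects ordering shown in the video.
```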
This video shows our robot Gualzru finding a coffee mug for us. As an intermediate step, it also models the room in which it is located and finds a table. The video was recorded as part of my Ph.D. thesis.
The robot uses AGM and RoboComp.
June 2013