Media from the First-MM project

Fetch and Carry application

This video shows how the individual skills developed in the First-MM project can be integrated into a useful application. The skills are orchestrated by a small script that uses their actionlib interfaces for execution and error handling. The robot operates in an area with five shelves (only two are visible in the video), which it navigates to and checks for bottles. For each detected bottle, it determines grasp points, evaluates their feasibility for the task, and plans collision-free arm motions to the grasping pose and back to the bottle case. It then places the bottles in the case and continues its search for more bottles.
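
The orchestration script itself is not shown in the video; the following is a minimal sketch of what such a script could look like in ROS, using the standard actionlib client API. The skill restriction to navigation, the shelf poses, and all names are illustrative assumptions, not the actual First-MM interfaces:

#!/usr/bin/env python
# Minimal sketch of a fetch-and-carry orchestration script via ROS actionlib.
# Only the navigation skill is shown; further skills would be called the same way.
import math
import rospy
import actionlib
from actionlib_msgs.msg import GoalStatus
from geometry_msgs.msg import PoseStamped, Quaternion
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def make_pose(x, y, yaw):
    # Build a map-frame goal pose from (x, y, yaw).
    p = PoseStamped()
    p.header.frame_id = 'map'
    p.pose.position.x = x
    p.pose.position.y = y
    p.pose.orientation = Quaternion(0.0, 0.0, math.sin(yaw / 2), math.cos(yaw / 2))
    return p

def run_skill(client, goal, timeout=60.0):
    # Common pattern for executing a skill exposed as an actionlib server.
    client.wait_for_server()
    client.send_goal(goal)
    client.wait_for_result(rospy.Duration(timeout))
    return client.get_state() == GoalStatus.SUCCEEDED

rospy.init_node('fetch_and_carry')
nav = actionlib.SimpleActionClient('move_base', MoveBaseAction)
shelf_poses = [make_pose(1.0, 0.5, 0.0), make_pose(3.0, 0.5, 1.57)]  # hypothetical shelves

for pose in shelf_poses:
    goal = MoveBaseGoal()
    goal.target_pose = pose
    if not run_skill(nav, goal):
        rospy.logwarn('Navigation to shelf failed, trying the next one')
        continue  # simple error handling: skip this shelf
    # ... detect bottles, plan grasps, pick and place via further skill clients ...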

High precision robot positioning

The video shows a KUKA omniRob in a long-term evaluation of our laser-range-finder-based fine positioning system. The robot repeatedly travels between three goal locations for which reference scans have been pre-taught. Upon arrival at a goal location, the robot corrects its position by comparing the current laser range finder readings with the pre-taught ones, yielding an average positioning error of less than 0.01 m in translation and 1 deg in orientation.
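
The matching step is not detailed in the video; as one illustration, the core correction can be written as a closed-form rigid alignment (the inner step of ICP) between the current and pre-taught scan points, assuming point correspondences are available:

import numpy as np

def align_2d(ref, cur):
    # Closed-form rigid alignment (Kabsch/SVD) of current scan points `cur`
    # (Nx2) to reference scan points `ref` (Nx2), with known correspondences.
    # Returns the heading correction (rad) and translation correction.
    mr, mc = ref.mean(axis=0), cur.mean(axis=0)
    H = (cur - mc).T @ (ref - mr)                         # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = mr - R @ mc
    return np.arctan2(R[1, 0], R[0, 0]), t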

Interactive estimation of door dynamics

This video shows the KUKA omniRob swinging a door open. The door's geometry is assumed to be known, but its friction and moment of inertia are learned during the manipulation.
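
The estimator itself is not shown in the video; one common way to learn such parameters online is linear least squares on a rigid-body model with viscous and Coulomb friction, sketched here with hypothetical variable names:

import numpy as np

def estimate_door_dynamics(acc, vel, tau):
    # Fit the hinge model tau = I*acc + b*vel + c*sign(vel) to recorded
    # accelerations, velocities and applied torques. Returns [I, b, c].
    # A sketch; the actual First-MM estimator may differ.
    A = np.column_stack([acc, vel, np.sign(vel)])
    params, *_ = np.linalg.lstsq(A, tau, rcond=None)
    return params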

Probabilistic 3D mapping with RGBDSLAM and OctoMap

This video demonstrates efficient 3D mapping using RGBDSLAM and the OctoMap library.
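
OctoMap represents the environment as an octree of voxels whose occupancy is fused probabilistically with a clamped log-odds update. The sketch below illustrates that update rule using OctoMap's published default sensor-model and clamping values; it is not the library's actual API:

import math

L_OCC = math.log(0.7 / 0.3)    # log-odds of a hit  (p_hit  = 0.7, OctoMap default)
L_FREE = math.log(0.4 / 0.6)   # log-odds of a miss (p_miss = 0.4, OctoMap default)
L_MIN, L_MAX = -2.0, 3.5       # clamping bounds (occupancy ~0.12 and ~0.97)

def update_voxel(logodds, hit):
    # Clamped log-odds occupancy update for one voxel along a sensor ray.
    logodds += L_OCC if hit else L_FREE
    return max(L_MIN, min(L_MAX, logodds))

def occupancy(logodds):
    # Convert clamped log-odds back to an occupancy probability.
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))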

ORCA robotics simulator

A video demonstration of the ORCA robotics simulator environment from the Foundation for Research and Technology, ICS. The ORCA Simulator project was initiated in 2006 and developed until 2012 by Harris Baltzakis. It is now maintained by members of the CVRL group (head of group: Panos Trahanias). It is available as a Windows-only binary application (see the software page). This video and simulation were developed by E. Hourdakis and G. Chliveros.

MH3DOT: 3D object tracking

Tracker developed at FORTH (top: with multiple hypotheses; bottom: without multiple hypotheses). A video demonstration of the six-degrees-of-freedom object tracker (MH3DOT) for objects with known models, from the Foundation for Research and Technology, ICS. It is maintained by members of the CVRL group (head of group: Panos Trahanias).
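
MH3DOT's internals are not described here; as a generic illustration of why multiple hypotheses help, a particle-filter-style tracker keeps many weighted pose hypotheses alive instead of committing to the single best one, which makes it robust to ambiguity and occlusion:

import numpy as np

def mh_track_step(particles, weights, observe_likelihood, motion_noise=0.05):
    # One predict/weight/resample step of a generic multi-hypothesis tracker.
    # Each particle is one pose hypothesis; `observe_likelihood` scores a
    # hypothesis against the current image. Illustrative, not MH3DOT's code.
    particles = particles + np.random.normal(0.0, motion_noise, particles.shape)
    weights = weights * np.array([observe_likelihood(p) for p in particles])
    weights /= weights.sum()
    idx = np.random.choice(len(particles), len(particles), p=weights)  # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))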

LWR catching bottles

This video shows the KUKA LWR robot catching a falling bottle using a multiple-attractor dynamical system learned a priori from human demonstrations. As the object falls and rotates, the robot switches between the two attractors in real time by querying the learned model.
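
The learned model itself is not published with the video; the control idea can be sketched as follows, where each candidate grasp corresponds to one attractor and the likelihoods come from the learned model (all names here are illustrative):

import numpy as np

def ds_velocity(x, attractors, likelihoods, gain=2.0):
    # Multiple-attractor dynamical system sketch: at every control cycle the
    # attractor with the highest model likelihood for the current object state
    # is selected, and the commanded velocity flows toward it, xdot = -k(x - x*).
    # Re-evaluating the likelihoods each cycle yields the real-time switching.
    target = attractors[int(np.argmax(likelihoods))]
    return -gain * (x - target)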

LWR grasping a pitcher

The video shows a KUKA LWR trying to grasp a pitcher as the pitcher is moved around the workspace. The robot adapts its motion according to a model learned from human demonstrations. The model is a multiple-attractor dynamical system with attractors at the desired grasping points.

LWR searching for an object on a table

This video shows the KUKA LWR searching for an object on a table without any visual information. The robot has prior knowledge of the world. The search strategy is learned from human demonstrations, from which a mapping from beliefs to actions is learnt.
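
As an illustration of the belief-to-action idea, the sketch below maintains a grid belief over the object location, updates it after each tactile probe (a perfect contact sensor is assumed for brevity), and uses a greedy rule as a placeholder where the learned mapping would sit:

import numpy as np

def update_belief(belief, probed_cell, contact):
    # Bayes update of a grid belief over the object location after a probe.
    post = np.zeros_like(belief) if contact else belief.copy()
    post[probed_cell] = 1.0 if contact else 0.0
    return post / post.sum()

def next_probe(belief, policy=None):
    # The learned belief-to-action mapping would be queried here; as a
    # placeholder, greedily probe the most likely cell.
    if policy is not None:
        return policy(belief)
    return np.unravel_index(np.argmax(belief), belief.shape)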

A Practical Approach to Sensor-based Motion Generation Under Uncertainty for Mobile Manipulation

Mobile manipulation targets applications in dynamic and unstructured environments. Motion generation methods suitable for these applications must account for end-effector task constraints, must reason about environment uncertainty, i.e. the fact that the exact state of the dynamic environment cannot be known to the robot, and should do so using only on-board sensors. We present the Expected-Shortest-Path Elastic Roadmap (ESPER) planner as a motion generation method suitable for mobile manipulation. It integrates task-constrained, whole-body, reactive motion generation in high-dimensional configuration space with reasoning about uncertainty using a time-dependent probabilistic model. We generate task-consistent motion in uncertain environments on a real-world mobile manipulator, relying only on on-board sensors.
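
ESPER's exact formulation is beyond this summary; the hedged sketch below only illustrates the general idea of an expected-shortest-path query over a roadmap whose edges carry a probability of being traversable:

import heapq

def expected_shortest_path(graph, start, goal):
    # Dijkstra over a roadmap `graph`: node -> {neighbor: (length, p_free)}.
    # The edge cost length / p_free grows as an edge becomes less likely to be
    # free. An illustrative stand-in, not the actual ESPER algorithm.
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, (length, p_free) in graph[u].items():
            nd = d + length / max(p_free, 1e-6)
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if goal != start and goal not in prev:
        return None  # unreachable
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]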

Motion planning for manipulators on an OctoMap

The Exploring/Exploiting Tree (EET) motion planner plans a collision-free trajectory on an OctoMap 3D occupancy grid constructed from real sensor data. The goal of the planning task is a desired end-effector pose. The EET generates a valid and reasonably short trajectory in all eight trials. The visualization is slower than real time; the pure planning time never exceeded five seconds. The demo uses the RobLib (roblib.sf.net) and OctoMap (octomap.github.io) libraries.
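
The collision checking underlying such a planner can be illustrated with a simple sketch: every trajectory waypoint is validated by testing sampled robot body points against the occupancy map. The names is_occupied and robot_points are placeholders (e.g. for an octree occupancy lookup and a forward-kinematics body sampler), not RobLib or OctoMap API:

def trajectory_collision_free(waypoints, is_occupied, robot_points):
    # Reject the trajectory as soon as any body point at any waypoint
    # configuration falls into an occupied voxel of the 3D map.
    for q in waypoints:
        for p in robot_points(q):   # sampled points on the robot body at config q
            if is_occupied(p):
                return False
    return True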