Best Cognitive Robotics Paper Award at ICRA 2012

We are delighted to announce that the paper 'The RoboEarth Language: Representing and Exchanging Knowledge about Actions, Objects, and Environments' (Moritz Tenorth, Alexander Perzylo, Reinhard Lafrenz and Michael Beetz) has won the Best Cognitive Robotics Paper Award at ICRA 2012.
The paper covers the design of the semantic RoboEarth language and how it is used to describe and reason about tasks, objects and environments in a way that allows knowledge to be shared between different robots. Task descriptions include information about required physical attributes and software components, which are matched against the capabilities encoded in a robot's semantic self-model. This makes it possible to infer whether a robot is capable of performing a given task and, if not, whether it could become capable by downloading additional information from RoboEarth.
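The matching idea can be sketched in a few lines. This is our own illustration, not the RoboEarth language itself (which is an OWL-based semantic representation); the requirement names and the helper function are hypothetical:

```python
# Hypothetical sketch of the capability check described above. The
# requirement strings and missing_requirements() are illustrative,
# not part of RoboEarth's actual OWL-based API.

def missing_requirements(task_requirements, robot_capabilities):
    """Return the task requirements the robot's self-model does not satisfy."""
    return [req for req in task_requirements if req not in robot_capabilities]

# A task description might require certain components and attributes:
task = ["arm", "gripper", "object_recognition", "map:hospital_room"]
robot = {"arm", "gripper", "object_recognition"}   # semantic self-model

gaps = missing_requirements(task, robot)
# The robot is capable iff nothing is missing; otherwise it can check
# whether the missing pieces (here: the map) are downloadable from RoboEarth.
print(gaps)  # ['map:hospital_room']
```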

RoboEarth at CogSys 2012

The 5th International Conference on Cognitive Systems was held in Vienna, Austria, on February 23 - 24, 2012.
The conference presented the state of the art in cognitive systems and robotics, showcased European research efforts in this field and provided an opportunity for open discussions.
Part of the lively interaction was a talk by Heico Sandee about the RoboEarth project. It covered how robots can exchange knowledge through RoboEarth and how to determine whether that knowledge might be useful for a specific robot.

To watch the talk in full length (~18 min), please follow the link below.

Third Internal RoboEarth Workshop (Update)

Update (Sep 11, 2012):
We have now compiled a video of the demonstrator we created during the workshop, including explanations of what is going on behind the robots' visible actions:

The third internal RoboEarth workshop took place at the Technical University of Munich from February 8th to 12th, 2012, and was directly followed by RoboEarth's second Annual Review meeting on February 13th, 2012.

The RoboEarth Team

The RoboEarth demonstrator developed during the week-long workshop showed how two robots with different hardware and in different locations could use RoboEarth to share knowledge.

First, a PR2 robot in downtown Munich was asked to serve a drink to a patient resting in a bed in a mock-up hospital room. Because a matching semantic task description was available in the RoboEarth database, the PR2 could download it and infer whether its capabilities met the task's requirements and which other knowledge it was still missing to execute the task, e.g. object detection models and environment maps. It successfully checked the availability of the missing components on RoboEarth, downloaded them and could then start executing the task. As the drink was stored inside a cabinet behind a closed door, the PR2 had to learn the articulation model for that door. After completing the learning process, the PR2 annotated the cabinet's object model with the learned articulation model and updated it in the RoboEarth database.

Then an Amigo robot in a similar (but not identical) hospital room in Garching, close to Munich, was given the same command to serve a drink. Like the PR2, it downloaded the needed knowledge from RoboEarth. This time the articulation model was included, so Amigo did not have to learn it itself during task execution: it was able to grasp the handle of the door and open it right away.
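To make the benefit concrete, here is a minimal sketch of what a learned articulation model for a door might boil down to, and how a second robot can reuse it directly. The representation (a planar revolute joint with hinge position, handle radius and opening angle) is our simplification, not RoboEarth's actual model format:

```python
import math

# Illustrative articulation model for a cabinet door, reduced to a planar
# revolute joint (our simplification, not the actual RoboEarth format).
door_model = {
    "joint": "revolute",
    "hinge_xy": (0.0, 0.0),        # hinge position in the cabinet frame (m)
    "radius": 0.45,                 # distance hinge -> handle (m)
    "max_angle": math.radians(90),  # how far the door swings open
}

def handle_waypoints(model, steps=5):
    """Waypoints the gripper should follow to swing the door open."""
    hx, hy = model["hinge_xy"]
    r = model["radius"]
    return [
        (hx + r * math.cos(a), hy + r * math.sin(a))
        for a in (i * model["max_angle"] / steps for i in range(steps + 1))
    ]

# A robot that downloads this model can execute the arc directly,
# skipping the learning phase the first robot had to go through.
for x, y in handle_waypoints(door_model):
    print(f"{x:.2f} {y:.2f}")
```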

Amigo opening a door

This demonstration showed what a shared knowledge base like RoboEarth, including its reasoning services, can add to the development of robots: the robots were able to navigate, recognize objects and perform complex manipulation tasks without being explicitly pre-programmed for these tasks.

To achieve this, all of the involved PhD students and several professors gathered in Munich to work on tomorrow's cloud robotics solutions. The week was characterized by a large amount of work and a limited amount of sleep - and a joint evening at a Bavarian restaurant.

Some RoboEarth members having dinner

European Robotics Week 2011 (Update)

Update (Jan 05, 2012):
More than 100 people joined the introduction to RoboEarth and the interactive workshops, creating and detecting their first 3D object models using the RoboEarth platform. We want to thank everyone who helped to organize the successful event, as well as all participants who showed their interest.

euRobotics Week

RoboEarth will present itself as part of the European Robotics Week from November 28th to December 4th, 2011.
To this end, the RoboEarth team will set up a live webcast on Friday, December 2nd, 2011, starting at 15.00 (CET).

Dr. Oliver Zweigle will give a brief introduction to the concepts of RoboEarth, followed by an interactive workshop. The aim of the workshop is to let anyone interested try out the RoboEarth software, build 3D object models themselves and use them to detect the described objects.

The workshop's prerequisites and details on how to register can be found on the webcast page. Registration will be open until November 20th, 2011. The webcast itself will also be made available through this website.

RoboEarth at IROS 2011

Members of the RoboEarth team contributed seven papers to the IROS'11 conference, which took place in San Francisco (USA) from September 25th to 30th. In addition, RoboEarth supported a workshop on Knowledge Representation for Autonomous Robots.

During the workshop, Jos Elfring gave an introduction to RoboEarth's approach to world modelling. It uses a multiple hypothesis filter (MHF) to keep track of objects over time and introduces techniques to improve the probabilistic models by taking prior knowledge about objects into account, e.g. object dynamics, expected locations, relations between object classes and detector characteristics. For more details on this topic, take a look at the corresponding paper, Knowledge-Driven World Modeling.
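The effect of such prior knowledge can be illustrated with a single Bayesian update. This is our own minimal sketch, not the paper's MHF implementation; the probabilities are invented for illustration:

```python
# Minimal sketch (our illustration, not the paper's implementation) of how
# prior knowledge sharpens a probabilistic world model: Bayes' rule combines
# a detector's error characteristics with a prior over object locations.

def posterior(prior, p_detect_given_present, p_detect_given_absent, detected):
    """P(object present | detection result) via Bayes' rule."""
    if detected:
        num = p_detect_given_present * prior
        den = num + p_detect_given_absent * (1 - prior)
    else:
        num = (1 - p_detect_given_present) * prior
        den = num + (1 - p_detect_given_absent) * (1 - prior)
    return num / den

# A mug is likely near the sink (prior 0.6) and unlikely on the floor (0.05).
# The same weak detection is far more credible where the prior is high:
p_sink  = posterior(0.60, 0.7, 0.2, detected=True)
p_floor = posterior(0.05, 0.7, 0.2, detected=True)
print(round(p_sink, 2), round(p_floor, 2))  # 0.84 0.16
```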

Other papers presented during the regular paper sessions were:

RoboEarth's First Open Source Release

We are happy to announce RoboEarth's first open source software release. This release allows you to create 3D object models and upload them to RoboEarth. It also allows you to download any model stored in RoboEarth and detect the described object using a Kinect or a webcam.

If you are familiar with ROS, creating and using object models is easy. As shown in the video tutorial above, the process uses three main packages:

  • RoboEarth's re_object_recorder package allows you to create your own 3D object model using Microsoft's Kinect sensor. By recording and merging point clouds gathered from different angles around the object, a detailed model is created, which may be shared with the world by uploading it to RoboEarth.
  • RoboEarth's re_kinect_object_detector package allows you to detect objects described by models you download from RoboEarth, using a Kinect.
  • Alternatively, you may also use RoboEarth's re_vision package to detect objects using a common RGB camera.

A complete overview of the process can be found at

RoboEarth aims at creating an object database including semantic descriptors. Semantic descriptors allow robots not only to detect objects, but also to reason about them. For example, if a robot is asked to serve a drink, semantic object descriptors allow it to determine whether all required objects are available, whether an object model is missing, and whether a missing model is available via RoboEarth. You can help us with this process by supplying meaningful names and descriptions for the objects you create.
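That serve-a-drink reasoning can be sketched as a simple availability check. The object names, the local model set and the database entries below are made up for illustration and do not reflect the actual RoboEarth database schema:

```python
# Hypothetical illustration of the reasoning described above; all names
# and paths are invented, not the actual RoboEarth database schema.

recipe_objects = ["bottle", "cup", "cabinet"]           # required by "serve a drink"
local_models   = {"cup"}                                 # already on the robot
roboearth_db   = {"bottle": "models/bottle.zip",         # downloadable models
                  "cabinet": "models/cabinet.zip"}

to_download, unavailable = [], []
for obj in recipe_objects:
    if obj in local_models:
        continue
    (to_download if obj in roboearth_db else unavailable).append(obj)

# The task is executable once everything in to_download has been fetched;
# anything in unavailable would block execution.
print(to_download)   # ['bottle', 'cabinet']
print(unavailable)   # []
```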

We are looking forward to your feedback in the comments below or at info at

RoboEarth - A World Wide Web for Robots

The latest issue of the IEEE Robotics and Automation Magazine (RAM) is dedicated to building a WWW for robots.

Cover of the IEEE RA Magazine

(C) IEEE Robotics and Automation Magazine 2011

Our contribution, entitled RoboEarth - A World Wide Web for Robots, gives an overview of RoboEarth: its overall architecture, all key components, the available interfaces and an in-depth look at the topics the RoboEarth team is currently working on.
The paper also summarizes the work done so far and describes RoboEarth's first three demonstrators.

Cover of the RoboEarth journal paper

(C) FOTOSEARCH, IEEE Robotics and Automation Magazine 2011

Other contributions to this Special Issue highlight research that is intimately connected to RoboEarth's vision of creating an Internet for robots:

  • Willow Garage's Matei Ciocarlie et al. describe a 3D object database that contains grasp points, paving the way for linking a first simple Action Recipe to grasp and pick up objects.
  • Tenorth et al. propose an approach that allows robots to make use of information from the Web, such as instructions to perform everyday tasks or descriptions of properties and the appearance of objects. The authors propose techniques to translate the information from human-readable form into representations the robot can use.
  • A contribution by Daniel Roggen et al. discusses methods for the automatic detection of actions in the wearable computing community, which provide valuable hints for RoboEarth's Action and Situation Recognition and Labeling component.
  • Mozos et al. address the problem of exploiting the structure in today's designed workplace interiors as an example for how future object model Web databases can be used by service robots to add semantics to their sensors' readings and to build models of their environment.
  • Blake et al. introduce both developmental and operational paradigms whereby robots can be outfitted with Web-oriented software interfaces that give them access to universally standard Web resources. A case study investigates and demonstrates the conversion of traditional robotic data exchange for communication with Web services.

It is exciting to see so many common efforts being made in the robotics community, and we hope that this Special Issue and our contribution will inspire many more researchers to work towards making the WWW for robots a reality.

RoboEarth in motion: Videos of the first three demonstrators

To catch a glimpse of what RoboEarth is all about, watch the following videos showing three demonstrators that have been developed over the past months.

  • During RoboEarth's first internal workshop a demonstrator was built to showcase how sharing environmental information can be beneficial to robots, even if they use different hardware and/or software setups:
  • For the second demonstrator a humanoid robot was asked to serve a drink to a patient in a mock-up hospital room. By using RoboEarth the robot was able to achieve its task in spite of having only basic capabilities for movement, perception and mapping. It downloaded an Action Recipe from the RoboEarth database, which provided a machine-understandable semantic description of the action. Using logical reasoning, the robot could identify missing components, such as a map of the room and models for all involved objects, and download them from RoboEarth:

  • The third demonstrator shows the feasibility of sharing articulation models for doors and drawers through RoboEarth.

Together, these three demonstrators take a first step towards showing that RoboEarth is feasible and useful. In particular, the first demonstrator illustrated how sharing data between multiple robots can lead to faster learning. The second showed that taking prior knowledge into account can greatly speed up complex tasks, such as serving a drink in the semi-structured environment of a hospital. The third showed how robots can create knowledge that is useful across different robot platforms. Overall, using RoboEarth allows robots to benefit from the experience of other robots.

RoboEarth at ICRA 2011

The RoboEarth team built a demo for ICRA 2011. The project's Object Recorder was presented at the RoboEarth booth, showing how to build 3D models of objects and share them through RoboEarth with ease. The demo attracted a lot of interest and feedback, as visitors could use the system themselves to build models of arbitrary objects and make them available worldwide in an instant.

A visitor using the object recorder system

For this purpose a low-cost sensor was used: the Microsoft Kinect.

Garden gnome and its 3D model

The object under investigation has to be placed on a marker table that holds the following pattern: marker_template. It is important to print the pattern at its original size (marker edge length of 80.0 mm). These AR markers are used to register the point clouds for the model from different views without any expensive computations. The software used for the demo is part of the RoboEarth open source software initiative and will be released among other software components in July 2011. The shared object models can be downloaded and used by robots and other intelligent systems to recognize objects they did not know about before.
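The registration trick itself is simple: because the detected AR marker yields the camera's pose relative to the marker board, each view's point cloud can be transformed into the common marker frame and merged directly. A minimal sketch of that transform, as our own simplification rather than the released package's code:

```python
# Sketch of marker-based registration (our simplification, not the released
# package's code): the AR marker detection yields a 4x4 pose of the camera
# in the marker frame, so each cloud recorded in camera coordinates can be
# mapped into the shared marker frame with one matrix transform.

def transform(T, p):
    """Apply a 4x4 homogeneous transform T to a 3D point p."""
    x, y, z = p
    return tuple(T[i][0] * x + T[i][1] * y + T[i][2] * z + T[i][3]
                 for i in range(3))

def to_marker_frame(cloud_cam, T_marker_cam):
    """Transform a list of camera-frame points into the marker frame."""
    return [transform(T_marker_cam, p) for p in cloud_cam]

# Example pose: camera shifted 1 m along the marker board's x-axis, no rotation.
T = [[1, 0, 0, 1.0],
     [0, 1, 0, 0.0],
     [0, 0, 1, 0.0],
     [0, 0, 0, 1.0]]
cloud = [(0.0, 0.0, 0.5), (0.1, 0.2, 0.5)]
print(to_marker_frame(cloud, T)[0])  # (1.0, 0.0, 0.5)
```

Clouds from all viewpoints end up in the same frame, so merging them is a simple concatenation rather than an iterative alignment.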