Rapyuta: The RoboEarth Cloud Engine


It is our pleasure to announce the first public release of Rapyuta: The RoboEarth Cloud Engine. Rapyuta is an open source cloud robotics platform: a Platform-as-a-Service (PaaS) framework designed specifically for robotics applications.

Rapyuta helps robots offload heavy computation by providing secure, customizable computing environments in the cloud. Robots can start their own computing environment, launch any computational node uploaded by the developer, and communicate with the launched nodes using the WebSockets protocol.
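As a rough sketch of this idea, a robot-side client might package a node-launch request as JSON and send it over its WebSocket connection. Everything below (message fields, endpoint URL, container name, helper function) is a hypothetical illustration of the pattern, not Rapyuta's actual wire protocol:

```python
import json

def make_launch_request(container_id: str, node_pkg: str, node_exe: str) -> str:
    """Build a JSON message asking the cloud engine to launch a node inside
    the robot's computing environment. The schema here is illustrative only,
    not Rapyuta's real message format."""
    msg = {
        "type": "launch_node",         # hypothetical message type
        "container": container_id,     # the robot's computing environment
        "node": {"package": node_pkg, "executable": node_exe},
    }
    return json.dumps(msg)

# A client would send this string over an open WebSocket connection, e.g.
# with the third-party `websocket-client` library (illustrative endpoint):
#   ws = websocket.create_connection("wss://rapyuta.example.org/robot")
#   ws.send(make_launch_request("robot-42", "re_vision", "detector"))
```

The point is only that the robot keeps a lightweight, message-based link to the cloud while the heavy computation runs remotely.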


The above figure shows a simplified overview of the Rapyuta framework: Each robot connected to Rapyuta has a secure computing environment (rectangular boxes), giving it the ability to move heavy computation into the cloud. Computing environments have a high-bandwidth connection to the RoboEarth knowledge repository (stacked circular disks). This allows robots to process data directly in the cloud, without having to download it for local processing. Furthermore, the computing environments are tightly interconnected with each other, which paves the way for the deployment of robotic teams.

The name Rapyuta is inspired by the movie Tenku no Shiro Rapyuta (English title: Castle in the Sky) by Hayao Miyazaki, in which Rapyuta is the castle in the sky inhabited by robots.

To learn more and contribute to this open-source effort, visit: http://rapyuta.org/.

RoboEarth's First Open Source Release

We are happy to announce RoboEarth's first open source software release. This release allows you to create 3D object models and upload them to RoboEarth. It also allows you to download any model stored in RoboEarth and detect the described object using a Kinect or a webcam.

If you are familiar with ROS, creating and using object models is easy. As shown in the video tutorial above, the process uses three main packages:

  • RoboEarth's re_object_recorder package allows you to create your own 3D object model using Microsoft's Kinect sensor. By recording and merging point clouds gathered from different angles around the object, a detailed model is created, which may be shared with the world by uploading it to RoboEarth.
  • RoboEarth's re_kinect_object_detector package allows you to detect models you download from RoboEarth using a Kinect.
  • Alternatively, you may also use RoboEarth's re_vision package to detect objects using a common RGB camera.

A complete overview of the process can be found at http://www.ros.org/wiki/roboearth.

RoboEarth aims at creating an object database that includes semantic descriptors. Semantic descriptors allow robots not only to detect objects, but to reason about them. For example, if a robot is asked to serve a drink, semantic object descriptors allow it to determine whether all required object models are available, which models are missing, and whether a missing model can be downloaded from RoboEarth. You can help us with that process by supplying meaningful names and descriptions for the objects you create.
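This kind of reasoning can be sketched in a few lines. The example below is a toy illustration of the "which models are missing, and can RoboEarth supply them?" step only; the real system reasons over semantic descriptions rather than matching plain object names:

```python
def missing_models(required_objects, local_models, roboearth_models):
    """Given the objects a task requires, split the missing ones into those
    downloadable from RoboEarth and those unavailable anywhere.
    Toy name-matching sketch, not RoboEarth's semantic reasoning."""
    missing = [o for o in required_objects if o not in local_models]
    downloadable = [o for o in missing if o in roboearth_models]
    unavailable = [o for o in missing if o not in roboearth_models]
    return downloadable, unavailable

# Hypothetical "serve a drink" task:
task = ["cup", "bottle", "tray"]
local = {"cup"}                      # models the robot already has
remote = {"bottle", "table"}         # models stored in RoboEarth
downloadable, unavailable = missing_models(task, local, remote)
# "bottle" can be fetched from RoboEarth; "tray" is missing entirely.
```

With semantic descriptors in place of bare names, the same check can also exploit object properties (e.g. that any graspable container could serve as a cup).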

We are looking forward to your feedback in the comments below or at info at roboearth.org.

RoboEarth - A World Wide Web for Robots

The latest issue of the IEEE Robotics and Automation Magazine (RAM) is dedicated to building a WWW for robots.

Cover of the IEEE RA Magazine

(C) IEEE Robotics and Automation Magazine 2011

Our contribution, entitled RoboEarth - A World Wide Web for Robots, gives an overview of RoboEarth: its overall architecture, its key components, the available interfaces, and an in-depth look at the topics the RoboEarth team is currently working on.
The paper also summarizes the work done so far and describes RoboEarth's first three demonstrators.

Cover of the RoboEarth journal paper

(C) FOTOSEARCH, IEEE Robotics and Automation Magazine 2011

Other contributions to this Special Issue highlight research that is intimately connected to RoboEarth's vision of creating an Internet for robots:

  • Willow Garage's Matei Ciocarlie et al. describe a 3D object database that contains grasp points, paving the way for linking a first simple Action Recipe to grasp and pick up objects.
  • Tenorth et al. propose an approach that allows robots to make use of information from the Web, such as instructions to perform everyday tasks or descriptions of properties and the appearance of objects. The authors propose techniques to translate the information from human-readable form into representations the robot can use.
  • A contribution by Daniel Roggen et al. discusses methods from the wearable computing community for the automatic detection of actions, which provide valuable hints for RoboEarth's Action and Situation Recognition and Labeling component.
  • Mozos et al. address the problem of exploiting the structure inherent in today's human-designed workplace interiors, as an example of how future object model Web databases can be used by service robots to add semantics to their sensor readings and to build models of their environment.
  • Blake et al. introduce both developmental and operational paradigms whereby robots can be outfitted with Web-oriented software interfaces that give them access to standard Web resources. A case study investigates and demonstrates adapting traditional robotic data exchange to communicate with Web services.

It is exciting to see so many common efforts being made in the robotics community, and we hope that this Special Issue and our contribution will inspire many more researchers to work towards making the WWW for robots a reality.

RoboEarth in motion: Videos of the first three demonstrators

To catch a glimpse of what RoboEarth is all about, watch the following videos showing three demonstrators that have been developed over the last months.

  • During RoboEarth's first internal workshop, a demonstrator was built to showcase how sharing environmental information can benefit robots, even if they use different hardware and/or software setups:
  • For the second demonstrator, a humanoid robot was asked to serve a drink to a patient in a mock-up hospital room. By using RoboEarth, the robot was able to achieve its task despite having only basic capabilities for movement, perception and mapping. It downloaded an Action Recipe from the RoboEarth database, which provided a machine-understandable semantic description of the action. Using logical reasoning, the robot could identify missing components, such as a map of the room and models for all involved objects, and download them from RoboEarth:
  • The third demonstrator shows the feasibility of sharing articulation models for doors and drawers through RoboEarth.

Together, these three demonstrators take a first step towards showing that RoboEarth is feasible and useful. In particular, the first demonstrator illustrates how sharing data between multiple robots can lead to faster learning. The second shows that taking prior knowledge into account can greatly speed up complex tasks, such as serving a drink in the semi-structured environment of a hospital. The third shows how robots can create knowledge that is useful across different robot platforms. Overall, RoboEarth allows robots to benefit from the experience of other robots.