Master's Thesis - Algorithm for Landing and Uprighting the MMX Rover with Gyroscopes

Abstract The Martian Moons eXploration (MMX) mission is planned by the Japanese space agency JAXA and will travel to Mars and its moons Phobos and Deimos. The main objective of the spacecraft is to collect a sample from Phobos. As part of this mission, a small rover will be used to de-risk the main probe by exploring Phobos prior to touchdown. This rover is developed by CNES and DLR and will be released from the spacecraft approximately 50 m above the ground, from where it will fall onto Phobos. After some bounces on the surface, the rover will autonomously unfold itself, deploy its solar panels, and start recharging its batteries. The problem arises from the fact that after the fall and the bounces, the folded rover could be in any orientation, so it is not guaranteed that the rover will be able to stand up and deploy its solar panels. The current implementation tackles this problem by blindly following a pre-defined sequence of steps. As this is a very critical single point of failure for the whole sub-mission, it is desirable to increase the success rate of the deployment. It is very important to get the unfolding right the first time: if the rover tries to unfold its solar panels without being in an upright position, the mechanism could get stuck and the mission would fail. Once the solar panels are deployed, the leg movement is drastically restricted, from free rotation to a specific angle range. This limits further recovery attempts, making the rover mission unlikely to succeed. Early prototypes have shown that the success rate can be increased with the help of additional sensory feedback. The hardware for the mission is already finalized, so only the sensor suite present at the current time can be used to supplement the uprighting process. The onboard two-axis MEMS gyroscopes are the best option for this purpose.
The hypothesis of this thesis is that a better result can be achieved by using the built-in two-axis MEMS gyroscopes as feedback. As the current implementation amounts to nothing more than a blind guess, an informed guess that checks different conditions to ensure the rover does not fall over should yield better results. To this end, different approaches to incorporating gyroscopic sensors into the uprighting process are collected and implemented. Finally, all implementations are evaluated and compared against the current implementation to see whether they are more successful. As access to real hardware is not feasible and the environmental conditions on Phobos, with its low gravity, are very special, these experiments cannot be carried out on real hardware; instead, everything must be simulated. The whole system, including the sensors, needs to be simulated in order to create an ensemble of simulation runs, which is necessary to infer statistically relevant information and draw a conclusion about the success of the different methods. As this also includes the sensor, a representative model of the sensor, comparable to its real-world counterpart, has to be created. This model should capture all necessary noise sources and uncertainties, but at the same time it should not overcomplicate the already extensive simulation, in order to keep simulation times low. In the end, all these approaches are compared comprehensively against the current implementation to see whether the hypothesis is correct and where the strengths and weaknesses of the different methods lie.
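A sensor model like the one described above is commonly built from the true angular rate plus a slowly drifting bias (random walk) plus white measurement noise. The following single-axis sketch illustrates that standard structure only; the function name and default parameter values are illustrative and are not taken from the thesis itself:

```python
import numpy as np

def simulate_gyro(true_rates, dt, arw=0.01, bias_rw=0.001, bias0=0.0, rng=None):
    """Illustrative single-axis MEMS gyroscope model (not the thesis's actual model).

    true_rates : array of true angular rates [rad/s]
    dt         : sample interval [s]
    arw        : angle-random-walk coefficient -> white noise [rad/s/sqrt(Hz)]
    bias_rw    : bias random-walk intensity [rad/s/sqrt(s)]
    bias0      : initial bias [rad/s]
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(true_rates)
    # Slow bias drift modeled as a discrete random walk.
    bias = bias0 + np.cumsum(bias_rw * np.sqrt(dt) * rng.standard_normal(n))
    # White measurement noise, scaled by the sampling bandwidth 1/dt.
    white = (arw / np.sqrt(dt)) * rng.standard_normal(n)
    return np.asarray(true_rates) + bias + white
```

Keeping the model to these two noise terms is one way to stay representative without slowing down an already extensive simulation.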

Bachelor's Thesis - Rover Simulation in the Unreal Engine 4

Abstract The subject of this bachelor's thesis is the development of a framework for simulating robots and rovers in the Unreal Engine 4 (UE4). The simulation connects to the robotic framework Robot Operating System (ROS) and provides photo-realistic images for computer vision applications. The well-established hardware abstraction layer interface ros_control was used to support actuator control in the simulation, and the rosbridge package was used to extract sensory information from the photo-realistic virtual environment. The newly proposed framework is a proof of concept for a game-engine-based simulator coupled with ROS that enables computer vision experiments without requiring specialized hardware. The framework can also be used to gather datasets for training neural networks or to evaluate existing machine learning techniques. It is evaluated on the example of the European Rover Challenge (ERC); the student team from the Scientific Workgroup for Rocketry and Spaceflight (WARR) has already created a simulation, which serves as a reference. The results of this bachelor's thesis show that the framework works as intended for controlling simple actuators and joints; however, the current physics simulation of UE4 does not provide enough stability to simulate the proposed complex scenario without major artifacts. Moreover, the camera plugin used in this thesis negatively influences the physics simulation when its parameters are changed to achieve real-time high-definition support. The framework does, however, provide a real alternative to state-of-the-art solutions already in use, as it combines an easy-to-manipulate robotic simulator with a powerful graphics engine for photo-realistic simulations.
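The rosbridge package mentioned above exposes ROS over a WebSocket using a JSON-based protocol, so any client can subscribe to topics by sending small JSON operations. As a minimal illustration of that protocol (the topic name and message type here are hypothetical examples, not the ones used in the thesis), a subscribe request can be built like this:

```python
import json

def rosbridge_subscribe_msg(topic, msg_type, msg_id="sub-1"):
    # rosbridge v2 protocol: each operation is a JSON object with an "op" field,
    # sent over the WebSocket connection to the rosbridge server.
    return json.dumps({
        "op": "subscribe",
        "id": msg_id,
        "topic": topic,
        "type": msg_type,
    })

# Example: subscribing to a (hypothetical) camera image topic.
request = rosbridge_subscribe_msg("/camera/image_raw", "sensor_msgs/Image")
```

After sending such a request, the server pushes each message on the topic back as a JSON object with `"op": "publish"`, which is how sensor data leaves the simulated environment.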