Autonomous agile flight raises fundamental challenges in robotics, such as coping with unreliable state estimation, reacting optimally to dynamically changing environments, and coupling perception and action in real time under severe resource constraints. In this work, we consider these challenges in the context of autonomous, vision-based drone racing in dynamic environments. Our approach combines a convolutional neural network (CNN) with a state-of-the-art path-planning and control system. The CNN directly maps raw images into a robust representation in the form of a waypoint and desired speed. This information is then used by the planner to generate a short, minimum-jerk trajectory segment and corresponding motor commands to reach the desired goal. We demonstrate our method in autonomous agile flight scenarios, in which a vision-based quadrotor traverses drone-racing tracks with possibly moving gates. Our method does not require any explicit map of the environment and runs fully onboard. We extensively test the precision and robustness of the approach in simulation and in the physical world. We also evaluate our method against state-of-the-art navigation approaches and professional human drone pilots.
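To make the planning step concrete, below is a minimal sketch (not the authors' planner) of how a CNN output, a 3D waypoint and a desired speed, could be turned into a short minimum-jerk trajectory segment. The function name, the rest-to-rest quintic profile, and the duration heuristic are illustrative assumptions, not the implementation used in the paper.

    # A minimal sketch of minimum-jerk segment generation from a
    # waypoint and desired speed. All names and the rest-to-rest
    # quintic profile are illustrative assumptions.
    import numpy as np

    def min_jerk_segment(p0, waypoint, speed, n_samples=50):
        """Rest-to-rest minimum-jerk segment from p0 toward waypoint.

        p0, waypoint : (3,) arrays, current and goal position [m]
        speed        : desired average speed [m/s] (from the CNN)
        Returns sampled positions along the segment, shape (n_samples, 3).
        """
        p0 = np.asarray(p0, dtype=float)
        pf = np.asarray(waypoint, dtype=float)
        T = np.linalg.norm(pf - p0) / max(speed, 1e-6)  # segment duration
        tau = np.linspace(0.0, 1.0, n_samples)          # normalized time t/T
        # Quintic minimum-jerk blend: zero velocity and acceleration
        # at both endpoints, minimizing integrated squared jerk.
        s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
        return p0[None, :] + s[:, None] * (pf - p0)[None, :]

    # Example: plan a segment toward a waypoint 4 m ahead at 2 m/s.
    traj = min_jerk_segment([0, 0, 1], [4, 0, 1.5], speed=2.0)
    print(traj[0], traj[-1])  # starts at p0, ends at the waypoint

In a receding-horizon setting, such a segment would be replanned continuously as new CNN outputs arrive, so only the beginning of each segment is ever executed.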
Reference:
E. Kaufmann, A. Loquercio, R. Ranftl, A. Dosovitskiy, V. Koltun, D. Scaramuzza
Deep Drone Racing: Learning Agile Flight in Dynamic Environments
PDF: https://arxiv.org/abs/1806.08548
Our research page on deep learning:
http://rpg.ifi.uzh.ch/research_learning.html
Affiliations:
E. Kaufmann, A. Loquercio and D. Scaramuzza are with the Robotics and Perception Group, Dep. of Informatics, University of Zurich, and Dep. of Neuroinformatics, University of Zurich and ETH Zurich, Switzerland
http://rpg.ifi.uzh.ch/
R. Ranftl, A. Dosovitskiy and V. Koltun are with Intel Labs