Published July 2020 | Version Submitted
Book Section - Chapter Open

Learning Pose Estimation for UAV Autonomous Navigation and Landing Using Visual-Inertial Sensor Data

Abstract

In this work, we propose a robust network-in-the-loop control system for autonomous navigation and landing of an unmanned aerial vehicle (UAV). To estimate the UAV's absolute pose, we develop a deep neural network (DNN) architecture for visual-inertial odometry, which provides a robust alternative to traditional methods. We first evaluate the accuracy of the estimation by comparing the predictions of our model to traditional visual-inertial approaches on the publicly available EuRoC MAV dataset. The results indicate a clear improvement in the accuracy of the pose estimation, up to 25% over the baseline. Finally, we integrate the data-driven estimator into the closed-loop flight control system of AirSim, a simulator available as a plugin for Unreal Engine, and we provide simulation results for autonomous navigation and landing.
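The reported accuracy gain is a relative reduction in pose-estimation error against a classical baseline. A minimal sketch of how such a comparison is computed, using translation RMSE over a trajectory with hypothetical data (this is not the paper's evaluation code, and the trajectories below are invented for illustration):

```python
import math

def translation_rmse(estimated, ground_truth):
    """Root-mean-square error between estimated and ground-truth 3-D positions."""
    assert len(estimated) == len(ground_truth)
    squared_errors = [
        sum((e - g) ** 2 for e, g in zip(est, gt))
        for est, gt in zip(estimated, ground_truth)
    ]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Hypothetical trajectories (metres): ground truth, a classical VIO baseline,
# and a learned estimator that tracks the ground truth more closely.
gt       = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (2.0, 0.0, 1.0)]
baseline = [(0.1, 0.0, 1.0), (1.2, 0.1, 1.0), (2.2, -0.1, 1.0)]
learned  = [(0.05, 0.0, 1.0), (1.1, 0.05, 1.0), (2.1, 0.0, 1.0)]

# Relative improvement of the learned estimator over the baseline.
improvement = 1 - translation_rmse(learned, gt) / translation_rmse(baseline, gt)
print(f"relative RMSE improvement: {improvement:.0%}")
```

On the EuRoC MAV benchmark, the same kind of per-trajectory error comparison underlies the "up to 25%" figure quoted in the abstract.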

Additional Information

© 2020 AACC. F. Baldini is supported in part by DARPA PAI grant HR0011-18-9-0035. A. Anandkumar is supported in part by DARPA PAI grant HR0011-18-9-0035, a Bren Endowed Chair, a Microsoft Faculty Fellowship, a Google Faculty Award, and an Adobe Grant.

Attached Files

Submitted - 1912.04527.pdf

Files

1912.04527.pdf (5.8 MB, md5:360ca14625d649c4608db7b67dcf4257)

Additional details

Identifiers

Eprint ID
100568
Resolver ID
CaltechAUTHORS:20200108-154918519

Funding

Defense Advanced Research Projects Agency (DARPA)
HR0011-18-9-0035
Bren Professor of Computing and Mathematical Sciences
Microsoft Faculty Fellowship
Google
Adobe

Dates

Created: 2020-01-08 (from EPrints datestamp field)
Updated: 2021-11-16 (from EPrints last_modified field)

Caltech Custom Metadata

Caltech groups
Division of Biology and Biological Engineering (BBE)