CaltechAUTHORS
  A Caltech Library Service

A seasonally invariant deep transform for visual terrain-relative navigation

Fragoso, Anthony T. and Lee, Connor T. and McCoy, Austin S. and Chung, Soon-Jo (2021) A seasonally invariant deep transform for visual terrain-relative navigation. Science Robotics, 6 (55). Art. No. eabf3320. ISSN 2470-9476. doi:10.1126/scirobotics.abf3320. https://resolver.caltech.edu/CaltechAUTHORS:20210624-195102852

Supplemental Material (see Usage Policy):
PDF (Fig. S1) - 302kB
Video (MPEG) (Movie S1) - 51MB
Video (MPEG) (Movie S2) - 9MB
Video (MPEG) (Movie S3) - 7MB

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20210624-195102852

Abstract

Visual terrain-relative navigation (VTRN) is a localization method based on registering a source image taken from a robotic vehicle against a georeferenced target image. With high-resolution imagery databases of Earth and other planets now available, VTRN offers accurate, drift-free navigation for air and space robots even in the absence of external positioning signals. Despite its potential for high accuracy, however, VTRN remains extremely fragile to common and predictable seasonal effects, such as lighting, vegetation changes, and snow cover. Engineered registration algorithms are mature and have provable geometric advantages but cannot accommodate the content changes caused by seasonal effects and have poor matching skill. Approaches based on deep learning can accommodate image content changes but produce opaque position estimates that either lack an interpretable uncertainty or require tedious human annotation. In this work, we address these issues with targeted use of deep learning within an image transform architecture, which converts seasonal imagery to a stable, invariant domain that can be used by conventional algorithms without modification. Our transform preserves the geometric structure and uncertainty estimates of legacy approaches and demonstrates superior performance under extreme seasonal changes while also being easy to train and highly generalizable. We show that classical registration methods perform exceptionally well for robotic visual navigation when stabilized with the proposed architecture and are able to consistently anticipate reliable imagery. Gross mismatches were nearly eliminated in challenging and realistic visual navigation tasks that also included topographic and perspective effects.
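The pattern the abstract describes lends itself to a short sketch: both the onboard image and the georeferenced tile pass through the same learned transform into the invariant domain, and an unmodified classical registration step runs on the outputs, so the correlation peak keeps its usual interpretation as a match confidence. The Python snippet below is a minimal illustration of that pattern only; the model file name, preprocessing, and the choice of OpenCV normalized cross-correlation are assumptions, not the authors' released code.

```python
import cv2
import numpy as np
import torch

# Hypothetical: a trained U-Net-style transform network. The record notes the
# U-Net used is available at the cited sources, but this file name and the
# preprocessing below are illustrative assumptions.
transform_net = torch.jit.load("seasonal_transform.pt")
transform_net.eval()

def to_invariant(img_gray: np.ndarray) -> np.ndarray:
    """Map a grayscale image into the seasonally invariant domain."""
    x = torch.from_numpy(img_gray).float()[None, None] / 255.0  # shape 1x1xHxW
    with torch.no_grad():
        y = transform_net(x)
    return (y.squeeze().numpy() * 255.0).astype(np.uint8)

# Source image from the vehicle and a larger georeferenced target tile.
source = cv2.imread("onboard_view.png", cv2.IMREAD_GRAYSCALE)
target = cv2.imread("georeferenced_tile.png", cv2.IMREAD_GRAYSCALE)

# Both images pass through the same transform; the registration step itself
# is a conventional, unmodified algorithm (normalized cross-correlation here).
src_inv, tgt_inv = to_invariant(source), to_invariant(target)
response = cv2.matchTemplate(tgt_inv, src_inv, cv2.TM_CCOEFF_NORMED)
_, peak_val, _, peak_loc = cv2.minMaxLoc(response)

print(f"Estimated offset: {peak_loc}, correlation peak: {peak_val:.3f}")
```

Because the classical matcher is untouched, its geometric guarantees and uncertainty estimates carry over: a low or ambiguous correlation peak can still be used to reject unreliable imagery before it corrupts the navigation solution.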


Item Type: Article
Related URLs:
URL | URL Type | Description
https://doi.org/10.1126/scirobotics.abf3320 | DOI | Article
https://robotics.sciencemag.org/content/suppl/2021/06/21/6.55.eabf3320.DC1 | Publisher | Supplementary Materials
ORCID:
Author | ORCID
Fragoso, Anthony T. | 0000-0002-5805-9668
Lee, Connor T. | 0000-0002-5008-4092
McCoy, Austin S. | 0000-0003-3777-4475
Chung, Soon-Jo | 0000-0002-6657-3907
Additional Information: © 2021 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works. Submitted 19 October 2020; accepted 1 June 2021; published 23 June 2021. We thank P. Tokumaru. This project was funded in part by the Boeing Company with R. K. Li as Boeing Project Manager. C.T.L. acknowledges the National Science Foundation Graduate Research Fellowship under grant no. DGE-1745301. A.S.M. was supported in part by Caltech's Summer Undergraduate Research Fellowship (SURF).
Author contributions: A.T.F. developed the deep transform, implemented the software, designed the test and training datasets, and designed the experiments. C.T.L. contributed to architecture development, implemented the software, and conducted the experiments. A.S.M. contributed to the VTRN demonstration and conducted the experiments. S.-J.C. contributed to development of the VTRN concept and robotics applications and directed the research activities. All authors participated in the preparation of the manuscript.
Competing interests: A.T.F., C.T.L., and S.-J.C. are inventors on a pending patent submitted by the California Institute of Technology that covers the material described herein. The authors declare that they have no other competing interests.
Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper or the Supplementary Materials. The imagery and the U-Net network used are publicly available at the relevant cited sources.
Group:GALCIT
Funders:
Funding Agency | Grant Number
Boeing Company | UNSPECIFIED
NSF Graduate Research Fellowship | DGE-1745301
Caltech Summer Undergraduate Research Fellowship (SURF) | UNSPECIFIED
Issue or Number: 55
DOI: 10.1126/scirobotics.abf3320
Record Number: CaltechAUTHORS:20210624-195102852
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20210624-195102852
Official Citation: A. T. Fragoso, C. T. Lee, A. S. McCoy, S.-J. Chung, A seasonally invariant deep transform for visual terrain-relative navigation. Sci. Robot. 6, eabf3320 (2021).
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 109566
Collection: CaltechAUTHORS
Deposited By: Tony Diaz
Deposited On: 24 Jun 2021 20:17
Last Modified: 24 Jun 2021 20:17
