CaltechAUTHORS
  A Caltech Library Service

DGNSS-Vision Integration for Robust and Accurate Relative Spacecraft Navigation

Capuano, V. and Harvard, A. and Lin, Y. and Chung, S. J. (2019) DGNSS-Vision Integration for Robust and Accurate Relative Spacecraft Navigation. In: ION GNSS+ 2019, 16-20 September 2019, Miami, FL. https://resolver.caltech.edu/CaltechAUTHORS:20191007-132043345

PDF - Accepted Version (2223 KB). See Usage Policy.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20191007-132043345

Abstract

Relative spacecraft navigation based on the Global Navigation Satellite System (GNSS) has already been performed successfully in low Earth orbit (LEO). Very high accuracy, on the order of millimeters, has been achieved in post-processing using carrier-phase differential GNSS (CDGNSS), which recovers the integer number of carrier wavelengths (the ambiguity) between the GNSS transmitters and the receiver. However, the performance achievable on board, in real time, above LEO and the GNSS constellation would be significantly lower due to limited computational resources, weaker signals, and worse geometric dilution of precision (GDOP). At the same time, monocular vision provides lower accuracy than CDGNSS when the spacecraft separation is significant, and it degrades further for larger baselines and wider fields of view (FOVs). To increase the robustness, continuity, and accuracy of a real-time, on-board, GNSS-based relative navigation solution in GNSS-degraded environments such as geosynchronous and high Earth orbits, we propose a novel navigation architecture based on a tight fusion of carrier-phase GNSS observations and monocular vision-based measurements. This architecture enables fast, autonomous relative pose estimation of cooperative spacecraft even under high GDOP and low GNSS visibility, where the GNSS signals are degraded, weak, or cannot be tracked continuously. In this paper we describe the architecture and implementation of the multi-sensor navigation solution and validate the proposed method in simulation. We use a dataset of images synthetically generated according to a chaser/target relative motion in geostationary Earth orbit (GEO), together with realistic carrier-phase and code-based GNSS observations simulated at the receiver positions in the same orbit. We demonstrate that our fusion solution provides higher accuracy, higher robustness, and faster ambiguity resolution under degraded GNSS signal conditions, even when using high-FOV cameras.
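The paper's filter is not reproduced here, but the core idea of tight fusion — one estimator whose measurement model ingests both double-differenced carrier-phase GNSS observations and monocular bearing measurements of the target — can be illustrated with a toy extended Kalman filter update. Everything below is an illustrative assumption, not taken from the paper: the satellite line-of-sight geometry, the noise covariances, the noiseless synthetic measurements, and the premise that the integer ambiguity has already been fixed.

```python
import numpy as np

L1_WAVELENGTH = 0.1903  # GPS L1 carrier wavelength [m]

def ekf_update(x, P, z, h, H, R):
    """Standard EKF measurement update for state x with covariance P."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - h)                    # state correction
    P = (np.eye(len(x)) - K @ H) @ P       # covariance correction
    return x, P

# State: chaser-to-target relative position [m].
# Coarse prior, e.g. from a code-based differential solution.
true_baseline = np.array([10.0, -4.0, 2.0])
x = np.array([8.0, -3.0, 1.0])
P = np.eye(3) * 100.0
x0_err = np.linalg.norm(x - true_baseline)

# 1) Double-differenced carrier-phase update (cycles). With the integer
#    ambiguity assumed fixed, the DD phase is linear in the baseline.
e1 = np.array([0.0, 0.0, 1.0])             # LOS to GNSS satellite 1 (assumed)
e2 = np.array([0.6, 0.0, 0.8])             # LOS to GNSS satellite 2 (assumed)
H_gnss = ((e1 - e2) / L1_WAVELENGTH).reshape(1, 3)
z_gnss = H_gnss @ true_baseline            # noiseless synthetic measurement
x, P = ekf_update(x, P, z_gnss, H_gnss @ x, H_gnss, np.eye(1) * 1e-4)

# 2) Monocular vision update: a unit bearing vector toward the target.
def bearing(x):
    return x / np.linalg.norm(x)

def bearing_jacobian(x):
    r = np.linalg.norm(x)
    u = x / r
    return (np.eye(3) - np.outer(u, u)) / r   # d(x/|x|)/dx

z_vis = bearing(true_baseline)             # noiseless synthetic bearing
x, P = ekf_update(x, P, z_vis, bearing(x), bearing_jacobian(x),
                  np.eye(3) * 1e-4)

err = np.linalg.norm(x - true_baseline)
print(f"error before: {x0_err:.2f} m, after fused updates: {err:.2f} m")
```

The bearing alone cannot observe range and a single double difference alone constrains only one direction, but fused in one filter they complement each other, which is the motivation for the tight integration the abstract describes.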


Item Type: Conference or Workshop Item (Paper)
Related URLs:
URL: https://www.ion.org/gnss/abstracts.cfm?paperID=7836 | URL Type: Organization | Description: Abstract
ORCID:
Capuano, V.: 0000-0002-6886-5719
Chung, S. J.: 0000-0002-6657-3907
Additional Information: The first author was supported by the Swiss National Science Foundation (SNSF). This work was also supported in part by the Jet Propulsion Laboratory (JPL). Government sponsorship is acknowledged. The authors thank F. Y. Hadaegh, A. Rahmani, and S. R. Alimo.
Group: GALCIT
Funders:
Funding Agency: Swiss National Science Foundation (SNSF) | Grant Number: UNSPECIFIED
Funding Agency: JPL | Grant Number: UNSPECIFIED
Record Number: CaltechAUTHORS:20191007-132043345
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20191007-132043345
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 99116
Collection: CaltechAUTHORS
Deposited By: Tony Diaz
Deposited On: 07 Oct 2019 20:29
Last Modified: 07 Oct 2019 20:29
