Shi, Guanya and Zhu, Yifeng and Tremblay, Jonathan and Birchfield, Stan and Ramos, Fabio and Anandkumar, Animashree and Zhu, Yuke (2021) Fast Uncertainty Quantification for Deep Object Pose Estimation. In: 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, Piscataway, NJ, pp. 5200-5207. ISBN 978-1-7281-9077-8. https://resolver.caltech.edu/CaltechAUTHORS:20210225-132731801
PDF - Submitted Version - 3MB (See Usage Policy)
Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20210225-132731801
Abstract
Deep learning-based object pose estimators are often unreliable and overconfident, especially when the input image is outside the training domain, for instance, with sim2real transfer. Efficient and robust uncertainty quantification (UQ) in pose estimators is critically needed in many robotic tasks. In this work, we propose a simple, efficient, and plug-and-play UQ method for 6-DoF object pose estimation. We ensemble 2–3 pre-trained models with different neural network architectures and/or training data sources, and compute the average pair-wise disagreement among their predictions to obtain the uncertainty quantification. We propose four disagreement metrics, including a learned metric, and show that the average distance (ADD) is the best learning-free metric, only slightly worse than the learned metric, which requires labeled target data. Our method has several advantages compared to the prior art: 1) it does not require any modification of the training process or the model inputs; and 2) it needs only one forward pass for each model. We evaluate the proposed UQ method on three tasks, where our uncertainty quantification yields much stronger correlations with pose estimation errors than the baselines. Moreover, in a real robot grasping task, our method increases the grasping success rate from 35% to 90%. Video and code are available at https://sites.google.com/view/fastuq.
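The following is a minimal sketch (not the authors' released code) of the ensemble-disagreement idea described in the abstract: given pose hypotheses from 2–3 pre-trained estimators, the uncertainty is the average pair-wise ADD between them. It assumes poses are given as 4x4 homogeneous object-to-camera transforms and that 3D points sampled from the object model are available; the function names `add_distance` and `ensemble_uncertainty` are illustrative.

```python
# Illustrative sketch of ensemble-disagreement UQ with the ADD metric.
import itertools
import numpy as np

def add_distance(pose_a, pose_b, model_points):
    """Average distance (ADD) between two 6-DoF poses.

    pose_a, pose_b: 4x4 homogeneous transforms (object-to-camera).
    model_points:   (N, 3) array of 3D points sampled on the object model.
    """
    pts = np.hstack([model_points, np.ones((model_points.shape[0], 1))])  # (N, 4)
    pts_a = (pose_a @ pts.T).T[:, :3]  # model points transformed by pose A
    pts_b = (pose_b @ pts.T).T[:, :3]  # model points transformed by pose B
    return np.linalg.norm(pts_a - pts_b, axis=1).mean()

def ensemble_uncertainty(poses, model_points):
    """Average pair-wise ADD disagreement over an ensemble of pose estimates.

    poses: list of 4x4 transforms, one per ensemble member (e.g. 2-3 models
    with different architectures and/or training data). A larger value
    indicates higher predictive uncertainty.
    """
    pairs = itertools.combinations(poses, 2)
    return np.mean([add_distance(a, b, model_points) for a, b in pairs])
```

In use, the resulting scalar can be thresholded to decide whether a pose estimate is reliable enough to act on, for example before attempting a grasp as in the paper's robot grasping experiment.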
| Item Type: | Book Section |
| --- | --- |
| Related URLs: | |
| ORCID: | |
| Additional Information: | © 2021 IEEE. We would like to thank members of the NVIDIA AI Algorithms research team for their constructive feedback and Nathan Morrical for his help with the NViSII renderer. |
| DOI: | 10.1109/ICRA48506.2021.9561483 |
| Record Number: | CaltechAUTHORS:20210225-132731801 |
| Persistent URL: | https://resolver.caltech.edu/CaltechAUTHORS:20210225-132731801 |
| Official Citation: | G. Shi et al., "Fast Uncertainty Quantification for Deep Object Pose Estimation," 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 5200-5207, doi: 10.1109/ICRA48506.2021.9561483 |
| Usage Policy: | No commercial reproduction, distribution, display or performance rights in this work are provided. |
| ID Code: | 108208 |
| Collection: | CaltechAUTHORS |
| Deposited By: | George Porter |
| Deposited On: | 26 Feb 2021 15:09 |
| Last Modified: | 03 May 2022 17:43 |