Published April 9, 2024 | Submitted
Discussion Paper | Open Access

Reconstructing Hand-Held Objects in 3D

  • California Institute of Technology

Abstract

Objects manipulated by the hand (i.e., manipulanda) are particularly challenging to reconstruct from in-the-wild RGB images or videos. Not only does the hand occlude much of the object, but the object is also often visible in only a small number of image pixels. At the same time, two strong anchors emerge in this setting: (1) estimated 3D hands help disambiguate the location and scale of the object, and (2) the set of manipulanda is small relative to all possible objects. With these insights in mind, we present a scalable paradigm for hand-held object reconstruction that builds on recent breakthroughs in large language/vision models and 3D object datasets. Our model, MCC-Hand-Object (MCC-HO), jointly reconstructs hand and object geometry given a single RGB image and inferred 3D hand as inputs. Subsequently, we use GPT-4(V) to retrieve a 3D object model that matches the object in the image and rigidly align the model to the network-inferred geometry; we call this alignment Retrieval-Augmented Reconstruction (RAR). Experiments demonstrate that MCC-HO achieves state-of-the-art performance on lab and Internet datasets, and we show how RAR can be used to automatically obtain 3D labels for in-the-wild images of hand-object interactions.
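
As a concrete illustration of the alignment stage described above, the sketch below rigidly aligns a retrieved 3D model to network-inferred geometry, in the spirit of RAR. It uses a standard ICP loop with a Kabsch best-fit step; this is an illustrative stand-in, not the authors' exact procedure, and the GPT-4(V) retrieval step is omitted. The inputs retrieved_pts and inferred_pts are assumed to be (N, 3) NumPy point clouds sampled from the retrieved model and from MCC-HO's output, respectively.

import numpy as np
from scipy.spatial import cKDTree

def kabsch(P, Q):
    # Least-squares rotation R and translation t mapping points P onto Q.
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def icp_align(retrieved_pts, inferred_pts, iters=50):
    # Iteratively match each retrieved point to its nearest inferred point,
    # solve for the best rigid transform, and accumulate the result.
    tree = cKDTree(inferred_pts)
    P = retrieved_pts.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(P)               # nearest-neighbor correspondences
        R, t = kabsch(P, inferred_pts[idx])
        P = P @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total  # transform placing the retrieved model in the scene

Since ICP only converges locally, in practice one would initialize the retrieved model's pose and scale from the hand-anchored estimate before refining with a loop like this.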

Acknowledgement

We would like to thank Ilija Radosavovic for helpful discussions and feedback. This work was supported by ONR MURI N00014-21-1-2801. J. W. was supported by the NSF Mathematical Sciences Postdoctoral Fellowship and the UC President’s Postdoctoral Fellowship.

Files

2404.06507v2.pdf (8.1 MB)
md5:ff3e59b8741460fb79b689e7b0a89904
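
The listed checksum can be used to verify a downloaded copy; a minimal check in Python, assuming the file was saved under its listed name:

import hashlib

# Compare the file's MD5 digest against the checksum listed above.
with open("2404.06507v2.pdf", "rb") as f:
    digest = hashlib.md5(f.read()).hexdigest()
assert digest == "ff3e59b8741460fb79b689e7b0a89904", digest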

Additional details

Created: June 17, 2024
Modified: June 17, 2024