Learning 3D Object Shape and Layout without 3D Supervision

Gkioxari, Georgia and Ravi, Nikhila and Johnson, Justin (2022) Learning 3D Object Shape and Layout without 3D Supervision. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, Piscataway, NJ, pp. 1685-1694. ISBN 978-1-6654-6946-3.

Full text is not posted in this repository. Consult Related URLs below.

A 3D scene consists of a set of objects, each with a shape and a layout giving their position in space. Understanding 3D scenes from 2D images is an important goal, with applications in robotics and graphics. While there have been recent advances in predicting 3D shape and layout from a single image, most approaches rely on 3D ground truth for training, which is expensive to collect at scale. We overcome these limitations and propose a method that learns to predict 3D shape and layout for objects without any ground truth shape or layout information: instead we rely on multi-view images with 2D supervision, which can more easily be collected at scale. Through extensive experiments on ShapeNet, Hypersim, and ScanNet we demonstrate that our approach scales to large datasets of realistic images, and compares favorably to methods relying on 3D ground truth. On Hypersim and ScanNet, where reliable 3D ground truth is not available, our approach outperforms supervised approaches trained on smaller and less diverse datasets.

Item Type: Book Section
Related URLs: Discussion Paper
ORCID:
Ravi, Nikhila: 0000-0003-0097-5222
Johnson, Justin: 0000-0002-1251-088X
Record Number: CaltechAUTHORS:20221215-789795000.26
Persistent URL:
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 118382
Deposited By: George Porter
Deposited On: 19 Dec 2022 20:49
Last Modified: 19 Dec 2022 23:16