
Multi-Target Embodied Question Answering

Yu, Licheng and Chen, Xinlei and Gkioxari, Georgia and Bansal, Mohit and Berg, Tamara L. and Batra, Dhruv (2019) Multi-Target Embodied Question Answering. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, Piscataway, NJ, pp. 6302-6311. ISBN 978-1-7281-3293-8. https://resolver.caltech.edu/CaltechAUTHORS:20221215-789767000.16

Full text is not posted in this repository. Consult Related URLs below.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20221215-789767000.16

Abstract

Embodied Question Answering (EQA) is a relatively new task where an agent is asked to answer questions about its environment from egocentric perception. EQA as introduced in [8] makes the fundamental assumption that every question, e.g., "what color is the car?", has exactly one target ("car") being inquired about. This assumption puts a direct limitation on the abilities of the agent. We present a generalization of EQA -- Multi-Target EQA (MT-EQA). Specifically, we study questions that have multiple targets in them, such as "Is the dresser in the bedroom bigger than the oven in the kitchen?", where the agent has to navigate to multiple locations ("dresser in bedroom", "oven in kitchen") and perform comparative reasoning ("dresser" bigger than "oven") before it can answer a question. Such questions require the development of entirely new modules or components in the agent. To address this, we propose a modular architecture composed of a program generator, a controller, a navigator, and a VQA module. The program generator converts the given question into sequential executable sub-programs; the navigator guides the agent to multiple locations pertinent to the navigation-related sub-programs; and the controller learns to select relevant observations along its path. These observations are then fed to the VQA module to predict the answer. We perform detailed analysis for each of the model components and show that our joint model can outperform previous methods and strong baselines by a significant margin.

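To picture the modular pipeline described in the abstract, the sketch below traces one question through the four components. It is a minimal illustration only, assuming hypothetical sub-program names and component interfaces (navigate_to, select, predict); it is not the authors' implementation.

from typing import List, Tuple


def program_generator(question: str) -> List[Tuple[str, str]]:
    """Convert a question into sequential executable sub-programs.

    For "Is the dresser in the bedroom bigger than the oven in the kitchen?"
    a plausible decomposition is:
        nav(bedroom) -> nav(dresser) -> nav(kitchen) -> nav(oven) -> compare(size)
    In the paper this is a learned model; here it is stubbed with a fixed output.
    """
    return [
        ("nav_room", "bedroom"),
        ("nav_object", "dresser"),
        ("nav_room", "kitchen"),
        ("nav_object", "oven"),
        ("query", "size_compare"),
    ]


def answer_question(question: str, navigator, controller, vqa) -> str:
    """Run the program generator, navigator, controller, and VQA module in order."""
    selected_frames = []
    for op, arg in program_generator(question):
        if op.startswith("nav"):
            # The navigator moves the agent toward the target named by `arg`,
            # returning the egocentric frames observed along the way.
            path_frames = navigator.navigate_to(arg)
            # The controller selects the observations relevant to the question.
            selected_frames.extend(controller.select(path_frames, target=arg))
        else:
            # The VQA module performs comparative reasoning over the
            # observations gathered at the visited targets.
            return vqa.predict(question, selected_frames)
    return "unknown"
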

Item Type: Book Section
Related URLs:
  URL                                                              URL Type      Description
  https://doi.org/10.1109/CVPR.2019.00647                          DOI           Article
  https://resolver.caltech.edu/CaltechAUTHORS:20221219-204819572   Related Item  Discussion Paper
ORCID:
  Author           ORCID
  Berg, Tamara L.  0000-0002-1272-3359
Additional Information: We thank Abhishek Das, Devi Parikh and Marcus Rohrbach for helpful discussions. This work is supported by NSF Awards #1633295, 1562098, 1405822, and Facebook.
Funders:
  Funding Agency  Grant Number
  NSF             IIS-1633295
  NSF             IIS-1562098
  NSF             CNS-1405822
  Facebook        UNSPECIFIED
DOI: 10.1109/cvpr.2019.00647
Record Number: CaltechAUTHORS:20221215-789767000.16
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20221215-789767000.16
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 118376
Collection: CaltechAUTHORS
Deposited By: George Porter
Deposited On: 20 Dec 2022 15:33
Last Modified: 20 Dec 2022 15:33
