
Question Type Guided Attention in Visual Question Answering

Shi, Yang and Furlanello, Tommaso and Zha, Sheng and Anandkumar, Animashree (2018) Question Type Guided Attention in Visual Question Answering. In: Computer Vision – ECCV 2018, Part IV. Lecture Notes in Computer Science, No. 11208. Springer Nature, Cham, Switzerland, pp. 158-175. ISBN 978-3-030-01224-3. https://resolver.caltech.edu/CaltechAUTHORS:20190327-085753056

PDF (Computer Vision Foundation) - Accepted Version, 1157 kB. See Usage Policy.
PDF (arXiv) - Accepted Version, 2814 kB. See Usage Policy.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20190327-085753056

Abstract

Visual Question Answering (VQA) requires integration of feature maps with drastically different structures. Image descriptors have structures at multiple spatial scales, while lexical inputs inherently follow a temporal sequence and naturally cluster into semantically different question types. Many previous works use complex models to extract feature representations but neglect high-level summary information, such as question type, during learning. In this work, we propose Question Type-guided Attention (QTA). It uses the question type to dynamically balance between bottom-up and top-down visual features, extracted from ResNet and Faster R-CNN networks respectively. We experiment with multiple VQA architectures and extensive input ablation studies on the TDIUC dataset, and show that QTA systematically improves performance by more than 5% across multiple question-type categories, such as "Activity Recognition", "Utility", and "Counting", compared to the state of the art. By adding QTA to the state-of-the-art MCB model, we achieve a 3% improvement in overall accuracy. Finally, we propose a multi-task extension that predicts question types, which generalizes QTA to applications lacking question-type labels, with minimal performance loss.
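As a rough illustration of the core idea, the following PyTorch sketch conditions per-channel weights on a one-hot question-type vector and applies them to the concatenated features from the two visual streams. This is a hypothetical sketch, not the authors' released implementation: the layer names, the sigmoid gating, and the feature dimensions are our assumptions (TDIUC does define 12 question types).

    import torch
    import torch.nn as nn

    class QuestionTypeGuidedAttention(nn.Module):
        """Sketch of QTA: question-type-conditioned channel weighting
        over concatenated features from the two visual streams
        (ResNet and Faster R-CNN). Names and sizes are illustrative."""

        def __init__(self, num_question_types, resnet_dim, frcnn_dim):
            super().__init__()
            fused_dim = resnet_dim + frcnn_dim
            # Map a one-hot question type to one weight per feature channel.
            self.gate = nn.Linear(num_question_types, fused_dim)

        def forward(self, resnet_feats, frcnn_feats, qtype_onehot):
            # Concatenate the two visual feature vectors.
            fused = torch.cat([resnet_feats, frcnn_feats], dim=-1)
            # The question type decides how much each channel contributes.
            weights = torch.sigmoid(self.gate(qtype_onehot))
            return fused * weights

    # Usage with assumed dimensions: 12 TDIUC question types,
    # 2048-d descriptors from each visual stream.
    qta = QuestionTypeGuidedAttention(12, resnet_dim=2048, frcnn_dim=2048)
    resnet_v = torch.randn(4, 2048)
    frcnn_v = torch.randn(4, 2048)
    qtype = torch.eye(12)[torch.tensor([0, 3, 5, 7])]  # one-hot batch
    out = qta(resnet_v, frcnn_v, qtype)  # shape: (4, 4096)

The weighted feature vector would then feed the downstream answer classifier; the paper's multi-task extension replaces the ground-truth question type with a predicted one when labels are unavailable.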


Item Type: Book Section
Related URLs:
  https://doi.org/10.1007/978-3-030-01225-0_10 - DOI - Article
  http://openaccess.thecvf.com/content_ECCV_2018/papers/Yang_Shi_Question_Type_Guided_ECCV_2018_paper.pdf - Organization - Article
  https://rdcu.be/btKkG - Publisher - Free ReadCube access
  https://arxiv.org/abs/1804.02088 - arXiv - Discussion Paper
Additional Information: © Springer Nature Switzerland AG 2018. Work partially done while the author was working at Amazon AI. We thank Amazon AI for providing computing resources. Yang Shi is supported by Air Force Award FA9550-15-1-0221.
Funders:
  Funding Agency - Grant Number
  Amazon AI - UNSPECIFIED
  Air Force Office of Scientific Research (AFOSR) - FA9550-15-1-0221
Subject Keywords: Visual question answering, Attention, Question type, Feature selection, Multi-task
Series Name: Lecture Notes in Computer Science
Issue or Number: 11208
Record Number: CaltechAUTHORS:20190327-085753056
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20190327-085753056
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 94175
Collection: CaltechAUTHORS
Deposited By: George Porter
Deposited On: 29 Mar 2019 14:40
Last Modified: 03 Oct 2019 21:01
