CaltechAUTHORS
  A Caltech Library Service

Bongard-LOGO: A New Benchmark for Human-Level Concept Learning and Reasoning

Nie, Weili and Yu, Zhiding and Mao, Lei and Patel, Ankit B. and Zhu, Yuke and Anandkumar, Animashree (2020) Bongard-LOGO: A New Benchmark for Human-Level Concept Learning and Reasoning. In: Advances in Neural Information Processing Systems 33 pre-proceedings (NeurIPS 2020). Advances in Neural Information Processing Systems . https://resolver.caltech.edu/CaltechAUTHORS:20201109-074710530



Abstract

Humans have an inherent ability to learn novel concepts from only a few samples and to generalize these concepts to different situations. Even though today's machine learning models excel with a plethora of training data on standard recognition tasks, a considerable gap exists between machine-level pattern recognition and human-level concept learning. To narrow this gap, the Bongard Problems (BPs) were introduced as an inspirational challenge for visual cognition in intelligent systems. Despite recent advances in representation learning and learning to learn, BPs remain a daunting challenge for modern AI. Inspired by the original one hundred BPs, we propose Bongard-LOGO, a new benchmark for human-level concept learning and reasoning. We develop a program-guided generation technique to produce a large set of human-interpretable visual cognition problems in an action-oriented LOGO language. Our benchmark captures three core properties of human cognition: 1) context-dependent perception, in which the same object may have disparate interpretations given different contexts; 2) analogy-making perception, in which some meaningful concepts are traded off for other meaningful concepts; and 3) perception with a few samples but infinite vocabulary. In experiments, we show that state-of-the-art deep learning methods perform substantially worse than human subjects, implying that they fail to capture core properties of human cognition. Finally, we discuss research directions towards a general architecture for visual reasoning to tackle this benchmark.
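The abstract's "program-guided generation" in an action-oriented LOGO language can be pictured as a turtle-style interpreter that executes a sequence of stroke actions to trace a shape. The sketch below is purely illustrative: the action names, parameters, and `execute_program` function are hypothetical and do not reflect the benchmark's actual generation code.

```python
import math

def execute_program(actions, start=(0.0, 0.0), heading=0.0):
    """Run a list of (move_type, length, turn_degrees) actions with a
    turtle-style cursor and return the visited vertices.

    This is a toy stand-in for an action-oriented LOGO program:
    each action turns the cursor, then moves it forward.
    """
    x, y = start
    points = [(x, y)]
    for move_type, length, turn in actions:
        heading += math.radians(turn)  # turn first, then move
        if move_type == "line":
            x += length * math.cos(heading)
            y += length * math.sin(heading)
            points.append((x, y))
        # curved strokes (e.g. an "arc" move) could be added analogously
    return points

# A unit square expressed as four straight strokes with 90-degree turns.
square = [("line", 1.0, 0), ("line", 1.0, 90),
          ("line", 1.0, 90), ("line", 1.0, 90)]
vertices = execute_program(square)
```

Composing such action programs, rather than drawing pixels directly, is what makes the generated problems both scalable and human-interpretable: the same stroke sequence can be perturbed, recombined, or relabeled to yield new concepts.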


Item Type: Book Section
Related URLs:
  https://proceedings.neurips.cc/paper/2020/hash/bf15e9bbff22c7719020f9df4badc20a-Abstract.html (Publisher, Article)
  https://arxiv.org/abs/2010.00763 (arXiv, Discussion Paper)
ORCID:
  Zhu, Yuke: 0000-0002-9198-2227
Additional Information: We thank the anonymous reviewers for useful comments. We also thank all the human subjects for participating in our BONGARD-LOGO human study, and the entire AIALGO team at NVIDIA for their valuable feedback. WN conducted this research during an internship at NVIDIA. WN and ABP were supported by IARPA via DoI/IBC contract D16PC00003.
Funders:
  NVIDIA Corporation: UNSPECIFIED
  Intelligence Advanced Research Projects Activity (IARPA): D16PC00003
Record Number: CaltechAUTHORS:20201109-074710530
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20201109-074710530
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 106499
Collection: CaltechAUTHORS
Deposited By: Tony Diaz
Deposited On: 09 Nov 2020 16:24
Last Modified: 09 Nov 2020 16:24
