CaltechAUTHORS: A Caltech Library Service

Brain-inspired automated visual object discovery and detection

Chen, Lichao and Singh, Sudhir and Kailath, Thomas and Roychowdhury, Vwani (2019) Brain-inspired automated visual object discovery and detection. Proceedings of the National Academy of Sciences of the United States of America, 116 (1). pp. 96-105. ISSN 0027-8424. PMCID PMC6320548. doi:10.1073/pnas.1802103115.

PDF - Published Version
See Usage Policy.

PDF - Supplemental Material
See Usage Policy.


Despite significant recent progress, machine vision systems lag considerably behind their biological counterparts in performance, scalability, and robustness. A distinctive hallmark of the brain is its ability to automatically discover and model objects, at multiscale resolutions, from repeated exposures to unlabeled contextual data and then to be able to robustly detect the learned objects under various nonideal circumstances, such as partial occlusion and different view angles. Replication of such capabilities in a machine would require three key ingredients: (i) access to large-scale perceptual data of the kind that humans experience, (ii) flexible representations of objects, and (iii) an efficient unsupervised learning algorithm. The Internet fortunately provides unprecedented access to vast amounts of visual data. This paper leverages the availability of such data to develop a scalable framework for unsupervised learning of object prototypes—brain-inspired flexible, scale, and shift invariant representations of deformable objects (e.g., humans, motorcycles, cars, airplanes) comprised of parts, their different configurations and views, and their spatial relationships. Computationally, the object prototypes are represented as geometric associative networks using probabilistic constructs such as Markov random fields. We apply our framework to various datasets and show that our approach is computationally scalable and can construct accurate and operational part-aware object models much more efficiently than in much of the recent computer vision literature. We also present efficient algorithms for detection and localization in new scenes of objects and their partial views.
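The abstract describes object prototypes as geometric associative networks over parts, with spatial relationships encoded via Markov random field constructs. The following is a minimal, illustrative sketch of that general idea (a pairwise MRF energy over part locations, in the spirit of part-based models); the function names, parts, and numbers are hypothetical and are not the authors' actual formulation.

```python
import numpy as np

def pairwise_energy(loc_a, loc_b, expected_offset, stiffness=1.0):
    """Quadratic (spring-like) penalty for deviating from the
    expected displacement between two parts."""
    diff = np.asarray(loc_b) - np.asarray(loc_a) - np.asarray(expected_offset)
    return stiffness * float(diff @ diff)

def configuration_energy(locations, edges):
    """Total pairwise energy of a part configuration.

    locations: dict part_name -> (x, y)
    edges: list of (part_a, part_b, expected_offset) tuples,
           i.e., the MRF's pairwise spatial constraints
    """
    return sum(pairwise_energy(locations[a], locations[b], off)
               for a, b, off in edges)

# Toy "motorcycle" prototype: two wheels expected ~3 units apart.
edges = [("front_wheel", "rear_wheel", (3.0, 0.0))]
good = {"front_wheel": (0.0, 0.0), "rear_wheel": (3.0, 0.0)}
bad = {"front_wheel": (0.0, 0.0), "rear_wheel": (1.0, 2.0)}
print(configuration_energy(good, edges))  # 0.0 (matches the prototype)
print(configuration_energy(bad, edges))   # 8.0 (deformed configuration)
```

Detection in such a model amounts to finding a low-energy assignment of parts to image locations, which also explains the robustness to partial views noted in the abstract: a subset of parts can still score well against the subgraph of constraints it covers.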

Item Type:Article
Related URLs:Information Central (Article)
Additional Information:© 2018 National Academy of Sciences. Published under the PNAS license. Contributed by Thomas Kailath, April 23, 2018 (sent for review February 12, 2018; reviewed by Rama Chellappa, Shree Nayar, and Erik Sudderth). PNAS published ahead of print December 17, 2018. The authors thank Prof. Lieven Vandenberghe for his input on the optimization formulations used in the paper and the referees for helpful suggestions and especially for pointing us to relevant prior work. Author contributions: L.C., S.S., T.K., and V.R. designed research; L.C., S.S., T.K., and V.R. performed research; L.C. and V.R. analyzed data; and L.C., T.K., and V.R. wrote the paper. Reviewers: R.C., University of Maryland, College Park; S.N., Columbia University; and E.S., University of California, Irvine. The authors declare no conflict of interest. Data deposition: The in-house dataset used in the paper is shared publicly at . This article contains supporting information online at .
Subject Keywords:computer vision; brain-inspired learning; brain-inspired object models; machine learning; brain memory models
Issue or Number:1
PubMed Central ID:PMC6320548
Record Number:CaltechAUTHORS:20181218-105353200
Persistent URL:
Official Citation:Brain-inspired automated visual object discovery and detection. Lichao Chen, Sudhir Singh, Thomas Kailath, Vwani Roychowdhury. Proceedings of the National Academy of Sciences Jan 2019, 116 (1) 96-105; DOI: 10.1073/pnas.1802103115
Usage Policy:No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code:91893
Deposited By: Tony Diaz
Deposited On:18 Dec 2018 19:11
Last Modified:16 Nov 2021 03:45
