Published April 18, 2024 | Submitted
Discussion Paper | Open Access

Automated construction of cognitive maps with predictive coding

  • 1. California Institute of Technology

Abstract

Humans construct internal cognitive maps of their environment directly from sensory inputs, without access to a system of explicit coordinates or distance measurements. While machine learning algorithms like SLAM utilize specialized inference procedures to identify visual features and construct spatial maps from visual and odometry data, the general nature of cognitive maps in the brain suggests a unified mapping algorithmic strategy that can generalize to auditory, tactile, and linguistic inputs. Here, we demonstrate that predictive coding provides a natural and versatile neural network algorithm for constructing spatial maps using sensory data. We introduce a framework in which an agent navigates a virtual environment while engaging in visual predictive coding using a self-attention-equipped convolutional neural network. While learning a next-image prediction task, the agent automatically constructs an internal representation of the environment that quantitatively reflects spatial distances. The internal map enables the agent to pinpoint its location relative to landmarks using only visual information. The predictive coding network generates a vectorized encoding of the environment that supports vector navigation, where individual latent space units delineate localized, overlapping neighborhoods in the environment. Broadly, our work introduces predictive coding as a unified algorithmic framework for constructing cognitive maps that can naturally extend to the mapping of auditory, sensorimotor, and linguistic inputs.
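The core idea of the abstract, that next-observation prediction alone can induce a spatially structured representation, can be illustrated with a minimal sketch. The snippet below is not the paper's self-attention convolutional network; it substitutes a linear least-squares predictor, a hypothetical 1-D random-walk trajectory, and synthetic "visual" observations, purely to show the shape of the next-observation prediction task.

```python
import numpy as np

# Illustrative sketch only: predictive coding framed as next-observation
# prediction. All quantities here (track, observations, predictor) are
# hypothetical stand-ins, not the paper's architecture or environment.
rng = np.random.default_rng(0)

# Hypothetical agent trajectory: a random walk on a 1-D track, where
# each position emits a noisy high-dimensional observation.
T, D = 200, 16
positions = np.cumsum(rng.choice([-1, 1], size=T))
basis = rng.normal(size=(1, D))
basis /= np.linalg.norm(basis)
observations = positions[:, None] * basis + 0.1 * rng.normal(size=(T, D))

# Fit a linear map W so that obs[t] @ W approximates obs[t + 1],
# i.e. minimize the next-step prediction error (closed form here,
# in place of the paper's gradient-trained network).
W, *_ = np.linalg.lstsq(observations[:-1], observations[1:], rcond=None)

mse = float(np.mean((observations[:-1] @ W - observations[1:]) ** 2))
print(f"next-observation prediction MSE: {mse:.4f}")
```

Because consecutive observations come from neighboring positions, minimizing the prediction error forces the learned map to respect the environment's spatial structure; the paper's result is the nonlinear, latent-space analogue of this effect.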

Copyright and License

The copyright holder for this preprint is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity.

Acknowledgement

We deeply appreciate Inna Strazhnik for her exceptional contributions to the scientific visualizations and figure illustrations. Her expertise in translating our research into clear visuals has significantly elevated the clarity and impact of our paper. We express our heartfelt gratitude to Thanos Siapas, Evgueniy Lubenov, Dean Mobbs, and Matthew Rosenberg for their invaluable and insightful discussions which profoundly enriched our work. Their expertise and feedback have been instrumental in the development and realization of this research. Additionally, we appreciate the insights provided by Lixiang Xu, Meng Wang, and Jieyu Zheng, which played a crucial role in refining various aspects of our study. The dedication and collaborative spirit of this collective group have truly elevated our research, and for that, we are deeply thankful.

Data Availability

All datasets supporting the findings of this study, including the latent variables for the autoencoding and predictive coding neural networks, as well as the training and validation datasets, are available on GitHub at https://github.com/jgornet/predictive-coding-recovers-maps. Researchers and readers interested in accessing the data for replication, verification, or further studies can contact the corresponding author or refer to the supplementary materials section for more details.

Code Availability

The code supporting the conclusions of this study is available on GitHub at https://github.com/jgornet/predictive-coding-recovers-maps. The repository contains the Malmo environment code, training scripts for both the predictive coding and autoencoding neural networks, as well as code for the analysis of predictive coding and autoencoding results. Should there be any questions or need for clarifications about the codebase, we encourage readers to raise an issue on the repository or reach out to the corresponding author.

Conflict of Interest

The authors have declared no competing interest.

Files

Files (14.4 MB)

  • 2023.09.18.558369v2.full.pdf, 5.4 MB, md5:e8a5f9d07db786b88ab7c9c98bc4b1a0
  • 8.9 MB, md5:f2b2a3f7431359bde814acb7b0435483

Additional details

Created:
April 23, 2024
Modified:
April 23, 2024