CaltechAUTHORS: A Caltech Library Service

Bayesian model of dynamic image stabilization in the visual system

Burak, Yoram and Rokni, Uri and Meister, Markus and Sompolinsky, Haim (2010) Bayesian model of dynamic image stabilization in the visual system. Proceedings of the National Academy of Sciences of the United States of America, 107 (45). pp. 19525-19530. ISSN 0027-8424. PMCID PMC2984143.

PDF - Published Version (see Usage Policy)
PDF - Supplemental Material (see Usage Policy)


Humans can resolve the fine details of visual stimuli although the image projected on the retina is constantly drifting relative to the photoreceptor array. Here we demonstrate that the brain must take this drift into account when performing high acuity visual tasks. Further, we propose a decoding strategy for interpreting the spikes emitted by the retina, which takes into account the ambiguity caused by retinal noise and the unknown trajectory of the projected image on the retina. A main difficulty, addressed in our proposal, is the exponentially large number of possible stimuli, which renders the ideal Bayesian solution to the problem computationally intractable. In contrast, the strategy that we propose suggests a realistic implementation in the visual cortex. The implementation involves two populations of cells, one that tracks the position of the image and another that represents a stabilized estimate of the image itself. Spikes from the retina are dynamically routed to the two populations and are interpreted in a probabilistic manner. We consider the architecture of neural circuitry that could implement this strategy and its performance under measured statistics of human fixational eye motion. A salient prediction is that in high acuity tasks, fixed features within the visual scene are beneficial because they provide information about the drifting position of the image. Therefore, complete elimination of peripheral features in the visual scene should degrade performance on high acuity tasks involving very small stimuli.

Item Type: Article
Related URLs:
URL | URL Type | Description
 | PubMed Central | Article
ORCID: Meister, Markus: 0000-0003-2136-6506
Additional Information: © 2010 National Academy of Sciences. Edited by William T. Newsome, Stanford University, Stanford, CA, and approved September 17, 2010 (received for review May 8, 2010). Published online before print October 11, 2010. We thank Dan Lee, Ofer Mazor, and Xaq Pitkow for helpful discussions and Eran Mukamel for comments on the manuscript. We acknowledge support from the Swartz Foundation (Y.B. and U.R.), the National Eye Institute (M.M.), the Israeli Science Foundation (H.S.), and the Israeli Ministry of Defense (H.S.). Author contributions: Y.B., U.R., M.M., and H.S. designed research; Y.B., U.R., and H.S. performed research; and Y.B., M.M., and H.S. wrote the paper. The authors declare no conflict of interest. This article is a PNAS Direct Submission. This article contains supporting information online at
Funding Agency | Grant Number
Swartz Foundation | UNSPECIFIED
National Eye Institute | UNSPECIFIED
Israeli Science Foundation | UNSPECIFIED
Ministry of Defense (Israel) | UNSPECIFIED
Subject Keywords: computation; fixational eye motion; neural network; retina; cortex
Issue or Number: 45
PubMed Central ID: PMC2984143
Record Number: CaltechAUTHORS:20170404-134032704
Persistent URL:
Official Citation: Yoram Burak, Uri Rokni, Markus Meister, and Haim Sompolinsky. Bayesian model of dynamic image stabilization in the visual system. PNAS 2010, 107 (45), 19525-19530; published ahead of print October 11, 2010. doi:10.1073/pnas.1006076107
Usage Policy: No commercial reproduction, distribution, display, or performance rights in this work are provided.
ID Code: 75699
Deposited By: Tony Diaz
Deposited On: 04 Apr 2017 21:40
Last Modified: 03 Oct 2019 16:53
