CaltechAUTHORS
  A Caltech Library Service

Physically optimizing inference

Huang, Audrey and Sheldan, Benjamin and Sivak, David A. and Thomson, Matt (2018) Physically optimizing inference. (Submitted) http://resolver.caltech.edu/CaltechAUTHORS:20181022-162450293

PDF - Submitted Version (3165 kB). See Usage Policy.

Use this Persistent URL to link to this item: http://resolver.caltech.edu/CaltechAUTHORS:20181022-162450293

Abstract

Data is scaling exponentially in fields ranging from genomics to neuroscience to economics. A central question is: can modern machine learning methods be applied to construct predictive models of natural systems like cells and brains based on large data sets? In this paper, we examine how inference is impacted when training data is generated by the statistical behavior of a physical system, and hence lies outside direct control by the experimentalist. We develop an information-theoretic analysis for the canonical problem of spin-network inference. Our analysis reveals the essential role that the physical properties of the spin network and its environment play in determining the difficulty of the underlying machine learning problem. Specifically, stochastic fluctuations drive a system to explore a range of configurations, providing 'raw' information for a learning algorithm to construct an accurate model; yet they also blur energetic differences between network states and thereby degrade information. This competition leads spin networks to generically have an intrinsic optimal temperature at which stochastic spin fluctuations provide maximal information for discriminating among competing models, maximizing inference efficiency. We demonstrate a simple active learning protocol that optimizes network temperature and dramatically increases the efficiency of inference on a neural circuit reconstruction task. Our results reveal a fundamental link between physics and information and show how the physical environment can be tuned to optimize the efficiency of machine learning.
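The trade-off described in the abstract can be illustrated with a minimal sketch that is not taken from the paper: for a single Ising coupling between two spins with energy E = -J*s1*s2, the Fisher information one equilibrium sample carries about J has the closed form I(J) = beta^2 * sech^2(beta*J) (since Var(s1*s2) = 1 - tanh^2(beta*J) under the Boltzmann distribution). It vanishes at both high temperature (spins are pure noise) and low temperature (spins freeze), peaking at an intermediate temperature.

```python
import math

def fisher_info(beta, J=1.0):
    """Fisher information about the coupling J carried by one sample of a
    two-spin Ising pair, E = -J*s1*s2, at inverse temperature beta.
    Closed form: I(J) = beta^2 * sech^2(beta*J)."""
    return beta**2 / math.cosh(beta * J)**2

# Scan inverse temperature: too hot (small beta) -> spins are noise, little
# signal; too cold (large beta) -> spins frozen, fluctuations carry no info.
betas = [0.01 * k for k in range(1, 1001)]
infos = [fisher_info(b) for b in betas]
best = max(range(len(betas)), key=lambda i: infos[i])
print(f"optimal beta*J ~ {betas[best]:.2f}")  # peak solves x*tanh(x) = 1, x ~ 1.20
```

This toy case captures only the single-coupling version of the trade-off; the paper's analysis concerns full spin networks, where the optimum depends on the coupling structure.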


Item Type: Report or Paper (Discussion Paper)
Related URLs:
URL: http://arxiv.org/abs/1805.07512 (arXiv, Discussion Paper)
Additional Information: This work was supported by grants from the NIH (NIH DP5 OD01219), the CZI, Amgen, and the Rosen Center at Caltech (MT); a Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant, the Canada Research Chairs program, and the Faculty of Science, Simon Fraser University through the President's Research Start-up Grant (DAS); and WestGrid (www.westgrid.ca) and Compute Canada / Calcul Canada (www.computecanada.ca). The authors thank Andrew Stuart, David van Valen, Joel Tropp, Rob Phillips, John Doyle, Carl Pabo, Malcolm Kennett, Emma Lathouwers, and Erik Winfree for useful discussions and feedback on the manuscript.
Group: Rosen Bioengineering Center
Funders:
Funding Agency / Grant Number
NIH: DP5 OD01219
Chan Zuckerberg Initiative: UNSPECIFIED
Amgen: UNSPECIFIED
Donna and Benjamin M. Rosen Bioengineering Center: UNSPECIFIED
Natural Sciences and Engineering Research Council of Canada (NSERC): UNSPECIFIED
Canada Research Chairs Program: UNSPECIFIED
Simon Fraser University: UNSPECIFIED
WestGrid: UNSPECIFIED
Compute Canada: UNSPECIFIED
Subject Keywords: learning, inference, Fisher information, inverse Ising problem, spin network, control, prediction
Record Number: CaltechAUTHORS:20181022-162450293
Persistent URL: http://resolver.caltech.edu/CaltechAUTHORS:20181022-162450293
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 90338
Collection: CaltechAUTHORS
Deposited By: Tony Diaz
Deposited On: 23 Oct 2018 22:03
Last Modified: 15 May 2019 16:31
