CaltechAUTHORS
  A Caltech Library Service

Deep Learning Automates the Quantitative Analysis of Individual Cells in Live-Cell Imaging Experiments

Van Valen, David A. and Kudo, Takamasa and Lane, Keara M. and Macklin, Derek N. and Quach, Nicolas T. and DeFelice, Mialy M. and Maayan, Inbal and Tanouchi, Yu and Ashley, Euan A. and Covert, Markus W. (2016) Deep Learning Automates the Quantitative Analysis of Individual Cells in Live-Cell Imaging Experiments. PLOS Computational Biology, 12 (11). Art. No. e1005177. ISSN 1553-7358. PMCID PMC5096676. doi:10.1371/journal.pcbi.1005177. https://resolver.caltech.edu/CaltechAUTHORS:20180613-111613718

Files (all distributed under a Creative Commons Attribution license):

PDF (Published Version), 2MB
MS Word (S1 Text. Supplemental text) - Supplemental Material, 30kB
Image (TIFF) (S1 Fig. Sample image, conv-net interior prediction map, segmentation mask, and training/validation error curves for E. coli) - Supplemental Material, 3MB
Image (TIFF) (S2 Fig. Sample image, conv-net interior prediction map, segmentation mask, and training/validation error curves for MCF10A cells) - Supplemental Material, 4MB
Image (TIFF) (S3 Fig. Sample image, conv-net interior prediction map, segmentation mask, and training/validation error curves for NIH-3T3 cells) - Supplemental Material, 4MB
Image (TIFF) (S4 Fig. Sample image, conv-net interior prediction map, segmentation mask, and training/validation error curves for HeLa-S3 cells) - Supplemental Material, 3MB
Image (TIFF) (S5 Fig. Sample image, conv-net interior prediction map, segmentation mask, and training/validation error curves for RAW 264.7 cells) - Supplemental Material, 3MB
Image (TIFF) (S6 Fig. Sample image, conv-net interior prediction map, segmentation mask, and training/validation error curves for bone marrow derived macrophages) - Supplemental Material, 3MB
Image (TIFF) (S7 Fig. Sample image, conv-net interior prediction map, segmentation mask, and training/validation error curves for H2B-iRFP labeled and DAPI stained nuclei) - Supplemental Material, 2MB
Image (TIFF) (S8 Fig. Sample image, conv-net interior prediction map, segmentation mask, and training/validation error curves for semantic segmentation of MCF10A cells and NIH-3T3 cells) - Supplemental Material, 8MB
Image (TIFF) (S9 Fig. Additional semantic segmentation of NIH-3T3 and MCF10A cells. 286 cells were analyzed, including 93 NIH-3T3 cells and 192 MCF10A cells. The classification accuracy was 89% for NIH-3T3 cells and 98% for MCF10A cells) - Supplemental Material, 11MB
Image (TIFF) (S10 Fig. Sensitivity analysis of the influence of the cytoring size on the dynamics of the JNK-KTR) - Supplemental Material, 1MB
Image (TIFF) (S11 Fig. Areas of the different cytorings used in S10 Fig) - Supplemental Material, 4MB
Image (TIFF) (S12 Fig. Histogram of the instantaneous growth rate for a bacterial micro-colony. This histogram is identical to the histogram shown in Fig 3, with the axes expanded to show the negative growth rates corresponding to cell division) - Supplemental Material, 425kB
Image (TIFF) (S13 Fig. Poorly performing conv-nets) - Supplemental Material, 12MB
Image (TIFF) (S14 Fig. Training and validation error for conv-nets to assess the performance improvements provided by dropout, batch normalization, shearing for data augmentation, and multi-resolution fully connected layers) - Supplemental Material, 652kB
Image (TIFF) (S15 Fig. Regularization optimization) - Supplemental Material, 851kB
Image (TIFF) (S16 Fig. Segmentation accuracy vs. cell density for HeLa-S3 cells) - Supplemental Material, 373kB
Image (TIFF) (S17 Fig. Comparison of segmentation performance of conv-nets and Ilastik on a HeLa cell validation data set) - Supplemental Material, 913kB
Video (AVI) (S1 Movie. Phase images of a growing E. coli micro-colony) - Supplemental Material, 1MB
Video (AVI) (S2 Movie. Bacteria-net output when used to process S1 Movie) - Supplemental Material, 2MB
Video (AVI) (S3 Movie. Phase images of HeLa-S3 cells expressing the JNK-KTR) - Supplemental Material, 3MB
Video (AVI) (S4 Movie. Segmentation of S3 Movie) - Supplemental Material, 6MB
Video (AVI) (S5 Movie. Nuclear marker channel for S3 Movie) - Supplemental Material, 4MB
Video (AVI) (S6 Movie. JNK-KTR channel for S3 Movie) - Supplemental Material, 2MB
Video (AVI) (S7 Movie. Representative movie of HeLa-S3 with overlaid nuclear marker and segmentation boundaries) - Supplemental Material, 11MB

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20180613-111613718

Abstract

Live-cell imaging has opened an exciting window into the role cellular heterogeneity plays in dynamic, living systems. A major challenge for this class of experiments is the problem of image segmentation, or determining which parts of a microscope image correspond to which individual cells. Current approaches require many hours of manual curation and rely on methods that are difficult to share between labs. They are also unable to robustly segment the cytoplasms of mammalian cells. Here, we show that deep convolutional neural networks, a supervised machine learning method, can solve this challenge for multiple cell types across the domains of life. We demonstrate that this approach can robustly segment fluorescent images of cell nuclei as well as the cytoplasms of individual bacterial and mammalian cells from phase contrast images, without the need for a fluorescent cytoplasmic marker. These networks also enable the simultaneous segmentation and identification of different mammalian cell types grown in co-culture. A quantitative comparison with prior methods demonstrates that convolutional neural networks have improved accuracy and lead to a significant reduction in curation time. We relay our experience in designing and optimizing deep convolutional neural networks for this task and outline several design rules that we found led to robust performance. We conclude that deep convolutional neural networks are an accurate method that requires less curation time, generalizes to a multiplicity of cell types, from bacteria to mammalian cells, and expands live-cell imaging capabilities to include multi-cell-type systems.
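The abstract describes conv-nets that classify each pixel of a microscope image into classes such as background, cell boundary, and cell interior, from which a segmentation mask is derived. The snippet below is not the authors' DeepCell software; it is a minimal, self-contained sketch of the per-pixel classification idea using a single (randomly weighted, hypothetical) convolutional layer and a softmax over three classes.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def pixel_classify(img, kernels, biases):
    """Score each pixel as background / boundary / interior, then argmax.

    One kernel per class; a trained conv-net would stack many such
    layers with learned weights, but the output structure is the same.
    """
    scores = np.stack(
        [conv2d(img, k) + b for k, b in zip(kernels, biases)], axis=-1
    )
    # Softmax over the class axis gives per-pixel class probabilities.
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)
    return probs.argmax(axis=-1)  # 0 = background, 1 = boundary, 2 = interior

rng = np.random.default_rng(0)
img = rng.random((16, 16))                       # toy "phase image"
kernels = [rng.standard_normal((3, 3)) for _ in range(3)]  # untrained weights
mask = pixel_classify(img, kernels, [0.0, 0.0, 0.0])
print(mask.shape)  # (14, 14): valid convolution shrinks each side by 2
```

In the paper's pipeline, the interior-class probability map (see S1-S8 Figs) is thresholded and post-processed to produce the final segmentation mask; here the untrained kernels only illustrate the data flow.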


Item Type:Article
Related URLs:
https://doi.org/10.1371/journal.pcbi.1005177 (DOI) - Article
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5096676/ (PubMed Central) - Article
ORCID:
Van Valen, David A.: 0000-0001-7534-7621
Kudo, Takamasa: 0000-0002-9709-5549
DeFelice, Mialy M.: 0000-0002-7197-6292
Maayan, Inbal: 0000-0003-4907-0723
Covert, Markus W.: 0000-0002-5993-8912
Additional Information:© 2016 Van Valen et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Received: May 18, 2016; Accepted: October 3, 2016; Published: November 4, 2016. Editor: Martin Meier-Schellersheim, National Institutes of Health, UNITED STATES. We gratefully acknowledge funding from several sources, including a Paul Allen Family Foundation Allen Distinguished Investigator award, a Paul Allen Family Foundation Allen Discovery Center Award and an NIH Pioneer Award (5DP1LM01150-05) to MWC, a Systems Biology Center grant (P50 GM107615), a DOE Computational Science Graduate Fellowship (DE-FG02-97ER25308) and a Siebel Scholarship to DNM, and a Stanford Biomedical Big Data Science Postdoctoral Fellowship as well as the Burroughs Wellcome Fund’s Postdoctoral Enrichment Program and an NIH F32 Postdoctoral Fellowship (1F32GM119319-01) to DVV. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
We are deeply grateful to a number of people for advice, assistance with experiments, and critical commentary on the manuscript, including Zev Bryant, Joe Levine, Fei Fei Li, Andrej Karpathy, Henrietta Lacks, Amanda Miguel, K.C. Huang, Robert Brewster, Rob Phillips, John Phillips, Jan Liphardt, Andy Spakowitz, and members of the Covert lab. Author Contributions Conceptualization: DAVV MWC. Data curation: DAVV KML NTQ MMD IM. Formal analysis: DAVV. Funding acquisition: DAVV TK KML DNM NTQ MMD IM YT EAA MWC. Investigation: DAVV TK KML DNM NTQ MMD IM YT EAA MWC. Methodology: DAVV. Project administration: DAVV MWC. Resources: DAVV TK KML DNM NTQ MMD IM YT. Software: DAVV TK DNM NTQ. Supervision: EAA MWC. Validation: DAVV MMD KML IM YT NTQ. Visualization: DAVV. Writing – original draft: DAVV MWC. Writing – review & editing: DAVV TK KML DNM NTQ MMD IM YT EAA MWC. Data Availability: All data and software are available at the NIH-funded repository SIMTK (https://simtk.org/projects/deepcell). The authors have declared that no competing interests exist.
Funders:
Paul G. Allen Family Foundation: UNSPECIFIED
NIH: 5DP1LM01150-05
NIH: P50 GM107615
Department of Energy (DOE): DE-FG02-97ER25308
Siebel Scholars Foundation: UNSPECIFIED
Stanford Medical School: UNSPECIFIED
Burroughs Wellcome Fund: UNSPECIFIED
NIH Postdoctoral Fellowship: 1F32GM119319-01
Issue or Number:11
PubMed Central ID:PMC5096676
DOI:10.1371/journal.pcbi.1005177
Record Number:CaltechAUTHORS:20180613-111613718
Persistent URL:https://resolver.caltech.edu/CaltechAUTHORS:20180613-111613718
Official Citation:Van Valen DA, Kudo T, Lane KM, Macklin DN, Quach NT, DeFelice MM, et al. (2016) Deep Learning Automates the Quantitative Analysis of Individual Cells in Live-Cell Imaging Experiments. PLoS Comput Biol 12(11): e1005177. https://doi.org/10.1371/journal.pcbi.1005177
Usage Policy:No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code:87066
Collection:CaltechAUTHORS
Deposited By: George Porter
Deposited On:13 Jun 2018 22:15
Last Modified:15 Nov 2021 20:44
