Published March 26, 2024 | in press
Journal Article

The multimodality cell segmentation challenge: toward universal solutions

Abstract

Cell segmentation is a critical step for quantitative single-cell analysis in microscopy images. Existing cell segmentation methods are often tailored to specific modalities or require manual intervention to specify hyperparameters across different experimental settings. Here, we present a multimodality cell segmentation benchmark, comprising more than 1,500 labeled images derived from more than 50 diverse biological experiments. The top participants developed a Transformer-based deep-learning algorithm that not only outperforms existing methods but can also be applied to diverse microscopy images across imaging platforms and tissue types without manual parameter adjustment. This benchmark and the improved algorithm offer promising avenues for more accurate and versatile cell analysis in microscopy imaging.
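
Benchmarks of this kind rank algorithms with instance-level accuracy metrics. As a minimal, unofficial illustration of one such metric, the Python sketch below computes an instance-level F1 score at an IoU threshold of 0.5 between two integer label masks; it is written for this page only and is not the challenge's official evaluation code, which is available via the links in the Code Availability section.

```python
# Minimal, unofficial sketch of an instance-level F1 evaluation at IoU >= 0.5.
# Ground truth and prediction are 2D integer label masks; 0 is background.
import numpy as np

def instance_f1(gt: np.ndarray, pred: np.ndarray, iou_thresh: float = 0.5) -> float:
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [i for i in np.unique(pred) if i != 0]
    if not gt_ids and not pred_ids:
        return 1.0  # both empty: perfect agreement by convention
    matched_pred = set()
    tp = 0
    for g in gt_ids:
        g_mask = gt == g
        # Only predictions overlapping this cell can reach the IoU threshold.
        for p in np.unique(pred[g_mask]):
            if p == 0 or p in matched_pred:
                continue
            p_mask = pred == p
            inter = np.logical_and(g_mask, p_mask).sum()
            union = np.logical_or(g_mask, p_mask).sum()
            if inter / union >= iou_thresh:  # union > 0: both masks non-empty
                matched_pred.add(p)
                tp += 1
                break
    fp = len(pred_ids) - tp
    fn = len(gt_ids) - tp
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0
```

At IoU thresholds of 0.5 and above, a matched pair is necessarily one-to-one, so the greedy matching above is exact rather than an approximation.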

Copyright and License

© The Author(s), under exclusive licence to Springer Nature America, Inc. 2024.

Acknowledgement

This work was supported by the Natural Sciences and Engineering Research Council of Canada (RGPIN-2020-06189 and DGECR-2020-00294) and CIFAR AI Chair programs. This research was enabled, in part, by computing resources provided by the Digital Research Alliance of Canada. We thank P. Byrne, M. Kost-Alimova, S. Singh and A.E. Carpenter for contributing U2OS and adipocyte images. We thank A.J. Radtke and R. Germain for contributing adenoid and tonsil whole-slide fluorescent images. We thank S. Banerjee for providing multiple myeloma plasma cell annotations in stained brightfield images. The platelet DIC images collected by C. Kempster and A. Pollitt were supported by the British Heart Foundation/NC3Rs (NC/S001441/1) grant. A.G. thanks the Department of Science and Technology, Government of India for the SERB-POWER fellowship (grant no. SPF/2021/000209) and the Infosys Centre for AI, IIIT-Delhi for the financial support to run this challenge. M.L., V.G., M.S. and S.J.R. were supported by SNSF grants CRSK-3_190526 and 310030_204938 awarded to S.J.R. E.U. and T.D. received funding from Priority Program 2041 (SPP 2041) ‘Computational Connectomics’ of the German Research Foundation and the Helmholtz Association’s Initiative and Networking Fund through the Helmholtz International BigBrain Analytics and Learning Laboratory under the Helmholtz International Laboratory grant agreement InterLabs-0015. The authors gratefully acknowledge the computing time granted through JARA on the supercomputer JURECA at Forschungszentrum Jülich. We also thank the grand-challenge platform for hosting the competition.

Contributions

These authors contributed equally: Shamini Ayyadhury, Cheng Ge, Anubha Gupta, Ritu Gupta, Song Gu, Yao Zhang.

J.M. conceived and designed the analysis, collected and cleaned the data, contributed analysis tools, managed challenge registration and evaluation, performed the analysis, wrote the initial manuscript and revised the manuscript; R.X. conceived and designed the analysis, managed challenge registration and evaluation and revised the manuscript. S.A., C.G., A.G., R.G., S.G. and Y.Z. conceived and designed the analysis, cleaned data, contributed labeled images, managed challenge registration and evaluation, performed the analysis and revised the manuscript. G.L., J.K., W.L., H.L., E.U. and T.D. participated in the challenge, developed the top three algorithms and made the code publicly available. J.G.A., Y.W., L.H. and X.Y. cleaned data, contributed labeled images and managed challenge registration and evaluation. M.L., V.G., M.S., S.J.R., C.K., A.P., L.E., T.M., J.M.M. and J.-N.E. contributed new labeled data to the competition. W.L., Z.L., X.C. and B.B. participated in the challenge, developed algorithms and made the code publicly available. N.F.G., D.V.V., E.W., B.A.C. and O.B. contributed public or unlabeled data to the competition. T.C. managed the challenge registration and evaluation. G.D.B. and B.W. conceived and designed the analysis and wrote and revised the manuscript.

Data Availability

The dataset is available on the challenge website at https://neurips22-cellseg.grand-challenge.org/. It is also available on Zenodo at https://zenodo.org/records/10719375 (ref. 64). Source data are provided with this paper.
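
As a quick orientation after downloading, the sketch below shows one way to inspect a label mask with NumPy and tifffile. The file path is hypothetical; consult the Zenodo record for the actual archive layout and file naming.

```python
# Hedged sketch: inspect one label mask from the downloaded benchmark.
# The path below is hypothetical; check the Zenodo record for the real layout.
import numpy as np
import tifffile

labels = tifffile.imread("labels/cell_00001_label.tiff")  # hypothetical filename
ids = np.unique(labels)
num_cells = ids.size - (1 if 0 in ids else 0)  # 0 is assumed to be background
print(f"image shape: {labels.shape}, labeled cells: {num_cells}")
```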


Code Availability

The top ten teams have made their code publicly available at https://neurips22-cellseg.grand-challenge.org/awards/. The code is also archived on Zenodo at https://zenodo.org/records/10718351.
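
For scripted access, Zenodo exposes a public REST API. The sketch below lists the files attached to the code archive; the record ID comes from the link above, and the JSON field names follow Zenodo's record schema as commonly returned, so verify them against the live response before relying on this.

```python
# List the files attached to the Zenodo code archive via Zenodo's REST API.
import requests

resp = requests.get("https://zenodo.org/api/records/10718351", timeout=30)
resp.raise_for_status()
for f in resp.json().get("files", []):
    print(f["key"], f["links"]["self"])  # filename and direct download URL
```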

Extended Data Fig. 1 Example segmentation results for four microscopy image modalities.

Extended Data Fig. 2 Example segmentation results for the post-challenge testing images.

Extended Data Fig. 3 Statistics of image size.

Supplementary Tables 1–8

Source Data

Conflict of Interest

S.G. is employed by Nanjing Anke Medical Technology Co. J.M.M. and J.-N.E. are co-owners of Cancilico. D.V.V. is a co-founder and chief scientist of Barrier Biosciences and holds equity in the company. O.B. declares the following competing financial interests: consultancy fees from Novartis, Sanofi and Amgen, outside the submitted work; research grants from Pfizer and Gilead Sciences, outside the submitted work; and stock ownership (Hematoscope) outside the submitted work. All other authors declare no competing interests.

Additional details

Created: March 27, 2024
Modified: March 27, 2024