Published February 2, 2023
Journal Article | Open Access

Deep Learning Solution for Quantification of Fluorescence Particles on a Membrane

Abstract

The detection and quantification of severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) virus particles in ambient waters using a membrane-based in-gel loop-mediated isothermal amplification (mgLAMP) method can play an important role in large-scale environmental surveillance for early warning of potential outbreaks. However, counting particles or cells in fluorescence microscopy is an expensive, time-consuming, and tedious task that only highly trained technicians and researchers can perform. Although such objects are generally easy to identify, manual annotation is prone to fatigue errors and to arbitrariness in the operator's interpretation of borderline cases. In this research, we propose a method to detect and quantify multiscale, shape-variant SARS-CoV-2 fluorescent cells generated using a portable mgLAMP system and captured with a smartphone camera. The proposed method is based on the YOLOv5 algorithm, which uses CSPNet as its backbone. CSPNet is a recently proposed convolutional neural network (CNN) architecture that splits and reuses gradient information within the network through a combination of DenseNet- and ResNet-style blocks, together with bottleneck convolution layers that reduce computation while maintaining high accuracy. In addition, we apply test-time augmentation (TTA) in conjunction with YOLO's multiscale one-stage detection heads to detect cells of varying sizes and shapes. We evaluated the model on a private dataset provided by the Linde + Robinson Laboratory, California Institute of Technology, United States. The YOLOv5-s6 model achieved a mAP@0.5 score of 90.3.
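To make the test-time augmentation idea concrete, the sketch below is a minimal, hypothetical illustration (not the paper's implementation): a detector is run on an image and on its horizontal flip, the flipped boxes are mapped back to the original coordinate frame, and the two sets are merged with a simple IoU-based de-duplication. The `toy_detect` stub stands in for a trained model; in a real pipeline a YOLOv5 model would be called here instead, and the IoU threshold is an arbitrary choice.

```python
# Hypothetical sketch of test-time augmentation (TTA) for particle counting.
# Boxes are (x1, y1, x2, y2) tuples; images are lists of pixel rows.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def hflip(img):
    """Horizontally flip an image given as a list of rows."""
    return [row[::-1] for row in img]

def flip_box(box, width):
    """Map a box detected on the flipped image back to original coordinates."""
    x1, y1, x2, y2 = box
    return (width - x2, y1, width - x1, y2)

def toy_detect(img):
    """Stand-in detector: a unit box around every nonzero pixel."""
    return [(x, y, x + 1, y + 1)
            for y, row in enumerate(img)
            for x, v in enumerate(row) if v]

def tta_detect(detect, img, width, iou_thr=0.5):
    """Run the detector on the image and its flip, then merge duplicates."""
    merged = list(detect(img))
    for fb in (flip_box(b, width) for b in detect(hflip(img))):
        # Keep a flipped-view box only if it does not overlap an existing one.
        if all(iou(fb, b) < iou_thr for b in merged):
            merged.append(fb)
    return merged

# Usage: one bright pixel should be counted once, not twice.
img = [[0, 1, 0, 0],
       [0, 0, 0, 0]]
print(tta_detect(toy_detect, img, width=4))
```

In practice the same merging step generalizes to more augmentations (scales, rotations), which is what makes TTA useful for the size and shape variation the abstract describes.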

Additional Information

© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). (This article belongs to the Collection Artificial Intelligence (AI) in Biomedical Imaging.) The authors would like to thank Jing Li and Sean McBeath for their help in acquiring microscope and smartphone camera images. This research was funded by the Bill & Melinda Gates Foundation, Grant INV-030223. Author Contributions: conceptualization, A.Z.S., A.B. and A.T.-A.; dataset collection, C.A.C. and L.D.; data annotation, A.Z.S. and C.A.C.; software, A.Z.S., A.B. and A.T.-A.; validation, A.B., A.T.-A. and C.A.C.; formal analysis, A.B. and A.S.; investigation, A.T.-A. and A.B.; writing—review and editing, A.T.-A., C.A.C. and Y.E.H.; visualization, A.Z.S., A.B. and A.T.-A.; supervision, A.T.-A.; project administration, A.B. All authors have read and agreed to the published version of the manuscript. The authors declare no conflict of interest.

Files

sensors-23-01794-v2.pdf (20.7 MB)
md5:df700834359fef07482aa33aa80c619c

Additional details

Created: August 22, 2023
Modified: October 18, 2023