Deep learning acceleration of multiscale superresolution localization photoacoustic imaging

Kim, Jongbeom and Kim, Gyuwon and Li, Lei and Zhang, Pengfei and Kim, Jin Young and Kim, Yeonggeun and Kim, Hyung Ham and Wang, Lihong V. and Lee, Seungchul and Kim, Chulhong (2022) Deep learning acceleration of multiscale superresolution localization photoacoustic imaging. Light: Science & Applications, 11. Art. No. 131. ISSN 2047-7538. PMCID PMC9095876. doi:10.1038/s41377-022-00820-w. https://resolver.caltech.edu/CaltechAUTHORS:20220512-346447700

PDF (Published Version, 3MB) - Creative Commons Attribution.
PDF (Supplemental Material, 1MB) - Creative Commons Attribution.
Video (MPEG, Supplemental Material, 9MB) - Creative Commons Attribution.
Video (MPEG, Supplemental Material, 19MB) - Creative Commons Attribution.

Abstract

A superresolution imaging approach that localizes very small targets, such as red blood cells or droplets of injected photoacoustic dye, has significantly improved spatial resolution in various biological and medical imaging modalities. However, this superior spatial resolution is achieved by sacrificing temporal resolution, because many raw image frames, each containing the localization target, must be superimposed to form a sufficiently sampled high-density superresolution image. Here, we demonstrate a computational strategy based on deep neural networks (DNNs) to reconstruct high-density superresolution images from far fewer raw image frames. The localization strategy can be applied to both 3D label-free localization optical-resolution photoacoustic microscopy (OR-PAM) and 2D labeled localization photoacoustic computed tomography (PACT). For the former, the required number of raw volumetric frames is reduced from tens to fewer than ten. For the latter, the required number of raw 2D frames is reduced 12-fold. Our proposed method therefore simultaneously improves temporal resolution (via the DNN) and spatial resolution (via the localization method) in both label-free microscopy and labeled tomography. Deep-learning-powered localization PA imaging can potentially provide a practical tool for preclinical and clinical studies requiring both fast temporal and fine spatial resolution.
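
To make the trade-off described in the abstract concrete, the sketch below illustrates the general principle in Python. It is a minimal illustration assuming a simple localization pipeline, not the authors' released code: conventional localization imaging accumulates per-frame target positions from many raw frames into a high-density map, and the paper's DNN strategy instead estimates that dense map from the sparse accumulation of far fewer frames. The function name `accumulate_localizations`, the stand-in `trained_dnn`, and all grid sizes and frame counts are hypothetical.

```python
import numpy as np

def accumulate_localizations(frames, upsample=8):
    """Superimpose per-frame target localizations on an upsampled grid.

    Conventional localization imaging: each raw frame contributes the
    position of a small target (e.g., a red blood cell or dye droplet),
    and many frames are accumulated into a high-density map.
    """
    h, w = frames[0].shape
    density = np.zeros((h * upsample, w * upsample))
    for frame in frames:
        # Localize the brightest target in the frame. Real pipelines
        # localize many targets per frame with subpixel peak fitting;
        # a single argmax keeps the sketch short.
        y, x = np.unravel_index(np.argmax(frame), frame.shape)
        density[y * upsample, x * upsample] += 1.0
    return density

rng = np.random.default_rng(0)
frames = [rng.random((64, 64)) for _ in range(480)]

# Conventional approach: many raw frames -> well-sampled density map.
dense_map = accumulate_localizations(frames)

# DNN acceleration, in outline: accumulate far fewer frames, then let
# a trained network estimate the dense map from the sparse one.
sparse_map = accumulate_localizations(frames[:40])
# dense_estimate = trained_dnn(sparse_map)  # hypothetical trained model
```

In this framing, the DNN trades acquisition time for a learned prior: the sparse map is acquired roughly an order of magnitude faster, and the network supplies the density that the missing frames would otherwise provide.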


Item Type: Article
Related URLs:
URL | URL Type | Description
https://doi.org/10.1038/s41377-022-00820-w | DOI | Article
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9095876/ | PubMed Central | Article
ORCID:
Author | ORCID
Kim, Gyuwon | 0000-0002-7259-099X
Li, Lei | 0000-0001-6164-2646
Zhang, Pengfei | 0000-0003-2674-3825
Kim, Jin Young | 0000-0002-7375-8328
Kim, Yeonggeun | 0000-0002-3962-8183
Kim, Hyung Ham | 0000-0002-6353-5550
Wang, Lihong V. | 0000-0001-9783-4383
Lee, Seungchul | 0000-0002-1034-1410
Kim, Chulhong | 0000-0001-7249-1257
Additional Information: © The Author(s) 2022. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Received 21 November 2021; Revised 24 April 2022; Accepted 26 April 2022; Published 12 May 2022.

J.K. would like to thank Joongho Ahn for fruitful discussions about the operating software of the OR-PAM system. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2020R1A6A1A03047902); by the National R&D Program through the NRF, funded by the Ministry of Science and ICT (MSIT) (2020M3H2A1078045); and by NRF grants funded by the Korea government (MSIT) (No. NRF-2019R1A2C2006269 and No. 2020R1C1C1013549). This work was partly supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-01906, Artificial Intelligence Graduate School Program (POSTECH)) and by a Korea Evaluation Institute of Industrial Technology (KEIT) grant funded by the Ministry of Trade, Industry and Energy (MOTIE). This work was also supported by a Korea Medical Device Development Fund grant funded by the MOTIE (9991007019, KMDF_PR_20200901_0008), and by the BK21 Four project.

Data availability: All data are available within the Article and Supplementary Files or from the authors upon request.

Contributions: C.K. and J.K. conceived and designed the study. J.K., J.Y.K., Y.K., and L.L. constructed the imaging systems. J.K., L.L., and P.Z. contributed to managing the imaging systems for collecting the raw data. J.K., G.K., and L.L. developed the image processing algorithms and DL networks. J.K. and G.K. contributed to training the DNNs and analyzing the results. C.K. supervised the entire project. J.K., G.K., and L.L. prepared the figures and wrote the manuscript under the guidance of C.K., L.V.W., and S.L. All authors contributed to the critical reading and writing of the manuscript.

Conflict of interest: C. Kim and J.Y. Kim have financial interests in Opticho, and the OR-PAM system (i.e., OptichoM) was supported by Opticho. L.V. Wang has financial interests in Microphotoacoustics, Inc., CalPACT, LLC, and Union Photoacoustic Technologies, Ltd., which did not support this work.
Funders:
Funding Agency | Grant Number
National Research Foundation of Korea | 2020R1A6A1A03047902
National Research Foundation of Korea | 2020M3H2A1078045
National Research Foundation of Korea | 2019R1A2C2006269
National Research Foundation of Korea | 2020R1C1C1013549
Institute of Information & Communications Technology Planning & Evaluation (IITP) | 2019-0-01906
Artificial Intelligence Graduate School Program (POSTECH) | UNSPECIFIED
Korea Evaluation Institute of Industrial Technology (KEIT) | UNSPECIFIED
Korea Medical Device Development Fund | UNSPECIFIED
Ministry of Trade, Industry and Energy (Korea) | 9991007019
Ministry of Trade, Industry and Energy (Korea) | KMDF_PR_20200901_0008
Korean Government Project | BK21
Subject Keywords: Imaging and sensing; Photoacoustics
PubMed Central ID: PMC9095876
DOI: 10.1038/s41377-022-00820-w
Record Number: CaltechAUTHORS:20220512-346447700
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20220512-346447700
Official Citation: Kim, J., Kim, G., Li, L. et al. Deep learning acceleration of multiscale superresolution localization photoacoustic imaging. Light Sci Appl 11, 131 (2022). https://doi.org/10.1038/s41377-022-00820-w
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 114700
Collection: CaltechAUTHORS
Deposited By: Tony Diaz
Deposited On: 12 May 2022 18:57
Last Modified: 13 May 2022 16:09
