Supplemental Document
Efficient, gigapixel-scale, aberration-free whole
slide scanner using angular ptychographic
imaging with closed-form solution: supplement
Shi Zhao,† Haowen Zhou,† Siyu (Steven) Lin, Ruizhi Cao, and Changhuei Yang∗
Department of Electrical Engineering, California Institute of Technology, Pasadena, California 91125, USA
†These authors contributed equally to this work
∗chyang@caltech.edu
This supplement published with Optica Publishing Group on 6 September 2024 by The Authors
under the terms of the Creative Commons Attribution 4.0 License in the format provided by the
authors and unedited. Further distribution of this work must maintain attribution to the author(s)
and the published article’s title, journal citation, and DOI.
Supplement DOI: https://doi.org/10.6084/m9.figshare.26868904
Parent Article DOI: https://doi.org/10.1364/BOE.538148
EFFICIENT, GIGAPIXEL-SCALE, ABERRATION-FREE WHOLE SLIDE SCANNER USING ANGULAR PTYCHOGRAPHIC IMAGING WITH CLOSED-FORM SOLUTION: SUPPLEMENTAL DOCUMENT
1. Extended Depth of Field of WSI-APIC
The WSI-APIC method provides a robust and efficient solution for high-resolution imaging, demonstrating exceptional tolerance to aberrations, including defocus aberration. To quantify its depth of field, we simulated a Siemens star resolution target at various z-positions during imaging. The aberration correction results with WSI-APIC and the corresponding bright-field illumination are shown in Fig. S1. The first and fourth rows of Fig. S1 present simulated brightfield images at different defocus distances; the second and fifth rows show the amplitude obtained from the WSI-APIC reconstruction; and the third and sixth rows display the pupil phase retrieved from the WSI-APIC reconstruction, indicating defocus aberrations. From this comparison, it is clear that WSI-APIC achieves an extended depth of field of over 60 μm.
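As a minimal sketch of how such a defocus simulation can be set up, a defocus of z can be modeled as an angular-spectrum phase factor applied in the pupil plane before forming the coherent image. The parameter values and the placeholder object below are illustrative assumptions, not the exact simulation settings used for Fig. S1:

```python
import numpy as np

# Illustrative parameters (assumed, not the exact values used for Fig. S1)
wavelength = 522e-9      # illumination wavelength [m]
na = 0.345               # objective numerical aperture
n_pix = 256              # patch side length [pixels]
pixel_size = 0.5e-6      # object-plane pixel size [m]
defocus = 30e-6          # defocus distance z [m]

# Spatial-frequency grid of the pupil plane (cycles per meter)
fx = np.fft.fftfreq(n_pix, d=pixel_size)
fxx, fyy = np.meshgrid(fx, fx)
fr2 = fxx**2 + fyy**2

# Circular pupil limited by the objective NA
pupil = (fr2 <= (na / wavelength)**2).astype(complex)

# Angular-spectrum defocus phase: exp(i*2*pi*z*sqrt(1/lambda^2 - fx^2 - fy^2))
kz = np.sqrt(np.maximum(1.0 / wavelength**2 - fr2, 0.0))
defocus_phase = np.exp(1j * 2 * np.pi * defocus * kz)

# Defocused coherent image of a complex object `obj` (placeholder here;
# a Siemens star pattern would be substituted in practice)
obj = np.ones((n_pix, n_pix), dtype=complex)
img = np.abs(np.fft.ifft2(np.fft.fft2(obj) * pupil * defocus_phase))**2
```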
Fig. S1. Simulated brightfield images and APIC reconstruction results at various defocus distances. The first row displays brightfield images at different defocus distances, the second row presents the intensity obtained from the APIC reconstruction, and the third row shows the aberration obtained from the APIC reconstruction.
We also validated this experimentally by imaging a Siemens star sample at various z-positions. Initially, we adjusted the sample position to find the focal plane, then collected the data and performed reconstructions. Next, all of the LEDs were illuminated, and a piece of lens tissue was placed over the sample to provide incoherent illumination for brightfield imaging. We then moved the sample along the z-direction in 5 μm intervals and performed the same brightfield imaging and WSI-APIC imaging at each position. As depicted in Fig. S2, the brightfield image became noticeably blurred at a defocus of ±20 μm, whereas the WSI-APIC reconstructed image maintained a resolution close to that of the in-focus image. Beyond a defocus of 20 μm, the resolution began to degrade slightly. The system's extended depth of field reached 40 μm, which is adequate to avoid z-scanning for most 2D pathology slides. In comparison, a brightfield microscope achieving the same lateral resolution only had a depth of field of λ/NA² = 522 nm / 0.345² = 4.4 μm. Leveraging the robust aberration correction capabilities of APIC, our WSI-APIC system has achieved a 9-fold enhancement in depth of field, markedly surpassing the performance of FPM-based systems, which typically exhibit an improvement of only 3- to 5-fold [1–3].
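As a quick numerical check of the figures quoted above, using only the wavelength, NA, and measured depth of field stated in the text:

```python
# Quick check of the depth-of-field numbers quoted above.
wavelength_um = 0.522        # illumination wavelength, 522 nm
na = 0.345                   # numerical aperture of the comparison objective
dof_brightfield = wavelength_um / na**2   # classical DOF ~ lambda / NA^2
dof_wsi_apic = 40.0          # measured extended DOF of WSI-APIC [um]

print(f"Brightfield DOF: {dof_brightfield:.1f} um")               # ~4.4 um
print(f"Enhancement:     {dof_wsi_apic / dof_brightfield:.1f}x")  # ~9x
```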
We note that there is a gap between the simulated depth-of-field estimate and the experimental outcome. Potential factors for this discrepancy include inherent system aberrations, image noise, errors introduced by inaccurate position calibration, and the limited coherence of the light source. With better system calibration and enhanced noise reduction, the depth of field of our WSI-APIC system could be further extended.
Fig. S2. Brightfield images and APIC reconstruction results at various defocus distances. The first row displays brightfield images at different defocus distances, the second row presents the intensity obtained from the APIC reconstruction, and the third row shows the aberration obtained from the APIC reconstruction.
2. Algorithm Complexity of WSI-APIC
To thoroughly evaluate the practical applicability of our GPU-accelerated WSI-APIC algorithm under different hardware conditions, we analyzed its computational complexity. We define the patch size of the input images as n pixels, the number of NA-matching measurements as p, and the number of darkfield measurements as m. The time complexity of the algorithm is predominantly determined by solving the linear equation system derived from the convolution operation during darkfield reconstruction (Eq. (13)). For this purpose, we employed the solve function from the PyTorch linalg library [4], which utilizes LU decomposition. The time complexity of this step is approximately O(l³), where l represents the size of the convolution matrix C, which is linearly proportional to the total number of pixels in the measurements. Given that this linear equation system needs to be solved for each darkfield measurement, the overall time complexity of the algorithm scales as O(mn⁶). The convolution matrix C is also the primary consumer of memory resources, resulting in a space complexity of O(n⁴). It is worth noting that in the WSI-APIC algorithm, after solving each linear equation, the convolution matrix C is cleared from GPU memory to free up space.
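As an illustrative sketch of this loop, the per-measurement solve and the explicit release of C follow the description above, while the random stand-ins for C and the right-hand side, and the specific sizes, are assumptions for demonstration only:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Illustrative sizes (assumed): an n-by-n patch gives a convolution matrix C
# whose side length l is proportional to the number of pixels in the patch.
n = 64                      # patch side length in pixels (illustrative)
l = n * n                   # side length of the convolution matrix C
m = 3                       # number of darkfield measurements (illustrative)

results = []
for _ in range(m):
    # Stand-ins for the convolution matrix C and right-hand side derived
    # from one darkfield measurement (random here, only to show the flow).
    C = torch.randn(l, l, dtype=torch.complex64, device=device)
    b = torch.randn(l, 1, dtype=torch.complex64, device=device)

    # LU-based dense solve: roughly O(l^3) = O(n^6) time per measurement.
    x = torch.linalg.solve(C, b)
    results.append(x.cpu())

    # C dominates GPU memory (O(n^4) space), so release it before the next
    # iteration allocates a fresh convolution matrix.
    del C, b
    if device.type == "cuda":
        torch.cuda.empty_cache()
```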
3. Auto-stitching Algorithm Robustness
In our auto-stitching algorithm, we applied the Scale-Invariant Feature Transform (SIFT), one of the most widely used techniques for feature detection and image stitching, to detect distinctive keypoints within overlapping regions and to estimate the geometric transformation required to align different FOVs [5]. In our experiments, as long as the overlap area contains sufficient sample information (rather than blank background), the SIFT algorithm accurately identifies the drift between two images, enabling precise stitching. This makes it applicable to different tissue types. For H&E slides with varying staining patterns, SIFT demonstrates high robustness because its keypoint detection and descriptor generation rely primarily on local gradient information rather than absolute intensity.
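A minimal sketch of this drift estimation between two overlapping FOVs is given below, assuming OpenCV's SIFT implementation, Lowe's ratio test, and a RANSAC-based partial-affine fit; these are illustrative choices, not necessarily the exact parameters of our pipeline:

```python
import cv2
import numpy as np

def estimate_drift(fov_a: np.ndarray, fov_b: np.ndarray):
    """Estimate the (dx, dy) shift aligning fov_b onto fov_a using SIFT keypoints.

    fov_a, fov_b: 8-bit grayscale images of two neighboring FOVs whose
    overlap region contains sample features (not blank background).
    """
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(fov_a, None)
    kp_b, des_b = sift.detectAndCompute(fov_b, None)

    # Match descriptors and keep unambiguous matches (Lowe's ratio test).
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des_b, des_a, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    src = np.float32([kp_b[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Robustly estimate the transform; its translation part is the drift.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M[0, 2], M[1, 2]
```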
References
1. G. Zheng, R. Horstmeyer, and C. Yang, "Wide-field, high-resolution Fourier ptychographic microscopy," Nature Photon. 7, 739–745 (2013).
2. M. Liang and C. Yang, "Implementation of free-space Fourier Ptychography with near maximum system numerical aperture," Opt. Express 30, 20321 (2022).
3. H. Zhou, C. Shen, M. Liang, and C. Yang, "Analysis of postreconstruction digital refocusing in Fourier ptychographic microscopy," Opt. Eng. 61 (2022).
4. A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Köpf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, "PyTorch: An Imperative Style, High-Performance Deep Learning Library," (2019).
5. D. G. Lowe, "Object recognition from local scale-invariant features," in Proceedings of the Seventh IEEE International Conference on Computer Vision (IEEE, 1999), pp. 1150–1157, vol. 2.