Wavefront image sensor chip
Xiquan Cui,1,* Jian Ren,1 Guillermo J. Tearney,2 and Changhuei Yang1

1Department of Electrical Engineering and Bioengineering, California Institute of Technology, Pasadena, CA 91125, USA
2Harvard Medical School and the Wellman Center for Photomedicine, Massachusetts General Hospital, 50 Blossom St., Boston, MA 02114, USA
*xiquan@caltech.edu
Abstract:
We report the implementation of an image sensor chip, termed
wavefront image sensor chip (WIS), that can measure both
intensity/amplitude and phase front variations of a light wave separately and
quantitatively. By monitoring the tightly confined transmitted light spots
through a circular aperture grid in a high Fresnel number regime, we can
measure both intensity and phase front variations with a high sampling
density (11 μm) and high sensitivity (the sensitivity of normalized phase
gradient measurement is 0.1 mrad under the typical working condition). By
using WIS in a standard microscope, we can collect both bright-field
(transmitted light intensity) and normalized phase gradient images. Our
experiments further demonstrate that the normalized phase gradient images
of polystyrene microspheres, unstained and stained starfish embryos, and
strongly birefringent potato starch granules are improved versions of their
corresponding differential interference contrast (DIC) microscope images in
that they are artifact-free and quantitative. Besides phase microscopy, WIS
can benefit machine recognition, object ranging, and texture assessment for
a variety of applications.
©2010 Optical Society of America
OCIS codes: (110.1220) Apertures; (130.0130) Integrated optics; (010.7350) Wave-front sensing.
References and links
1. S. L. Stanley, Jr., "Amoebiasis," Lancet 361(9362), 1025–1034 (2003).
2. M. M. Haglund, M. S. Berger, and D. W. Hochman, "Enhanced optical imaging of human gliomas and tumor margins," Neurosurgery 38(2), 308–317 (1996).
3. J. Van Blerkom, H. Bell, and G. Henry, "The occurrence, recognition and developmental fate of pseudo-multipronuclear eggs after in-vitro fertilization of human oocytes," Hum. Reprod. 2(3), 217–225 (1987).
4. R. J. Sommer and P. W. Sternberg, "Changes of induction and competence during the evolution of vulva development in nematodes," Science 265(5168), 114–118 (1994).
5. G. Nomarski, "New theory of image formation in differential interference microscopy," J. Opt. Soc. Am. 59, 1524 (1969).
6. F. Zernike, "Phase contrast, a new method for the microscopic observation of transparent objects," Physica 9(7), 686–698 (1942).
7. R. Hoffman and L. Gross, "The modulation contrast microscope," Nature 254(5501), 586–588 (1975).
8. B. C. Albensi, E. V. Ilkanich, G. Dini, and D. Janigro, "Elements of scientific visualization in basic neuroscience research," Bioscience 54(12), 1127–1137 (2004).
9. P. Marquet, B. Rappaz, P. J. Magistretti, E. Cuche, Y. Emery, T. Colomb, and C. Depeursinge, "Digital holographic microscopy: a noninvasive contrast imaging technique allowing quantitative visualization of living cells with subwavelength axial accuracy," Opt. Lett. 30(5), 468–470 (2005).
10. W. Choi, C. Fang-Yen, K. Badizadegan, S. Oh, N. Lue, R. R. Dasari, and M. S. Feld, "Tomographic phase microscopy," Nat. Methods 4(9), 717–719 (2007).
11. M. V. Sarunic, S. Weinberg, and J. A. Izatt, "Full-field swept-source phase microscopy," Opt. Lett. 31(10), 1462–1464 (2006).
12. A. Barty, K. A. Nugent, D. Paganin, and A. Roberts, "Quantitative optical phase microscopy," Opt. Lett. 23(11), 817–819 (1998).
13. X. Q. Cui, M. Lew, and C. H. Yang, "Quantitative differential interference contrast microscopy based on structured-aperture interference," Appl. Phys. Lett. 93(9), 091113 (2008).
#128420 - $15.00 USD  Received 13 May 2010; revised 9 Jul 2010; accepted 20 Jul 2010; published 23 Jul 2010
(C) 2010 OSA  2 August 2010 / Vol. 18, No. 16 / OPTICS EXPRESS 16685
14. B. C. Platt and R. Shack, "History and principles of Shack-Hartmann wavefront sensing," J. Refract. Surg. 17(5), S573–S577 (2001).
15. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, "Imaging intracellular fluorescent proteins at nanometer resolution," Science 313(5793), 1642–1645 (2006).
16. M. J. Rust, M. Bates, and X. W. Zhuang, "Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM)," Nat. Methods 3(10), 793–796 (2006).
17. R. V. Shack and B. C. Platt, "Production and use of a lenticular Hartmann screen," J. Opt. Soc. Am. 61, 656 (1971).
18. Y. Carmon and E. N. Ribak, "Phase retrieval by demodulation of a Hartmann-Shack sensor," Opt. Commun. 215(4-6), 285–288 (2003).
19. http://www.olympusmicro.com/primer/anatomy/kohler.html.
20. M. R. Arnison, K. G. Larkin, C. J. R. Sheppard, N. I. Smith, and C. J. Cogswell, "Linear phase imaging using differential interference contrast microscopy," J. Microsc. 214(1), 7–12 (2004).
21. S. B. Mehta and C. J. R. Sheppard, "Quantitative phase-gradient imaging at high resolution with asymmetric illumination-based differential phase contrast," Opt. Lett. 34(13), 1924–1926 (2009).
22. G. Popescu, T. Ikeda, R. R. Dasari, and M. S. Feld, "Diffraction phase microscopy for quantifying cell structure and dynamics," Opt. Lett. 31(6), 775–777 (2006).
23. J. G. Wu, Z. Yaqoob, X. Heng, L. M. Lee, X. Q. Cui, and C. H. Yang, "Full field phase imaging using a harmonically matched diffraction grating pair based homodyne quadrature interferometer," Appl. Phys. Lett. 90(15), 151123 (2007).
24. M. J. Booth, M. A. A. Neil, R. Juskaitis, and T. Wilson, "Adaptive aberration correction in a confocal microscope," Proc. Natl. Acad. Sci. U.S.A. 99(9), 5788–5792 (2002).
25. M. Rueckel, J. A. Mack-Bucher, and W. Denk, "Adaptive wavefront correction in two-photon microscopy using coherence-gated wavefront sensing," Proc. Natl. Acad. Sci. U.S.A. 103(46), 17137–17142 (2006).
1. Introduction
A light wave contains two primary sets of characteristics – intensity/amplitude variations and
phase front variations. At present, all commercial image sensor chips are designed to operate
much like our retinas and are only responsive to the intensity variations of the light wave.
However, the phase front of the light wave carries additional information that may not be
present in the intensity variations. For example, many biological specimens are effectively
transparent and only modulate the phase front of light transmitted through them. Optical phase
microscopes are greatly valued for their ability to render contrast based on refractive index
variations in unstained biological samples, and are useful in biomedical applications where
minimal sample preparation procedures are required. Such applications include field
analysis of bloodborne and waterborne pathogens [1], where cost considerations and ease of
use are important, and analysis of biopsy sections to determine tumor margins during surgical
procedures, where rapid processing is critical [2]. Phase microscopes are also critical in
scenarios where staining is undesirable or simply not an option. Such applications include
examinations of oocytes and embryos during in-vitro fertilization procedures [3], and
longitudinal imaging of live cells or organisms [4].
DIC microscopes [5] and, to a lesser extent, phase contrast microscopes [6] and Hoffman
phase microscopes [7] have been the primary phase microscopes of choice for the past five
decades. However, the phase information is mixed with the intensity information for these
phase microscopy techniques. This limitation introduces ambiguities in the rendered images
and, additionally, prevents straightforward quantitative phase analysis. Moreover, these phase
microscopes require special optical components that have to be switched in and out during
operation. Additionally, DIC images of birefringent samples, such as muscle tissues and
collagen matrices, can have significant artifacts as the DIC microscope uses polarization in its
phase-imaging strategy [8]. The relatively high cost of such systems also prevents the broader
use of such phase microscopes. In recent years, numerous novel phase microscopy techniques
have been developed [9–11]. However, the need for laser sources and the relatively high level
of sophistication have thus far impeded the broader adoption of these techniques as a
convenient and viable replacement for the DIC microscope. Quantitative optical phase [12]
can also be calculated by collecting two or three successive images of the sample around its focal
plane. However, this technique requires physically actuating the camera into
distinct positions, and is therefore intrinsically limited in speed. Finally, these systems
typically use relatively complex and bulky optical arrangements to translate the phase front
variations into the intensity variations that are then detectable by commercial image sensor
chips.
Based on our proof-of-concept experiment [13], we believe that the implementation of a
sensor chip that is capable of phase front sensing can provide a simpler and more sensible
solution. Such a sensor chip can substitute for the conventional camera in a standard
microscope and provide a more direct means for performing phase imaging. If such a chip can
be fabricated at the foundry level, it can significantly lower the cost of phase microscopy
systems and allow greater phase imaging access to the broader biomedical community.
In this paper, we report the implementation of such an image sensor chip, termed
wavefront image sensor chip (WIS), that is capable of simultaneously measuring both the
intensity and the phase front variations of an incident light field. The basic WIS design is
closely related to the Hartmann sieve [14] – the predecessor of Hartmann Shack sensors. Here
we incorporate a grid of apertures directly on a sensor chip at close proximity to the sensor
pixels. Unlike in a typical Hartmann sieve design, the WIS is able to achieve a high grid
density by operating in a high Fresnel number regime.
In Section 2, we will describe the implementation and characterization of the first fully
integrated WIS prototype device. In Section 3, we will demonstrate its capability for
converting a standard microscope into a wavefront microscope (WM). In Section 4, we will
report the use of the WM for imaging polystyrene microspheres, unstained and stained
starfish embryos, and strongly birefringent potato starch granules. In Section 5, we will
discuss the challenges and opportunities of the further development of the WIS. In Section 6,
we will conclude by briefly discussing other applications of the WIS beyond enabling
wavefront microscopy.
2. Wavefront Image Sensor Chip
2.1. Principle
The WIS consists of a 2D array of circular apertures defined on top of a metal coated image
sensor chip (e.g. a charge-coupled device (CCD) or complementary metal-oxide-
semiconductor (CMOS) chip); a transparent spacer separates the apertures from the sensor
pixels (Fig. 1(a), 1(b)). The coordinate systems we use in this paper are shown in Fig. 1(b).
When a plane light wave is incident upon the aperture array, the transmission through each
aperture forms a projection spot on the sensor pixels underneath. When a light wave with an
unknown wavefront impinges upon the aperture array, the center of each projection spot will
shift according to the phase gradient of the light wave over its corresponding aperture.
Mathematically, this shift in the x direction can be expressed as:

\[ s_{PhasGrad}(x,y) \approx \frac{H}{n}\,\theta_x(x,y) = \frac{H\lambda}{2\pi n}\,\frac{\partial \phi(x,y)}{\partial x}, \tag{1} \]

when \( s_{PhasGrad}(x,y) \ll H \), where H is the distance from the aperture to the image sensor
chip, θ_x(x, y) is the wavelength-independent normalized phase gradient of the light wave in
the x direction over the aperture (x, y), λ is the wavelength of the light wave, n is the
refractive index of the spacer material, and ∂φ(x, y)/∂x is the wavelength-dependent phase
gradient in the x direction over the aperture (see Fig. 1(a), 1(b) for coordinate references)
[13]. Corresponding expressions for the light wave in the y direction can be written in a
similar fashion. The close relationship between θ_x and ∂φ(x, y)/∂x, and our subsequent
choice of using θ_x, deserve some elaboration. The normalized phase gradient θ_x (and θ_y) can be
appreciated as a wavelength-independent measure of the angle at which the incoming light
impinges upon the aperture. In other words, θ_x (and θ_y) measures the directionality of the
incoming light wave. As the light source employed in these experiments is a broadband
halogen lamp in a standard microscope, the choice of θ_x (and θ_y) for subsequent discussions is
the more appropriate one.
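As a concrete illustration of Eq. (1), converting a measured spot displacement into a normalized phase gradient is a one-line calculation. The sketch below is ours, not the authors' code; it assumes the prototype's reported geometry (aperture-to-pixel distance H = 28 μm, effective spacer index n = 1.6).

```python
def normalized_phase_gradient(shift_um, H_um=28.0, n=1.6):
    """Invert Eq. (1): a projection-spot shift s (in um) maps to the
    wavelength-independent normalized phase gradient theta = n*s/H (rad),
    valid in the small-shift regime s << H."""
    return n * shift_um / H_um

# The reported 1.8 nm spot-center precision corresponds to ~0.1 mrad:
theta = normalized_phase_gradient(1.8e-3)  # ~1.0e-4 rad
```

Note that this conversion is consistent with the sensitivity figures quoted later: a 1.8 nm shift precision yields roughly a 0.1 mrad phase gradient sensitivity.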
Fig. 1. Wavefront image sensor chip. a, Schematic of the device under a vertical plane
illumination. The WIS apertures (white circles) are defined on the metal (gray) coated 2D
CMOS image sensor chip (light gray grid), the transparent spacer separates the apertures away
from the image sensor chip, and the aperture projections (red circles) are evenly distributed on
the image sensor chip. b, Change of the transmission and shift of the aperture projections under
an unknown light wave. c, Simulation of the diffraction (in SU8 resin) of a 6 μm diameter WIS
aperture defined on a perfect electric conductor (PEC) layer illuminated by a halogen lamp. d,
The experimental data showing the self-focusing effect of a WIS aperture on an Al coated glass
cover slip. The insets are the cross-sections of the aperture diffraction perpendicular to the z
axis.
In addition to providing a measure of θ_x (and θ_y), each projection spot also provides a
measurement of the local intensity of the light wave over its corresponding aperture. We
obtain this value by summing the total image sensor signal associated with the projection spot
(Fig. 1(b)). Therefore, the WIS is able to retrieve the intensity and phase information of the
unknown light wave separately by simply evaluating two independent aspects of each
projection spot. We assign a grid of N × N pixels underneath each aperture to measure the
transmission and shift of the projection spot. It has been proven in other studies [15,16] that
estimating the shift of the projection spot with subpixel precision can be achieved with
excellent precision even when the number of pixels involved (N) is small. If an image sensor
chip with MN × MN pixels is used, we can then create a WIS with M × M apertures, or
effectively generate a light wave image of M × M pixels. Throughout this article, we will refer
to the pixels on the image sensor as sensor pixels, and the smallest image point in the rendered
light wave image as image pixels.
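The pixel bookkeeping described above can be sketched as follows. `wis_intensity_image` is a hypothetical helper (names and shapes ours): it bins an MN × MN sensor frame into an M × M intensity image by summing each aperture's dedicated N × N grid of sensor pixels.

```python
import numpy as np

def wis_intensity_image(raw, N=5):
    """Sum each dedicated N x N sensor-pixel grid to obtain the
    transmitted-light intensity image pixel for its aperture.
    raw is an (M*N, M*N) array of sensor readings."""
    M0, M1 = raw.shape[0] // N, raw.shape[1] // N
    return raw.reshape(M0, N, M1, N).sum(axis=(1, 3))

frame = np.ones((25, 25))            # toy frame: 5 x 5 apertures
image = wis_intensity_image(frame)   # 5 x 5 image; each pixel sums 25 readings
```

The spot-shift (phase) channel operates on the same N × N grids, as described in Section 2.4.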
WIS has close parallels to the Shack-Hartmann sensor [17] and its predecessor, the
Hartmann sieve [14]. The Hartmann sieve, which consists of a macro-scale aperture array
arranged above an image sensor grid, was first proposed as a system for examining the optical
aberrations of a telescope. However, the broadened light spots due to diffraction in such a
system significantly limit sensitive detection and necessitate wide separation between the
apertures, which in turn also limits the number of useful image pixels. The incorporation of
the lens array into Shack-Hartmann sensors allowed the formation of tighter light spots.
Nevertheless, the relatively large lens dimensions (typically on the order of 100 microns), the
associated low image pixel numbers and the general assembly difficulties have limited such
sensors to phase measurements of relatively simple wavefronts in astronomy, metrology, and
ophthalmology [14].
Our technology differs from these conventional methods in that we recognize that the
projection spot from an aperture placed in appropriately close proximity to an image sensor
grid operates in a high Fresnel number optical regime (0.86 in our case) (more specifically,
F = nd²/(4Hλ) = [1.6 × (6 μm)²]/[4 × 28 μm × 0.6 μm] = 0.86
confined (Fig. 1(c)). In other words, light transmitted through an aperture would actually
focus itself near the aperture before spreading (diffraction); we design our device such that the
image sensor grid is located at the plane where this self-focusing occurs. Additionally, the
lateral shift of this projection spot is still responsive to the phase front gradient of the incident
light wave. These two facts enable us to create a simple-to-implement, highly compact (over a
sensor area of 3.08 mm × 3.85 mm), high-density (11 μm spacing between apertures), high
image pixel count (280 × 350 image pixels) and highly sensitive WIS chip.
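The quoted Fresnel number follows directly from the stated geometry; a quick sanity check (function and variable names ours):

```python
def fresnel_number(n, d_um, H_um, wavelength_um):
    """F = n*d^2/(4*H*lambda): aperture diameter d, aperture-to-sensor
    distance H, spacer refractive index n, vacuum wavelength lambda."""
    return n * d_um ** 2 / (4 * H_um * wavelength_um)

F = fresnel_number(n=1.6, d_um=6.0, H_um=28.0, wavelength_um=0.6)  # ~0.86
```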
2.2. Self-focusing effect of the WIS apertures in the high Fresnel number regime
We performed both a 3D and broadband finite-difference time-domain (FDTD) simulation
(CST Microwave Studio from CST of America, Inc.) to determine the distribution of the light
transmitted through a WIS aperture. The aperture diameter was set at 6 μm, and the refractive
index of the spacer material was set at 1.6 (Fig. 1(c)). To reduce the complexity of the
simulation, a 150 nm thick perfect electric conductor (PEC) film was modeled in place of the
Al layer we deposited on our WIS chip. As we used a broadband light source - a halogen lamp
- for all experiments in this article, our simulation was performed over the entire spectrum
range of the halogen lamp (473–713 nm) at a wavelength interval of 20 nm. We summed the
spectrally weighted power flow distributions to approximate the real light projection of the
WIS aperture. As we can see from Fig. 1(c), the light projection shrinks to a tightly confined
spot (in the high Fresnel number regime) before expanding in an approximately linear fashion
(as predicted by considering diffraction in the low Fresnel number regime).
Next, we implemented an experiment to quantitatively measure the actual projection light
spot of a WIS aperture. First we punched a 6 μm aperture on an Al coated (150 nm thick)
glass cover slip (refractive index of 1.5) with a focused ion beam (FIB) machine. Then we
illuminated the aperture with the halogen lamp, and used a microscope with an oil immersion
100× objective lens (N.A. = 1.3) to image the projection spot at different axial displacements
(Fig. 2). The result is plotted in Fig. 1(d). We can see that the spot's width (full width at half
maximum, FWHM) reached a minimum (measured width = 3.8 μm) at an axial displacement
of H = 18 μm, 37% smaller than the aperture diameter itself. This spot size confinement is
surprisingly robust; we found that the spot diameter remained below 5 μm (FWHM) for H
ranging between 4 and 34 μm.
Fig. 2. Measuring the diffraction of the WIS aperture under the illumination of a halogen lamp.
A 6 μm aperture was first etched on an Al coated (150 nm thick) glass cover slip (refractive
index of 1.5), and then illuminated by a halogen lamp (the central wavelength was 0.6 μm and
the FWHM of the spectrum was 0.2 μm). The cross-sections of the aperture diffraction at
different z planes were imaged by a microscope with an oil (refractive index of 1.5) immersion
100× objective (N.A. = 1.3) by moving the focal plane of the microscope along the z axis with a
micrometer at intervals of 2 μm.
We note that these simulation and experimental results share similar trends but do differ to
some extent. We believe that the discrepancies are attributable to the aperture profile
difference (the experimentally milled apertures tend to be rounder around the edges and
texturally rougher than the simulation ideals), the limitation of the finite grid density
associated with the simulation and the inadequacies of the simulation’s spectral range
coverage. Our WIS prototype was designed and implemented based upon our experimental
findings.
2.3. Fabrication
Our high-density WIS (Fig. 3(a), 3(b)) prototype was fabricated with a commercially
available CMOS image sensor chip (MT9P031I12STM from Aptina Imaging) as the
substrate. There are 1944 × 2592 pixels of size 2.2 μm on the sensor. We removed its glass
window to gain access to the surface of the sensor. Next we planarized the surface of the
sensor die with a 10 μm thick layer of SU8 resin, and then coated it with a 150 nm thick layer
of Al to mask the sensor from light. The SU8 layer served two functions. First, the SU8 layer
nullified the optical properties of the lens on top of each sensor pixel. These tiny and
relatively low-quality lenses are ubiquitous in the current generation of CMOS sensors. They
serve to more efficiently funnel light onto the light sensitive region of the sensor pixels. Their
presence should have minimal impact on our WIS prototype and, in fact, they should improve
light collection efficiency and boost our signals. However, to make our initial WIS
demonstration clear and unambiguous, we decided to nullify the lenses with the SU8 layer.
The SU8 also served as a spacer between the Al layer and the sensor pixels. A stack of
proprietary materials in the sensor functioned as an additional spacer as well. Next, we used
photolithography to create a 2D aperture array (280 × 350 apertures, 6 μm aperture diameter
and 11 μm aperture-to-aperture spacing) in the Al film (Fig. 3(a)).
Fig. 3. Prototypes of the WIS and WM. a, Apertures with 6 μm diameter and 11 μm spacing
defined on the Al coated WIS. b, Fully integrated WIS is the size of a dime. c, Converting a
standard optical microscope into a WM by simply adding the WIS onto the camera port.
We assigned a dedicated grid of 5 × 5 sensor pixels underneath each aperture to detect the
associated projection spot. For all experiments discussed in this article, the total signal
accumulation time was 1.0 second. This integration time directly relates to our phase gradient
sensitivity and can be shortened at the cost of decreased sensitivity. The typical light
intensity on the sensor is 9.2 μW/cm². The summation of the signals detected by these pixels
is a measure of the light intensity on the aperture. The lateral shift of the projection spot is
related to the normalized phase gradient of the incident light over the aperture. We employed
the algorithm described in Section 2.4. to determine the lateral shift with excellent sub-pixel
accuracy. This algorithm is a modified version of the Fourier-demodulation algorithm for
wavefront sensing [18]. By assuming the effective refractive index of the whole stack of the
SU8 and proprietary materials is 1.6, we estimated that the distance H from the aperture to the
actual photosensitive area of the sensor pixels was 28 ± 1 μm (Section 2.5.). This
configuration generated smoothly focused aperture projections on the image sensor chip, and
enabled good performance of our WIS prototype. Based on our experimental data (Section
2.2.), we determined that the projection spots have a diameter of 4.5 μm (FWHM), 25%
narrower than the parent apertures. The slight mismatch between our achieved and the optimal
spot size is attributable to the fact that our fabricated effective SU8 spacer thickness was
larger than expected. Nevertheless, we expected this WIS prototype to be able to perform
well.
Our calibration experiments (Section 2.5.) established that under the typical working
condition we can determine the center of the projection spot with a precision of 1.8 nm (equal
to 8 × 10⁻⁴ sensor pixel width); this translates to a local normalized phase gradient sensitivity
of 0.1 mrad. Our experiments also show that we can measure the local normalized phase
gradient linearly over a range of ±15 mrad. This range is adequate for addressing our
microscopy application needs. If desired, our WIS prototype is capable of measuring
normalized phase gradients over a broader range as long as we collect a more extended set of
calibration data.
2.4. Cyclic algorithm for estimating the center of each projection spot
The centroid method is the most straightforward algorithm for determining the center of each
projection spot. However, because the centroid method assigns significant weights to the more
noise-corrupted data from dark pixels, it is intrinsically an unstable position estimator. The
Fourier-demodulation algorithm recently developed by Ribak's group [18] for light spots
arranged in an approximately regular grid is intrinsically more robust. We developed a
modified version, termed the cyclic algorithm, that is suited for our purpose. This
algorithm uses cyclic, unit-magnitude complex weights. To clearly illustrate its principle, we
will first discuss the cyclic algorithm for the 1D case in the s direction. Suppose the
distribution of a light spot on the image sensor chip is I(s) and concentrated in a
window [−T/2, T/2]; then we can define a complex number s̃₀ for its initial position,

\[ \tilde{s}_0 = \int_{-T/2}^{T/2} I(s)\,\exp\!\left(i\frac{2\pi}{T}s\right) ds. \tag{2} \]
If the center of the light spot shifts by Δs, the complex number s̃₁ for the second position will
be

\[
\begin{aligned}
\tilde{s}_1 &= \int_{-T/2}^{T/2} I(s-\Delta s)\,\exp\!\left(i\frac{2\pi}{T}s\right) ds
 = \int_{-T/2-\Delta s}^{T/2-\Delta s} I(u)\,\exp\!\left(i\frac{2\pi}{T}(u+\Delta s)\right) du \\
 &= \exp\!\left(i\frac{2\pi}{T}\Delta s\right)\int_{-T/2-\Delta s}^{T/2-\Delta s} I(u)\,\exp\!\left(i\frac{2\pi}{T}u\right) du
 \approx \tilde{s}_0\,\exp\!\left(i\frac{2\pi}{T}\Delta s\right).
\end{aligned}
\tag{3}
\]
The last approximation holds when Δs << T, which is usually the case for wavefront
microscopy. We can see that s̃₁ is simply s̃₀ rotated by an angle 2πΔs/T in the complex
plane, so the shift of the light spot can be easily calculated from the above two complex
numbers,

\[ \Delta s = \frac{T}{2\pi}\left[\mathrm{angle}(\tilde{s}_1) - \mathrm{angle}(\tilde{s}_0)\right]. \tag{4} \]
For the discrete data from the 2D image sensor pixels, we assigned a dedicated grid of 5 ×
5 sensor pixels (the horizontal and vertical indexes of the pixels are m = −2, −1, 0, 1, 2 and
n = −2, −1, 0, 1, 2, respectively) underneath each aperture to measure the shift of the light spot,
and we replaced the integrals in Eqs. (2)-(4) with summations,

\[
\begin{aligned}
\tilde{s}_0 &= \sum_{m=-2}^{2}\sum_{n=-2}^{2} I_{mn}(s)\,\exp\!\left(i\frac{2\pi}{5}n\right), \\
\tilde{s}_1 &= \sum_{m=-2}^{2}\sum_{n=-2}^{2} I_{mn}(s+\Delta s)\,\exp\!\left(i\frac{2\pi}{5}n\right), \\
\Delta s &= \frac{11\,\mu\mathrm{m}}{2\pi}\left[\mathrm{angle}(\tilde{s}_1) - \mathrm{angle}(\tilde{s}_0)\right],
\end{aligned}
\tag{5}
\]

where the window width T corresponds to the 5 sensor pixels of 2.2 μm pitch, i.e. 11 μm.
There might be some bias introduced by this simple replacement. However, this bias can
be corrected with careful calibrations (Section 2.5.).
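A minimal 1D sketch of the cyclic algorithm of Eqs. (2)-(5) follows (our own implementation, not the authors' code): the spot shift is read off the phase rotation of the first Fourier coefficient of the windowed intensity profile.

```python
import numpy as np

def cyclic_shift_1d(I0, I1, T=5):
    """Estimate the sub-pixel shift (in sensor pixels) of a light spot
    between two length-T intensity profiles, per Eqs. (2)-(5)."""
    n = np.arange(T) - T // 2
    w = np.exp(1j * 2 * np.pi * n / T)       # cyclic, unit-magnitude weights
    s0 = np.sum(I0 * w)                      # Eq. (2), discretized
    s1 = np.sum(I1 * w)                      # coefficient of the shifted spot
    # Eq. (4): the shift is the phase rotation scaled by T/(2*pi)
    return T / (2 * np.pi) * np.angle(s1 / s0)

# A sinusoidal spot profile shifted by 0.3 pixels is recovered exactly:
n = np.arange(5) - 2
I0 = 1 + np.cos(2 * np.pi * n / 5)
I1 = 1 + np.cos(2 * np.pi * (n - 0.3) / 5)
shift = cyclic_shift_1d(I0, I1)              # 0.3 (sensor pixels)
```

For the 2D sensor data, the 5 × 5 grid is first summed along the orthogonal axis (the sum over m in Eq. (5)), and the resulting pixel shift is converted to micrometers via the 2.2 μm pixel pitch.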
2.5. Calibration experiment for the normalized phase gradient response of our WIS
Fig. 4. Calibration experiment for the normalized phase gradient measurement of the WIS. a, b,
The experimental setup under a vertical illumination and a tilted illumination which imposes a
specific normalized phase gradient θ_x or θ_y with respect to the WIS. c, d, The normalized phase
gradient responses of the WIS in both the x and y directions. Each data point is the average
normalized phase gradient measurement of the 350 apertures from the central row of our WIS;
each error bar corresponds to the standard deviation among them.
In order to test the linearity and sensitivity of our WIS, we introduced a specific normalized
phase gradient to all WIS apertures (Fig. 4(a), 4(b)) by illuminating them with a plane halogen
light at a corresponding incident angle. Figure 4(c), 4(d) show good linearity of the
normalized phase gradient responses in both the x and y directions. Each data point is the
average normalized phase gradient measurement of the 350 apertures from the central row of
our WIS; each error bar corresponds to the standard deviation among them. This normalized
phase gradient variation between these apertures is ~0.5 mrad.
From the slopes of the calibration curves, we can estimate the distance from the WIS
apertures to the photo-sensitive areas of the sensor pixels. They are 27.2 μ m and 28.0 μm in
the x and y directions respectively, assuming the effective refractive index of the whole stack
of the SU8 and proprietary materials is 1.6. The discrepancy between these two numbers
might be due to the slight aperture-pixel misalignment in the x and y directions.
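In other words, with s ≈ (H/n)θ from Eq. (1), the calibration slope ds/dθ equals H/n, so H = n × slope. A sketch of this inversion (the 17 μm/rad slope below is an illustrative value consistent with the reported 27.2 μm, not a number taken from the calibration data):

```python
def spacer_distance_from_slope(slope_um_per_rad, n=1.6):
    """Recover the aperture-to-pixel distance H from the calibration
    slope d(shift)/d(theta) = H/n implied by Eq. (1)."""
    return n * slope_um_per_rad

H_x = spacer_distance_from_slope(17.0)  # 27.2 um for a 17 um/rad slope
```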
From the fluctuation of each aperture projection spot over time, we estimate that the
sensitivity of our normalized phase gradient measurement is better than 0.1 mrad under the
typical working condition: 1.0 second total signal accumulation time and 9.2 μW/cm² light
intensity on the WIS.
2.6. Influence of the normalized intensity gradient on the measurement of the normalized
phase gradient by the WIS
Fig. 5. Normalized intensity gradient can also induce a shift to each aperture projection spot of
the WIS.
Besides the normalized phase gradient of the light wave, the normalized intensity gradient can
also induce a shift of each aperture projection spot of the WIS. For example, if at the center of
a WIS aperture (with a radius of a) the intensity gradient of the light wave is ∂I₀/∂x and the
intensity is I₀ (Fig. 5), the center of the projection spot will be shifted by approximately:

\[ s_{IntenGrad} = \frac{\displaystyle\int_{-a}^{a}\int_{-\sqrt{a^2-t^2}}^{\sqrt{a^2-t^2}} s\left(I_0 + \frac{\partial I_0}{\partial x}\,s\right) ds\,dt}{\displaystyle\int_{-a}^{a}\int_{-\sqrt{a^2-t^2}}^{\sqrt{a^2-t^2}} \left(I_0 + \frac{\partial I_0}{\partial x}\,s\right) ds\,dt} = \frac{a^2}{4}\,\frac{\partial I_0/\partial x}{I_0}, \tag{6} \]
assuming the intensity change is slow over the WIS aperture, i.e. \( \partial I_0/\partial x \gg a\,\partial^2 I_0/\partial x^2 \). For
the sake of brevity, we define (∂I₀/∂x)/I₀ as the normalized intensity gradient over the WIS
aperture under consideration. The shift of each projection spot is proportional to the
normalized intensity gradient over its corresponding aperture, and it can be reduced by
decreasing the size of the aperture. In addition, since the projection spot spreads out
symmetrically as we increase the distance H from the aperture to the CMOS image sensor, the
normalized intensity gradient induced shift is constant with respect to the distance H.
Therefore, we can also reduce the influence of the normalized intensity gradient on the
measurement of the normalized phase gradient by increasing the distance H:

\[ \theta_{IntenGrad\_x} = \frac{n a^2}{4H}\,\frac{\partial I_0/\partial x}{I_0}. \tag{7} \]
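Equation (7) gives a quick estimate of this crosstalk. The sketch below (function name ours) uses the prototype parameters a = 3 μm, H = 28 μm, n = 1.6:

```python
def intensity_gradient_crosstalk(norm_intensity_grad_per_um,
                                 a_um=3.0, H_um=28.0, n=1.6):
    """Apparent normalized phase gradient (rad) caused solely by a
    normalized intensity gradient (1/um), per Eq. (7):
    theta = (n * a^2) / (4 * H) * (dI0/dx) / I0."""
    return n * a_um ** 2 / (4 * H_um) * norm_intensity_grad_per_um

# A 1%/um normalized intensity gradient mimics ~1.3e-3 rad (1.3 mrad):
theta_err = intensity_gradient_crosstalk(0.01)
```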
3. Wavefront microscopy setup
By employing our WIS chip in place of the conventional camera in a standard bright-field
microscope, we can transform the standard microscope into a WM that is capable of
simultaneously acquiring bright-field and quantitative normalized phase gradient images. The
operation and Köhler illumination associated with a standard microscope can be found in Ref.
[19].
To demonstrate that this is indeed a viable camera-based device for converting a standard
microscope into a WM, we attached it to an Olympus BX 51 microscope via its camera port
(Fig. 3(c)). The microscope was outfitted with a standard halogen microscope light source.
We also equipped the microscope with push-in and pull-out DIC prisms and polarizers, so that
the microscope could be easily reconfigured into a DIC microscope for comparison. We used
a CMOS image sensor chip with 9.9 μm pixels (MT9V403C12STM from Micron Technology, Inc.) to record the DIC images. This allowed for a fair comparison of the image quality, as the effective image pixel size of our WIS device is 11 μm. We note that such a
microscope operating with a 40×, N.A. = 0.75 objective has a resolution of 0.49 μm. As the
microscope magnifies the image by the magnification factor, the projected image should have
a resolution of 20 μm. Since our WIS prototype has an effective image pixel size of 11 μm
(~2 times the image resolution – Nyquist criterion consideration), its use with this particular
microscope will allow the microscope to achieve a resolution of 0.55 μm (only 10% off its specified resolution). In general, our WIS prototype performs even better with higher magnification objectives. For example, microscopes based on a 60×, N.A. = 0.9 objective and a 100×, N.A. = 1.3 objective would achieve their specified resolutions of 0.41 μm and 0.28 μm, respectively, with our WIS prototype. For a 20×, N.A. = 0.5 objective based
microscope, we note that the images collected in this particular microscope configuration have
a resolution of 2.2 μm instead of the specified microscope resolution of 1.2 μm because the
image can only be sampled at a sub-Nyquist rate by the WIS prototype. This problem can be
resolved by designing the WIS prototype with a smaller aperture-to-aperture pitch.
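The sampling argument above can be sketched in a few lines. The 11 μm effective pixel size and the quoted diffraction-limited resolutions come from the text; the helper function and its name are illustrative:

```python
# Achievable resolution at the sample plane is limited both by diffraction
# and by Nyquist sampling of the magnified image with 11 um effective
# pixels: the sampling-limited resolution is two pixel widths divided by
# the magnification, and the worse (larger) of the two limits wins.
PIXEL_UM = 11.0  # effective WIS image pixel size, from the text

def achieved_resolution(magnification, diffraction_limit_um):
    nyquist_limited_um = 2 * PIXEL_UM / magnification
    return max(nyquist_limited_um, diffraction_limit_um)

print(achieved_resolution(40, 0.49))   # 40x/0.75: 0.55 um, ~10% off spec
print(achieved_resolution(60, 0.41))   # 60x/0.9: diffraction-limited 0.41 um
print(achieved_resolution(100, 0.28))  # 100x/1.3: diffraction-limited 0.28 um
```

This reproduces the trend noted above: the higher the magnification, the looser the sampling constraint, until diffraction alone sets the resolution.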
4. Results
4.1. Polystyrene microspheres
Fig. 6. Images of polystyrene microspheres. a, b, Bright-field and DIC images. c, d, e,
Intensity, normalized phase gradient images of the WM in the y and x directions. The white
arrows represent the directions of the contrast enhancement.
For our first set of experiments, we placed a sample of 20 μm polystyrene microspheres suspended in water (Polysciences, Inc., CAT# 18329) on a microscope slide, and then covered it with a cover slip. A 40× objective lens (N.A. = 0.75) and a condenser lens (N.A. = 0.5) were used during imaging. Figures 6(a) and 6(b) are images of the microspheres acquired
separately by the bright-field and DIC microscope configurations. The shear direction of DIC
imaging is in the y direction throughout the imaging experiments in this article.
Figures 6(c)-6(e) are the WM images of the microspheres: the intensity image (Fig. 6(c)), the normalized phase gradient image in the y direction (Fig. 6(d)), and the normalized phase gradient image in the x direction (Fig. 6(e)), all rendered from a single data acquisition. We can see that the intensity image of the WM is consistent with the bright-field image, and the normalized phase gradient image of the WM in the y direction is consistent with the DIC image. However, the normalized phase gradient image of the WM in the x direction contains phase information orthogonal to both the DIC image and the normalized phase gradient image of the WM in the y direction.
As discussed in Section 2.6, the normalized intensity gradient can contribute to the measurement of the normalized phase gradient by the WIS. We have developed a method to remove the component of the normalized intensity gradient from the normalized phase gradient images in this article (Figs. 6, 8-10). Here we use the WM imaging of the microspheres in the x direction as an example to illustrate this procedure. First, we use the intensity image of the WM to calculate the normalized intensity gradient in the x direction over each WIS aperture. Then, we use Eq. (7) to calculate the component of the normalized intensity gradient $\theta_{X\_IntenGrad}$ (Fig. 7(b)). Lastly, we subtract it from the raw measurement of the normalized phase gradient by the WIS, $\theta_{X\_WIS}$ (Fig. 7(a)), to obtain the corrected normalized phase gradient $\theta_{X\_PhasGrad}$ (Fig. 7(c)):

$$\theta_{X\_PhasGrad} = \theta_{X\_WIS} - \theta_{X\_IntenGrad}. \quad (8)$$
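The three-step correction procedure above can be sketched as follows. The aperture radius, aperture-to-sensor distance, and aperture pitch below are placeholder values, not the prototype's actual parameters, and the finite-difference estimate of the intensity gradient is one plausible implementation choice:

```python
import numpy as np

# Sketch of the correction in Eqs. (7)-(8): estimate the normalized
# intensity gradient over each WIS aperture from the intensity image,
# convert it to an equivalent normalized phase gradient via Eq. (7),
# and subtract it from the raw WIS measurement.
A_UM = 3.0       # aperture radius (assumed value)
H_UM = 60.0      # aperture-to-sensor distance (assumed value)
PITCH_UM = 11.0  # aperture-to-aperture pitch, one sample per aperture

def correct_phase_gradient_x(intensity, theta_x_wis):
    """Return the corrected normalized phase gradient in x, Eq. (8)."""
    # Normalized intensity gradient (dI0/dx)/I0 over each aperture,
    # from centered finite differences on the intensity image.
    dI_dx = np.gradient(intensity, PITCH_UM, axis=1)
    norm_inten_grad = dI_dx / intensity
    # Equivalent normalized phase gradient induced by the intensity
    # gradient, Eq. (7): theta = a^2/(4*H) * (dI0/dx)/I0.
    theta_x_inten = (A_UM**2 / (4 * H_UM)) * norm_inten_grad
    return theta_x_wis - theta_x_inten   # Eq. (8)
```

Applied to an intensity image that ramps up along x, the correction is negative everywhere, matching the observation below that the intensity-gradient component is largest where the intensity changes fastest (the microsphere edges).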
As we can see from the comparison of the line profiles from these three images (Fig. 7(d)), the component of the normalized intensity gradient is large at the edges of the microspheres, but moderate over most of the sample.
Fig. 7. Removing the component of the normalized intensity gradient from the normalized
phase gradient image of the WIS in the x direction. (a) Normalized phase gradient image
measured by the WIS. (b) Normalized intensity gradient induced image. (c) Corrected
normalized phase gradient image. (d) Comparison among the line profiles from the above three
images.
4.3 Unstained starfish embryo in the late gastrula stage
To demonstrate the potential utility of the WM in biological imaging, we used our prototype to image an unstained starfish embryo in the late gastrula stage. The sample was fixed in 10% formalin and sandwiched between a microscope slide and a cover slip. A 20× objective lens (N.A. = 0.5) and a condenser lens (N.A. = 0.35) were used during imaging. Figures 8(a) and 8(b) are the acquired bright-field and DIC images of the starfish embryo. Because the sample was not stained, the DIC image provided much better contrast than the bright-field image. Figures 8(c)-8(e) are the images of the WM. We can see that the intensity image of the WM is consistent with the bright-field image, and the normalized phase gradient image of the WM in the y direction is consistent with the DIC image. However, the normalized phase gradient image of the WM in the x direction contains phase information orthogonal to the DIC image and the normalized phase gradient image of the WM in the y direction.
Fig. 8. (Media 1) Images of an unstained starfish embryo in the late gastrula stage. a, b, Bright-field and DIC images. c, d, e, Intensity, normalized phase gradient images of the WM in the y and x directions. f, Phase-gradient-vector magnitude image. g, h, Normalized phase gradient images of the WM in the 135° and 45° directions. The white arrows represent the directions of the contrast enhancement. α: gastrocoel.
The phase of a light wave is a fixed scalar potential function, so our two orthogonal normalized phase gradient images, $\theta_x$ and $\theta_y$, form a complete set of the phase gradient information for the sample. They can be represented in other forms that are more amenable to the specific needs of doctors or bio-scientists. For example, the magnitude of the phase-gradient-vector, $\theta = \sqrt{\theta_x^2 + \theta_y^2}$, highlights the boundaries of the sample (Fig. 8(f)), where the phase changes dramatically. Its map can be very useful for applications such as automatic segmentation and counting of cells or other sub-cellular organelles. This map is also an