
JOURNAL OF MODERN OPTICS, 1990, VOL. 37, NO. 11, 1887-1893

Three-dimensional imaging using an optical microscope

D. F. ABBOTT, P. D. KEARNEY and K. A. NUGENT

School of Physics, The University of Melbourne, Parkville, Victoria 3052, Australia

Abstract. This paper considers three basic problems in the development of a three-dimensional wide-field optical microscope: three-dimensional image processing, characterization of the focal distribution and efficient calculation of diffracted light fields. Results are presented concerning all three matters.

1. Introduction

At the University of Melbourne we are developing a programme of investigation into three-dimensional (3D) optical microscopy. The aim of this work is to investigate and develop systems that will be able to extract the 3D orientation, distribution and size of features within, for example, a biological cell. We are pursuing basic studies into both confocal and conventional microscopy systems; in this paper, we will concentrate on the latter.

Although the development of confocal microscopy has generated considerable interest in the biological community with its ability to produce striking optical section images, these images typically require the introduction of some fluorescent dye into the system. The physiological consequences of the presence of these dyes may not be negligible. In addition, in some cases of fluorescent staining, photobleaching of the sample is a serious problem. The relatively long exposure times of a scanning microscope make it unsuitable for imaging this type of object. 3D imaging with a wide-field microscope is thus an attractive alternative.

The conventional transmission microscope may be used for 3D imaging by scanning the focus of the microscope and recording the image at each focal position [1, 2]. Computer processing may then be used to produce a 3D representation of the object [2, 3]. Given the complex nature of the interaction of the illuminating radiation with the object, the immediate aim of 3D microscopy might best be described as estimating, from data taken with a more limited instrument, the 3D image that would result from a microscope able to collect all the radiation emanating from an incoherently illuminated object. Connecting this 3D image distribution with the physical object is a higher-order problem.

Naturally, the first step in constructing such a microscope is the careful characterization of the 3D properties of the lenses. One must also establish the limitations on the image processing that will be possible. In the next three sections we will summarize the elements of our efforts in this direction.

2. Three-dimensional image processing

A number of papers have been published concerning the nature of the information content in a 3D focal scan image. In the case of an incoherently illuminated object, the space of selected spatial frequencies (the 3D aperture of the system) has a doughnut shape with zero longitudinal bandwidth at zero lateral spatial frequency. The response of the system is not uniform within this aperture and



reduces at higher lateral spatial frequencies. In previous work [2, 4] reconstruction of the 3D image has been attempted by correcting for the non-uniform response of the system within the aperture and ignoring the frequencies outside it; this approach is a direct extension of 2D image processing. However, it is not immediately clear whether this approach to image reconstruction is valid, given the rather complex shape of the 3D aperture.
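As an illustration of the deconvolution just described, the following sketch (not taken from the paper) applies a regularized inverse filter to a focal-scan stack: the 3D spectrum is divided by the system's 3D transfer function inside its support, and frequencies outside the 3D aperture are discarded. The array names, the threshold used to define the support and the use of NumPy FFTs are assumptions of this illustration.

```python
import numpy as np

def inverse_filter_3d(stack, otf, threshold=1e-3):
    """Sketch of inverse filtering within the 3D aperture.

    stack : recorded focal series, shape (z, y, x)
    otf   : 3D optical transfer function sampled on the same FFT grid
    Both inputs are hypothetical; they would come from the focal scan and
    from a measured or modelled point spread function respectively.
    """
    spectrum = np.fft.fftn(stack)
    # Support of the 3D aperture: where the transfer function is non-negligible.
    support = np.abs(otf) > threshold * np.abs(otf).max()
    corrected = np.zeros_like(spectrum)
    # Correct for the non-uniform response inside the aperture only;
    # frequencies outside the aperture are simply ignored.
    corrected[support] = spectrum[support] / otf[support]
    return np.fft.ifftn(corrected).real
```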

Clearly, deconvolution as described will produce a correct image provided that it contains only those spatial frequencies that are within the 3D aperture [5]. Sufficient information to fully reconstruct an incoherently radiating object from a 3D image volume is available provided

$$d_m \eta_m < \frac{\beta_1}{2\pi} \tan(\alpha/2),$$

where $d_m$ is the maximum diameter of the object normal to the optical axis, $\eta_m$ is the maximum spatial frequency in the object along the optical axis, $\alpha$ is the cone angle of the system point spread function, and $\beta_1$ is the first zero of $J_1(q)$.

A direct consequence of this result is that we can define a longitudinal resolution $\delta_z$ for the system,

$$\delta_z = \frac{d_m}{\tan(\alpha/2)},$$

which refers to resolution along the optic axis in the same way that the Rayleigh criterion applies to lateral resolution. Conventionally, longitudinal resolution has been defined in terms of the maximum bandwidth of the 3D aperture or, equivalently, the longitudinal extent of the focused spot. However, this definition has only rather limited relevance to practical 3D imaging. The criterion we have obtained is a strict requirement for the production of a 3D image limited only by diffraction effects.
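To make the criterion concrete, the short sketch below (an illustration, not part of the paper) evaluates the bound on the longitudinal spatial frequency implied by $d_m \eta_m < (\beta_1/2\pi)\tan(\alpha/2)$ for assumed, purely illustrative values of the object diameter and cone angle; SciPy is used only to supply the first zero of $J_1$.

```python
import numpy as np
from scipy.special import jn_zeros

def eta_max(d_m, alpha):
    """Largest longitudinal spatial frequency satisfying
    d_m * eta_m < (beta_1 / (2*pi)) * tan(alpha / 2).
    With d_m in micrometres the result is in cycles per micrometre."""
    beta_1 = jn_zeros(1, 1)[0]          # first zero of J_1, about 3.8317
    return beta_1 * np.tan(alpha / 2.0) / (2.0 * np.pi * d_m)

# Illustrative numbers only: a 2 micrometre feature and a cone angle
# corresponding to an assumed numerical aperture of 0.85 in air.
alpha = 2.0 * np.arcsin(0.85)
print(eta_max(2.0, alpha))              # recoverable longitudinal bandwidth
```

Halving the assumed feature diameter doubles the recoverable longitudinal bandwidth, which is the observation made in the next paragraph.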

An interesting consequence of this analysis is that better longitudinal resolution is possible for smaller objects than for larger ones. This is a direct consequence of the greater longitudinal bandwidth available at higher lateral spatial frequencies.

The above observation has important ramifications. In general, biological objects will be very complex and the investigator will usually be interested in the distribution of a particular feature. If that feature is small, or contains sharp edges, then it can be delineated using only high spatial frequencies. Thus, combined with preliminary processing, it should be possible to reliably investigate the 3D architecture of particular features. These matters will be the subject of further work.

3. Aberration measurement

In order to apply computational reconstruction techniques reliably to microscopic images it is crucial that the properties of the imaging system be fully characterized. Thus it is important that the aberrations in the system be determined. Further, it appears that the out-of-focus light distributions, which determine the nature of the image degradation in a set of focal scan data, are highly sensitive to the presence of aberrations. We have thus set about an attempt to determine the aberration coefficients from the out-of-focus light distribution.

Aberrations in an optical system imply a distortion, from the ideal, of the wavefront leaving the system. Aberrations may also be adequately defined using geometric optics, with no reference to phase or wavefront. For a given focal length, it is easy to


show that the distribution of deviations of the rays leaving the exit pupil may be determined from the intensity distribution in an out-of-focus spot, given geometrical optics. This observation suggests that it should also be possible to deduce aberration coefficients from out-of-focus spots in situations where diffraction is important. This proposition was first investigated using computer simulations of on-axis spots, and it was found that the spherical aberration coefficient could indeed be accurately determined.

The experiment shown in figure 1 was constructed to test the technique. The spatially filtered laser beam was incident upon a high numerical aperture microscope objective (NA = 0.85) to produce an effective point source for the lower power (NA = 0.35) objective under test. Light distributions either side of focus were recorded by a CCD camera/frame grabber combination and processed on a personal computer. The camera, rather than the objective, was scanned to record the defocused distributions. In a 3D optical microscope this has the advantage that the rest of the optical system remains constant; in addition, scanning through the magnified image requires less positional accuracy. The two sets of data thus obtained are shown in figure 2(a). These data were radially averaged and the resulting profiles were fitted in the computer with the third and fifth order spherical aberration coefficients as free parameters. It was also necessary to allow two other parameters of the system to vary, to account for the finite accuracy to which, for example, the focal length, object and image distances, or location of the principal planes of the system can be determined. The data and the fits are shown in figure 2(b). These results are preliminary only: the fits are quite good, but they were obtained independently and, while the aberration coefficients in each case agree, one of the other free parameters is inconsistent. It is thought that the differences arise from some apodisation in the objective [6] and from seventh order aberration. These effects will be considered in future work.
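The fitting step can be sketched as follows. This is a hypothetical reconstruction, not the authors' code: a scalar, rotationally symmetric pupil integral with defocus plus third and fifth order spherical aberration terms serves as the model for the radially averaged profiles, and SciPy's curve_fit supplies the least-squares fit. The wavelength, the form of the pupil integral and the parameter names (w20, a40, a60) are all assumptions of this sketch; the defocus and amplitude parameters stand in, roughly, for the two additional free parameters mentioned above.

```python
import numpy as np
from scipy.special import j0
from scipy.optimize import curve_fit

WAVELENGTH = 0.6328            # He-Ne wavelength in micrometres (assumed)
NA = 0.35                      # objective under test
K = 2.0 * np.pi / WAVELENGTH

def defocused_profile(r, w20, a40, a60, amplitude):
    """Radially averaged intensity of a defocused spot, modelled by a scalar
    pupil integral with defocus (w20) and third/fifth order spherical
    aberration (a40, a60), all expressed in waves.  A sketch only."""
    rho = np.linspace(0.0, 1.0, 512)                     # normalized pupil radius
    phase = 2.0 * np.pi * (w20 * rho**2 + a40 * rho**4 + a60 * rho**6)
    bessel = j0(K * NA * np.outer(np.atleast_1d(r), rho))
    integrand = bessel * np.exp(1j * phase) * rho
    # Trapezoidal rule over the pupil radius.
    field = 0.5 * (integrand[:, :-1] + integrand[:, 1:]).sum(axis=1) * (rho[1] - rho[0])
    return amplitude * np.abs(field) ** 2

# Usage with a measured, radially averaged profile (hypothetical data):
# popt, pcov = curve_fit(defocused_profile, radii_um, profile,
#                        p0=(3.0, 0.2, 0.0, 1.0))
```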

4. Evaluation of diffraction integrals

Accurate evaluation of the 3D imaging performance of a microscope requires a detailed knowledge of the light distribution at the focus. The calculation of focal distributions using vector diffraction theory is extremely time consuming. Instead, scalar diffraction theory is usually applied to such problems; Fresnel diffraction theory is commonly employed.

At both high and low numerical apertures, differences have been observed between experimentally measured focal distributions and those predicted by the Fresnel theory. Under these conditions, it has been found that Kirchhoff diffraction theory gives the best accuracy [7]. (The Fresnel theory is derived from the Kirchhoff diffraction formula under a set of approximations.)

[Figure 1 schematic: He-Ne laser, spatial filter, NA 0.85 and NA 0.35 objective lenses, cover slip, and a CCD camera displaced along z.]

Figure 1. Optical system used to characterize a NA = 0.35 microscope objective. Notice the camera is moved to obtain defocused images. The in-focus point is at z = 0.


[Figure 2: two panels of radial intensity profiles; horizontal axis, Pixel No. (0 to 32); vertical axis ticks 0.00 to 0.15.]

Figure 2. (a) Defocused 2D point spread functions measured using the apparatus of figure 1. (b) The dotted lines are the best fits obtained to the measured distributions. The fits for the different values of z were calculated independently, and are not entirely consistent.

We aim to develop fast numerical algorithms for the calculation of focal distributions to higher accuracy than is allowed under the Fresnel approach. The first step is to reduce the time required for evaluation of the exact Kirchhoff diffraction integral. The resulting code may then be used to check the accuracy of alternative techniques.


We now describe a method, based on a change in coordinate system and integration variables, which allows more rapid evaluation of the Kirchhoff integral to any specified accuracy.

Two diffraction geometries exist: one where light diverging from a source passes through an aperture, the other where an aperture limits a beam converging towards a focus (see figure 3). The introduction of a sign convention for the various displacements allows a single formula to be applied in both cases. Kirchhoff diffraction theory gives the following expression for the field at the point $P_b$:

$$u(x_b, y_b, z_b) = \int_{\mathrm{Aperture}} g(x, y) \exp\{ik[R_b(x, y) - R_a(x, y)]\}\, \mathrm{d}x\, \mathrm{d}y, \qquad (1)$$

where

$$g(x, y) = \frac{1}{2 R_a R_b} \left[ \frac{1}{2\pi} \left( \frac{z_b}{R_b} - \frac{z_a}{R_a} \right) - \frac{\mathrm{i}}{\lambda} \left( \frac{z_b}{R_b} + \frac{z_a}{R_a} \right) \right],$$

$$R_l = [(x - x_l)^2 + (y - y_l)^2 + z_l^2]^{1/2}, \qquad l = a, b.$$

Figure 3. (a) Diffraction geometry for a diverging incident beam. (b) Alternative geometry, for a converging incident beam.

The essential problem with the numerical quadrature of equation (1) is that the complex exponential tends to oscillate rapidly with a change in an integration variable. Further, the integral is difficult to simplify analytically due to the nonlinearity of the exponent. Our aim is to apply a coordinate transformation and perform a change of integration variables in order to linearize the exponent.

We first construct a new coordinate system in which the behaviour of the complex exponent, under a variation in one of the coordinates, takes a simpler form.

The natural coordinate system to use is one whose origin lies at the point where the line through the points $P_a$ and $P_b$ intercepts the aperture plane. It is convenient then to rotate the system so that the projection of this line lies along the x axis. This coordinate system forces points of stationary phase to fall at the origin or at infinity.
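A minimal sketch of this construction, with hypothetical function and variable names, is given below: it locates the new origin where the line through $P_a$ and $P_b$ meets the aperture plane z = 0, and returns the in-plane rotation that aligns the projection of that line with the x axis. It assumes the line is not parallel to the aperture plane.

```python
import numpy as np

def aperture_frame(p_a, p_b):
    """New coordinate frame for the aperture plane z = 0: origin at the
    intersection of the line through P_a and P_b with that plane, x axis
    along the projection of the line.  Points are (x, y, z) triples."""
    p_a, p_b = np.asarray(p_a, float), np.asarray(p_b, float)
    t = p_a[2] / (p_a[2] - p_b[2])            # line parameter where z = 0
    origin = (p_a + t * (p_b - p_a))[:2]      # intersection with the aperture plane
    phi = np.arctan2(p_b[1] - p_a[1], p_b[0] - p_a[0])
    rot = np.array([[ np.cos(phi), np.sin(phi)],
                    [-np.sin(phi), np.cos(phi)]])
    return origin, rot

def to_new_coords(x, y, origin, rot):
    """Map an aperture-plane point into the shifted, rotated system."""
    return rot @ (np.array([x, y]) - origin)
```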

We now represent the quantity $R_b - R_a$ by the symbol $\Delta$. The choice of coordinate system guarantees that the value of $\partial\Delta/\partial R$ is finite except, possibly, at the origin. This allows the polar form of (1),

$$u(x_b, y_b, z_b) = \int_{\mathrm{Aperture}} g(R, \theta) \exp\{ik[R_b(R, \theta) - R_a(R, \theta)]\}\, R\, \mathrm{d}R\, \mathrm{d}\theta, \qquad (2)$$

to be cast into the form

$$u(x_b, y_b, z_b) = \int_0^{2\pi} \int_{\Delta_1(\theta)}^{\Delta_2(\theta)} g(R, \theta)\, \frac{\partial R}{\partial \Delta}\, \exp(ik\Delta)\, R\, \mathrm{d}\Delta\, \mathrm{d}\theta. \qquad (3)$$

The possible pole introduced at the origin is countered by the zero provided by R. The main disadvantage in working with this form of the integral is the additional computation required to determine the limits of integration. For apertures with convex boundaries, as is usually the case, this does not pose a significant problem. (For example, in the case of a circular aperture, the limits are just the two roots of a quadratic equation.)

The inner radial integral is then of the form

$$I(\theta) = \int_{\Delta_1}^{\Delta_2} f(\Delta) \exp(ik\Delta)\, \mathrm{d}\Delta. \qquad (4)$$

A range of techniques has been developed for the numerical quadrature of the above class of integral. One recent method, based on the fast Fourier transform, is described by Press and Teukolsky [8].

The outer angular integral is also amenable to an FFT approach, since the integrand is periodic over the interval of integration.
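A sketch of the point being made: for a smooth periodic integrand the uniformly sampled trapezoidal rule (equivalently, the zeroth coefficient of an FFT of the samples) converges spectrally fast, so the outer integral over $\theta$ can be handled on a fixed grid. The function name and sample count below are illustrative.

```python
import numpy as np

def periodic_integral(f, n=256):
    """Integral over one period [0, 2*pi) of a smooth periodic function.
    With uniform samples this equals (2*pi/n) * FFT(samples)[0], i.e. the
    trapezoidal rule, which is spectrally accurate for periodic integrands."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    samples = np.asarray([f(t) for t in theta])
    return 2.0 * np.pi * np.fft.fft(samples)[0] / n
```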

In instances where accuracy is the prime consideration we may apply adaptive quadrature schemes, which return an approximation to the integral satisfying user-provided error limits.

The QUADPACK [9] integration package contains routines tailored to a wide variety of 1D integrals. Two routines of specific interest are QAGS and QFOUR. QAGS is a general purpose integrator which employs an adaptive Gauss-Kronrod scheme to provide an integral estimate satisfying user-provided absolute and relative accuracy bounds; this routine is most suited to evaluation of the angular integral. QFOUR is an adaptive composite Gauss-Kronrod/Clenshaw-Curtis routine designed to handle oscillatory integrands similar to that in equation (4).
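A similar nested quadrature can be reproduced today through SciPy, whose integrate.quad wraps the QUADPACK library: the default finite-interval call uses the adaptive Gauss-Kronrod (QAGS-type) routine, while weight='cos'/'sin' invokes the oscillatory (QAWO-type) routine that plays the role QFOUR plays here. The sketch below is an illustration under those assumptions, not the authors' implementation; the transformed integrand g and the limit function are to be supplied by the caller and their names are hypothetical.

```python
import numpy as np
from scipy.integrate import quad

def oscillatory_integral(f, d1, d2, k, **kw):
    """int_{d1}^{d2} f(D) exp(i k D) dD for a complex, slowly varying f,
    built from four real QUADPACK oscillatory quadratures."""
    re = lambda d: f(d).real
    im = lambda d: f(d).imag
    rc, _ = quad(re, d1, d2, weight='cos', wvar=k, **kw)
    rs, _ = quad(re, d1, d2, weight='sin', wvar=k, **kw)
    ic, _ = quad(im, d1, d2, weight='cos', wvar=k, **kw)
    isn, _ = quad(im, d1, d2, weight='sin', wvar=k, **kw)
    return (rc - isn) + 1j * (rs + ic)

def diffracted_field(g, limits, k, epsabs=1e-10, epsrel=1e-8):
    """Outer angular integral of equation (3) with the adaptive Gauss-Kronrod
    rule.  `g(delta, theta)` is the transformed integrand (including the
    R dR/dDelta factor) and `limits(theta)` returns (Delta_1, Delta_2)."""
    def inner(theta, part):
        d1, d2 = limits(theta)
        val = oscillatory_integral(lambda d: g(d, theta), d1, d2, k)
        return val.real if part == 0 else val.imag

    re, _ = quad(inner, 0.0, 2.0 * np.pi, args=(0,), epsabs=epsabs, epsrel=epsrel)
    im, _ = quad(inner, 0.0, 2.0 * np.pi, args=(1,), epsabs=epsabs, epsrel=epsrel)
    return re + 1j * im
```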

The above technique has been tested, using the QUADPACK routines, under a wide variety of diffraction geometries. Typically an increase in calculation speed of two orders of magnitude is obtained over direct numerical integration of equation (2), for the same accuracy.


This approach allows us to calculate the diffracted field at the focus of a lens for an unpolarized beam very rapidly on a personal computer. Thus we may test the characterization of our optics without recourse to paraxial optics. The technique further provides a reliable check on the accuracy of alternative schemes for determining focal distributions.

5. Conclusion

We have presented some results from a programme of investigation into three-dimensional optical microscopy. In this programme we are concentrating on the careful characterization of the instrumentation and the analysis procedures. In the future we will use the concepts and techniques that have been developed to construct an operational three-dimensional microscope.

Acknowledgments

This work was supported by a grant from the Australian Research Council and by a donation from HBH Technological Industries Pty Ltd. P.D.K. acknowledges support from the Matthaei bequest and D.F.A. acknowledges support from a Commonwealth Postgraduate Research Award.

References

[1] AGARD, D. A., and SEDAT, J., 1983, Nature, 302, 676.
[2] ERHARDT, A., ZINSER, G., KOMITOWSKI, D., and BILLE, J., 1985, Appl. Optics, 24, 194.
[3] AGARD, D. A., 1984, Ann. Rev. Biophys. Bioengng, 13, 191.
[4] STREIBL, N., 1985, J. Opt. Soc. Am. A, 2, 121.
[5] NUGENT, K. A., 1988, Optics Commun., 69, 15.
[6] SHEPPARD, C. J. R., 1988, Appl. Optics, 27, 4782.
[7] STAMNES, J. J., 1986, Waves in Focal Regions (Bristol: Adam Hilger).
[8] PRESS, W. H., and TEUKOLSKY, S. A., 1989, Computers in Physics, Jan/Feb, 91.
[9] PIESSENS, R., DE DONCKER-KAPENGA, E., UBERHUBER, C. W., and KAHANER, D. K., 1983, QUADPACK - A Subroutine Package for Automatic Integration. Springer Series in Computational Mathematics, Vol. 1 (Berlin: Springer).