each pixel of the sensor. Real experimental parameters, such as dye concentration, numerical aperture (NA), and magnification, are all considered to determine this photon throughput.
In Parts 3 and 4 of the series, the analysis of radiometric throughput will be combined with an analysis of system noise to generate signal-to-noise profiles as a function of exposure time to help select which sensor would be best for a particular application.
Detection Arm
Any fluorescence imaging system that excites a sample at one wavelength and detects emission at another, longer wavelength comprises an illumination arm and a detection arm. In many instances, both optical arms share some optical components, such as the objective lens in an epi-fluorescence imaging system. The optical parameters of the illumination arm affect the amount of excitation light incident on the sample, and the optical parameters of the detection arm affect the imaging resolution and the final signal-to-noise ratio of the detected image.
Figure 1 shows a schematic of an epi-fluorescence microscope optical configuration. The illumination beam (blue) is reflected by the dichroic filter toward the objective lens and emerges from the objective lens to excite the sample. In this example, the objective lens is telecentric in the sample space, which means that the magnification of the object stays relatively constant with defocus. The sample, which has been excited by optical illumination, emits light (red) which is collected by the objective lens, filtered in collimated space with a bandpass filter, and re-imaged onto a sensor using a tube lens. Clearly, the objective lens is
BIOGRAPHY
An accomplished entrepreneur, Dr Yashvinder Sabharwal has almost 20 years of experience in the application of optical technology to biological and medical imaging system development. He has a BS in optics from the University of Rochester and MS and PhD in optical sciences from the University of Arizona. He was a co-founder of Optical Insights LLC, where he was responsible for the development of imaging platforms for fluorescence microscopy, high-throughput screening, and high-content screening. Following the sale of Optical Insights, he managed product marketing for scientific digital cameras at Photometrics. Since 2006, Dr Sabharwal has worked as a technical and business consultant with various companies and entrepreneurial organizations. In 2010, he moved to Austin and is currently the COO for Xeris Pharmaceuticals.
ABSTRACT
This article develops a methodology for understanding the balance between imaging and radiometric properties in microscopy systems where digital sensors (CCD, EMCCD, and scientific-grade CMOS) are utilized. The methodology demonstrates why smaller pixel cameras provide better sampling at low magnifications and why these cameras are less efficient at collecting light than medium and larger pixel cameras. The article also explores how different optical configurations can improve light throughput while maintaining adequate sampling with smaller pixel cameras.
KEYWORDS
light microscopy, digital cameras, CCD, EMCCD, CMOS, imaging, life sciences
AUTHOR DETAILS
Yashvinder Sabharwal, Solexis Advisors LLC, 6103 Cherrylawn Circle, Austin, TX 78723, USA
Mob: +1 512 534 8340
Email: [email protected]
James Joubert, Applications Scientist, Photometrics and QImaging, 3440 East Britannia Drive, Tucson, AZ 85706-5006, USA
Tel: +1 520 889 9933
Email: [email protected]
Microscopy and Analysis July 2011
DIGITAL CAMERAS PART 2
INTRODUCTION
CCD and EMCCD technologies have become the standard in bio-imaging over the years, and each has been well characterized in numerous applications. More recently, the scientific-grade CMOS sensor has been developed for scientific bio-imaging and is finding its way into different bio-imaging applications. CMOS technology has had issues in the past, but recent improvements have made it a viable option across the array of current imaging applications. This four-part series aims to compare these major camera technologies and discuss a methodology for selecting the appropriate sensor technology for a given application.
Part 1 of this series [1] introduced the different camera technologies, describing how they function and some of the key differences between them. In Part 2, we now compare the performance of typical CCD, EMCCD, and CMOS sensors, evaluating both imaging parameters and radiometric (light throughput) parameters. Although comparative in nature, the purpose of this article and this series is not to state that one technology is better than the others across the spectrum of bio-imaging applications, but rather to discuss the critical optical parameters that should be considered when selecting a camera for a particular application.
This article attempts to simplify the analysis by first separating the metrics into imaging/sampling and radiometric, i.e. light throughput, considerations. The discussion begins by introducing the detection arm of a fluorescence imaging system and evaluating the standard pixel sizes of the three sensor technologies with respect to resolution and sampling. Complementing this analysis is a discussion of the methodology for calculating the number of photons that will be incident at
Digital Camera Technologies for Scientific Bio-Imaging. Part 2: Sampling and Signal
Yashvinder Sabharwal,1 James Joubert2 and Deepak Sharma2
1. Solexis Advisors LLC, Austin, TX, USA
2. Photometrics and QImaging, Tucson, AZ, USA
Figure 1: Schematic of the optical configuration of an epi-fluorescence microscope.
the most important element in this analysis, and the critical parameters are the numerical aperture (NA) and the magnification (M) of the objective lens.
IMAGING ANALYSIS
Spatial Resolution and Sampling
The primary metric of concern in epi-illumination imaging systems is the lateral resolution. Lateral resolution refers to the resolution in the plane of the sample being imaged. As the value of the resolution parameter decreases, smaller features can be resolved by the imaging system. In an epi-illumination system, lateral resolution at the sample is a function of two parameters: the wavelength λ of the emitted light and the NA of the objective lens. Since the NA incorporates the index of refraction of the immersion medium of the objective lens, we do not have to consider this variable separately. The most familiar resolution limits are the Abbe limit (Ra) and the Rayleigh limit (Rr). Both of these resolution limits are based on optical diffraction theory, and they differ slightly in their proportionality constant. These limits assume that the detection arm imaging system is free of aberrations. While we know this not to be true, it provides a lower limit, and it does simplify the calculations for this analysis:
At object → Ra = 0.5 λ / NA (1)
At object → Rr = 0.61 λ / NA (2)
If these resolution limits are multiplied by the magnification (M) of the objective, one can calculate the minimum resolvable feature size at the location of the image where the CCD, EMCCD, or scientific-grade CMOS sensor is placed. Equation 3 shows the resolution at the detector Rd using the Rayleigh limit:
At detector → Rd = 0.61 M λ / NA (3)
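As a numerical illustration of Equations 2 and 3, the sketch below computes the Rayleigh limit at the sample and at the detector for a few representative objectives. The 550 nm emission wavelength and the objective list are assumed example values, not figures taken from the article's tables.

```python
# Illustrative sketch of Equations 2 and 3 (Rayleigh limit at the sample
# and at the detector). Wavelength and objectives are assumed examples.

EMISSION_WAVELENGTH_NM = 550.0  # assumed green emission

def rayleigh_at_sample_nm(wavelength_nm: float, na: float) -> float:
    """Rr = 0.61 * lambda / NA, in nanometres at the sample plane."""
    return 0.61 * wavelength_nm / na

def rayleigh_at_detector_um(wavelength_nm: float, na: float, mag: float) -> float:
    """Rd = 0.61 * M * lambda / NA, in micrometres at the sensor plane."""
    return mag * rayleigh_at_sample_nm(wavelength_nm, na) / 1000.0

for mag, na in [(10, 0.45), (40, 1.3), (60, 1.3), (100, 1.4)]:
    rs = rayleigh_at_sample_nm(EMISSION_WAVELENGTH_NM, na)
    rd = rayleigh_at_detector_um(EMISSION_WAVELENGTH_NM, na, mag)
    print(f"{mag}x / NA {na}: Rr = {rs:.0f} nm, Rd = {rd:.1f} um")
```

Note how Rd grows with magnification even when NA is fixed, which is what drives the sampling comparisons in the tables that follow.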
Figure 2 provides a graphical representation of the resolution limit, in this case the Rayleigh limit, where two adjacent spots are imaged as two Airy disks (due to diffraction) and are resolved when the center of one Airy disk overlaps the first minimum of the adjacent Airy disk.
Any image sensing technology, whether it is a CCD or film or your eye, is comprised of discrete imaging elements. In the case of CCDs, you have pixels; with film you have grains of silver halide; and your eye has rods and cones. When an image is recorded using a discrete imaging system, the image is automatically sampled. If the parameters of the discrete imaging system are not properly matched to the resolution limit of your imaging system, you can introduce errors into the sampled image known as aliasing. Aliasing leads to errors in intensity distribution, so care must be taken to ensure the validity of any quantitative conclusions being drawn from images not correctly sampled. An example of an aliasing artifact is shown in Figure 3.
Figure 4 shows the combined intensity profile of the two Airy disk patterns located such that they are separated by the resolution limit. In order to detect this profile using a scientific camera, you would need a pixel at each peak and one at the minimum between the two peaks. The minimum is halfway between the two peaks, leading to the requirement that your pixel size can be no larger than one half of the minimum resolution.
If your pixel size is smaller, such that you have three or four pixels per resolution feature, then you will be oversampling and will eliminate errors due to aliasing. Table 1 provides the resolution limit for various microscope objectives and shows how typical cameras compare from a sampling perspective based on their pixel size. The pixel sizes have been divided into three categories: small (3.5 µm), medium (6.5 µm), and large (14 µm). In terms of camera comparison, the small pixel corresponds most closely to scientific-grade CMOS, the medium pixel matches scientific-grade CMOS and CCD most closely, and the large pixel best represents EMCCD. The pink highlighted cells do not meet the minimum sampling requirements based on the Rayleigh criterion. Green highlighted cells oversample sufficiently to suppress errors due to aliasing.
By using a 0.5× coupler in tandem with the various objectives, it is possible to maintain a sufficient sampling frequency with smaller pixels while allowing a higher throughput of light. Table 2 provides the resolution limit comparison when using the various microscope objectives combined with a 0.5× coupler. The red highlighted cells do not meet the sampling requirements. Green highlighted cells oversample sufficiently to suppress errors due to aliasing.
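The sampling check behind Tables 1 and 2 can be sketched in a few lines: the number of pixels spanning one Rayleigh unit at the sensor, with an optional coupler magnification. The 550 nm wavelength and the 60×/1.3 NA objective below are assumed for illustration, not the article's exact table inputs.

```python
# Sketch of the sampling factors in Tables 1 and 2. Wavelength and the
# 60x / 1.3 NA objective are assumed example values.

def sampling_factor(pixel_um, wavelength_nm, na, mag, coupler=1.0):
    """Pixels per Rayleigh unit at the detector; >= 2 meets the Nyquist
    criterion, and >= 3 oversamples enough to suppress aliasing errors."""
    rd_um = 0.61 * mag * coupler * wavelength_nm / (na * 1000.0)
    return rd_um / pixel_um

for pixel in (3.5, 6.5, 14.0):
    for coupler in (1.0, 0.5):
        factor = sampling_factor(pixel, 550.0, 1.3, 60, coupler)
        verdict = "meets Nyquist" if factor >= 2.0 else "undersampled"
        print(f"{pixel} um pixel, {coupler}x coupler: {factor:.2f} ({verdict})")
```

Under these assumed values the 3.5 µm pixel still satisfies Nyquist with the 0.5× coupler, which is the effect the coupler tables illustrate.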
Depth of Field
The depth of field (Df) of a microscope is the distance that the sample can be shifted along the optical axis and still remain in focus. Conceptually, this corresponds to the distance that the sample can be defocused with the resulting blur remaining within the resolution limit defined in the previous section. The Df is proportional to the index of the sample and the wavelength of light, and it is inversely proportional to the square of the NA of the objective:
Df = 2 n λ / NA² (4)
For most widefield microscopy, the fluorescent emission point sources of interest are within a portion of the depth of field. As such, other fluorescence signal not focused upon is considered background and not part of the true signal of interest. The equations described here are consistent with such use-case scenarios. For thicker samples, where a significant amount of fluorescence traverses the entire depth of field and confocal techniques must instead be employed, another analytical approach and a different set of equations should be utilized.
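Equation 4 translates directly into code. The refractive indices and wavelength below are assumed illustrative values (oil immersion and air objectives with green emission), not prescriptions from the article.

```python
# Equation 4 as code. The sample values (oil immersion n = 1.515, air n = 1.0,
# 550 nm emission) are assumed for illustration.

def depth_of_field_um(n: float, wavelength_nm: float, na: float) -> float:
    """Df = 2 * n * lambda / NA^2, returned in micrometres."""
    return 2.0 * n * wavelength_nm / (na ** 2) / 1000.0

print(f"High-NA oil objective: {depth_of_field_um(1.515, 550.0, 1.4):.2f} um")
print(f"Low-NA air objective:  {depth_of_field_um(1.0, 550.0, 0.45):.2f} um")
```

The inverse-square dependence on NA is the key behavior: doubling the NA cuts the depth of field by a factor of four.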
RADIOMETRIC ANALYSIS
A critical parameter for any imaging experiment is the signal-to-noise ratio (SNR). As the SNR of an image increases, the quality of the acquired image will also increase. As a first step in the determination of the SNR, this section of the article provides a methodology for estimating the expected signal in the form of photons detected by the sensor per pixel (Φp). Discussion of noise and SNR for the three camera technologies will be set forth in Part 3 of this series.
Optical Throughput
Figure 5 depicts a simple optical system as would be expected in the detection arm of a microscope. The source is emitting photons,
Figure 2: Adjacent spots on a sensor separated by the Rayleigh limit.
Figure 3: Aliasing of a brick wall by undersampling. Credits: This picture was provided by Cburnett on Wikipedia (http://upload.wikimedia.org/wikipedia/commons/f/fb/Moire_pattern_of_bricks_small.jpg).
which are collected by the objective lens. The total number of photons emitted by the source and collected by the objective lens is described by Equation 5:
Φ = L As Ωo (5)
where L is the radiance of the source; As is the area of the source; and Ωo is the solid angle at the object. The solid angle describes the 3D cone into which light is emitted and collected by the objective lens. Equation 6 shows the relationship between solid angle and the NA of the objective lens:
Ωo = π NA² (6)
The product of the source area and the solid angle is known as the optical throughput. Based on geometrical considerations, it can be shown that the optical throughput (G) at the detector is equivalent to the optical throughput at the source, as shown in Figure 5 and Equation 7. The same analysis can be done at the pixel level of the sensor to calculate the expected signal at each pixel of the sensor (Φp).
Table 3 shows how the optical throughput changes with the parameters of the microscope objective. It is important to note that as the NA increases, the throughput increases. However, with commercially available objective lenses, increases in NA are often accompanied by increases in magnification, which reduces throughput:
G = As Ωo = Ad Ωi (7)
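A short sketch can verify the throughput invariance in Equation 7: for a magnification M, the image-side area grows as M² while the image-space NA shrinks as NA/M, so the product A·πNA² is the same on both sides. The 60×/1.3 NA values and the 1 µm² source patch below are assumptions for illustration.

```python
import math

# Sketch of Equations 6 and 7: throughput G = A * Omega with Omega = pi * NA^2
# is conserved between object and image. Objective values are assumed.

def throughput(area_um2: float, na: float) -> float:
    """G = A * pi * NA^2 (Equations 6 and 7)."""
    return area_um2 * math.pi * na ** 2

M = 60.0
na_object = 1.3
na_image = na_object / M           # image-space NA shrinks by the magnification
area_source = 1.0                  # a 1 um^2 patch at the sample (assumed)
area_image = area_source * M ** 2  # its image covers M^2 times the area

g_source = throughput(area_source, na_object)
g_image = throughput(area_image, na_image)
print(f"G at object: {g_source:.4f}, G at image: {g_image:.4f}")  # equal
```

This is why higher magnification at fixed NA costs signal per pixel: the same throughput is spread over an M²-times-larger image area.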
Detected Photons
The only unknown in Equation 5 is the radiance (L) of the fluorescent source. The radiance is defined as the photons emitted per second per unit area per unit solid angle. Unlike a light source, where radiometric parameters like lumens are measured and known, the radiance of the fluorescent sample has to be calculated from other known parameters. In fluorescence microscopy, the sample is comprised of molecules emitting a certain number of photon counts per second per molecule (cpsm). This parameter of the fluorescent sample is known as the molecular brightness ε [5], which is a function of the absorption cross-section σ at the excitation wavelength, the quantum yield η of the fluorophore, and the excitation intensity I. The product of the molecular brightness and the dye concentration C provides a metric for the number of photons emitted per second per unit area (assuming a thin sample). Dividing this value by the solid angle of emission gives an estimate of the radiance. Each fluorescent molecule emits photons into a full spherical solid angle, and the solid angle of a sphere is 4π. When we consider how many photons are being collected by the optical system, we must consider the solid angle of the objective lens, which is related to its NA. Bringing all this together, we can get an expression for the expected signal per pixel per second based on known parameters of the sample
Figure 4: Sampling requirement of two pixels per minimum resolution unit (Rayleigh criterion).
Figure 5: Basic schematic of radiometric parameters in an optical imaging system.
Table 2: Sampling factors for different cameras with 0.5× coupler.
and the detection arm of the microscope:
ε = σ(λex) η Iex (8)
L = ε C / 4π (9)
Φp = L Asp Ωo = ε C NA² Adp / (4 M²) (10)

where Asp = Adp / M² is the area of a single detector pixel projected back to the sample, and Adp is the pixel area at the detector.
Since this is an epi-fluorescence imaging system, we cannot forget that as the NA of the objective lens changes, the intensity of the excitation light incident on the sample will also change. Equation 10 shows that the detected number of photons changes as the square of the NA. Similarly, the incident number of photons and, hence, the molecular brightness will change as the square of the NA. Therefore, for epi-fluorescence systems, the number of photons reaching the sensor is ultimately proportional to the fourth power of the NA of the objective.
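This NA⁴ scaling can be made concrete with a sketch based on Equation 10: holding the sample terms fixed, the detected photons per pixel per second scale as NA⁴ · Apixel / M². The objective list is illustrative, and the outputs are relative values, not the entries in the article's tables.

```python
# Hedged sketch of Equation 10 combined with the NA^2 excitation scaling:
# detected photons per pixel per second go as NA^4 * A_pixel / M^2, up to
# constant factors set by the sample. Objectives listed are assumed examples.

def relative_signal(pixel_um, na, mag, coupler=1.0):
    """Relative photons/pixel/s: NA^4 * (pixel / (M * coupler))^2, arbitrary units."""
    return na ** 4 * (pixel_um / (mag * coupler)) ** 2

reference = relative_signal(6.5, 1.3, 60)  # medium pixel on a 60x / 1.3 NA objective
for na, mag in [(0.45, 10), (1.3, 40), (1.3, 60), (1.4, 100)]:
    ratio = relative_signal(6.5, na, mag) / reference
    print(f"{mag}x / NA {na}: {ratio:.2f}x the 60x / 1.3 NA signal")
```

Note the competing effects: raising NA at fixed magnification always helps, while raising magnification at fixed NA always hurts, exactly as the surrounding text describes.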
Table 4 shows the number of photons collected per pixel per second for typical pixel sizes of various commercially available sensors. The calculations assume a sample with a molecular brightness of 3300 cpsm (10× objective) and a concentration of 200 nM. Clearly, those sensors with larger pixels trade lower resolution for higher collection efficiency. Table 4 shows that, for a given magnification, increases in NA lead to increases in photons collected because the solid angle increases for both illumination and detection. However, as the magnification of the objective increases, the total detected photons will not increase as quickly, because the increase in solid angle with increasing NA is offset by a reduction in the effective pixel area at the sample.
Table 5 shows how the detected photons per pixel increase when a 0.5× coupler is introduced into the detection arm. The critical point of Table 5 is that for the 3.5 µm pixel camera, a 0.5× coupler can be introduced to significantly increase the throughput while maintaining adequate sampling and not sacrificing resolution.
Further comparison of the 3.5 µm pixel camera using a 0.5× coupler with a 6.5 µm camera and no coupler shows that the 3.5 µm/0.5× coupler combination is preferable. Table 6 compares the two configurations for those four objectives where the sampling rate is sufficient to avoid aliasing. From this table, we can conclude that the 3.5 µm pixel camera with a 0.5× coupler will collect around 10% more light while maintaining adequate sampling. However, the addition of a coupler is not always desired, as it can impart unwanted aberrations to the image, reducing resolution. Alternatively, a high NA objective with a lower magnification could be used in conjunction with a small pixel camera to obtain a high signal level without undersampling. For example, using a 3.5 µm pixel camera with a 40× 1.3 NA objective would provide Nyquist sampling and comparable or even greater photons per pixel than detecting the sample with a 60× 1.3 NA objective and a 6.5 µm pixel sensor. Hence, using a 40× objective may be better than adding an optical element such as a
Table 3: Optical throughput (thin sample).
Table 4: Radiometric parameters (no coupler) - Detected photons per second.
Table 5: Radiometric parameters (0.5× coupler) - Detected photons per second.
coupler into the optical path.
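The trade-off summarized in Table 6 can be sketched with the same scaling: for a fixed objective, the relative signal goes as (pixel / (M · coupler))², while the sampling factor must stay at or above two. Under the assumed values below (60×/1.3 NA, 550 nm), this simplified geometry puts the small-pixel-plus-coupler gain in the 10-20% range, broadly consistent with the article's roughly 10% figure; it omits sample-side factors, so it will not reproduce the table entries exactly.

```python
# Sketch of the Table 6 trade-off under simplified geometric scaling.
# The 60x / 1.3 NA objective and 550 nm emission are assumed values.

WAVELENGTH_NM = 550.0
MAG, NA = 60.0, 1.3

def config(pixel_um, coupler):
    """Return (pixels per Rayleigh unit, relative photons/pixel/s)."""
    rd_um = 0.61 * MAG * coupler * WAVELENGTH_NM / (NA * 1000.0)
    sampling = rd_um / pixel_um
    signal = (pixel_um / (MAG * coupler)) ** 2
    return sampling, signal

s_small, sig_small = config(3.5, 0.5)  # small pixel with 0.5x coupler
s_med, sig_med = config(6.5, 1.0)      # medium pixel, no coupler
print(f"3.5 um + 0.5x coupler: sampling {s_small:.2f}; 6.5 um: sampling {s_med:.2f}")
print(f"relative signal gain: {sig_small / sig_med:.2f}x")
```

Both configurations keep the sampling factor above two, so the small-pixel camera gains signal without giving up Nyquist sampling, which is the conclusion the article draws from Table 6.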
CONCLUSIONS
This article develops a methodology for understanding the balance between image resolution, sampling, and optimal light throughput in microscopy systems where digital imaging technologies (CCD, EMCCD, and scientific-grade CMOS) are utilized.
Realistic experimental conditions, such as molecular brightness, dye concentrations, NA, and magnification, were used to compare Nyquist sampling and light throughput for the three camera technologies at different pixel sizes. The comparison showed that while smaller pixel cameras were able to provide better sampling at low magnifications, they were also less efficient at collecting light than the medium pixel scientific-grade CMOS and CCDs and the larger pixel EMCCDs.
Use of a 0.5× coupler was shown to increase the signal in smaller pixel cameras while still maintaining adequate sampling. However, the addition of another optical element in the detection arm is not always desirable because of additional light loss and the introduction of optical aberrations that reduce resolution. Another promising possibility, which uses a small pixel camera with a lower magnification, high NA objective to improve light collection while sampling adequately, has also been proposed.
The next article in this series will discuss noise sources in digital imaging technologies and compare and contrast them specifically for CCD, scientific-grade CMOS, and EMCCD cameras.
Table 6: 6.5 µm pixel versus 3.5 µm pixel with 0.5× coupler.
REFERENCES
1. Sabharwal, Y. Digital Camera Technologies for Scientific Bio-Imaging. Part 1: The Sensors. Microscopy and Analysis 25(4):S5-S8 (AM), 2011.
3. www.olympusmicro.com/primer/anatomy/objectives.html
4. http://micro.magnet.fsu.edu/primer/anatomy/imagebrightness.html
5. Muller, J. D. Cumulant analysis in fluorescence fluctuation spectroscopy. Biophys. J. 86:3981-3992, 2004.
6. http://zeiss-campus.magnet.fsu.edu/articles/lightsources/
©2011 John Wiley & Sons, Ltd