Basics of Light Microscopy & Imaging



Basics of Light Microscopy & Imaging

A special edition of Imaging & Microscopy: Research, Development, Production

www.gitverlag.com


Olympus Microscopy

For more information, contact:

Olympus Life and Material Science Europa GmbH
Phone: +49 402 377354 26
E-mail: [email protected]

If you're used to other stereo microscopes, you'd better be prepared before you start working with the new Olympus SZX2 range. The peerless resolution and exceptional light efficiency enable you to see fluorescence with brightness and brilliance like never before, while providing optimum protection for your specimens at the same time. All this makes Olympus SZX2 stereo microscopes the perfect tools for precise fluorescence analysis on fixed and living samples. Importantly, the innovations present in the SZX2 series go far beyond fluorescence. The optimised 3-D effect offers superior conditions for specimen preparation and precise manipulation. The unique ultra-flat LED light base provides bright and even illumination of the entire field of view at a constant colour temperature. Moreover, the Comfort View eyepieces and new trinocular observation tubes make sure your work is as relaxed as it is successful. The future looks bright with Olympus.

GIVE YOUR EYES TIME TO ADJUST TO THE HIGH LIGHT EFFICIENCY. OLYMPUS SZX2.


Dear Reader

A Huge Number of Techniques in a Relatively Short Space

Although microscopes are becoming easier and easier to use, it still remains important to have an appreciation of the fundamental principles that determine the resolution and contrast seen in microscope images. This series of articles, by authors from Olympus, aims to provide an overview of the physical principles involved in microscope image formation and the various commonly used contrast mechanisms, together with much practically oriented advice. Inevitably it is impossible to cover any of the many aspects in great depth. However, their approach, which is to discuss applications and provide practical advice for many aspects of modern-day microscopy, will prove attractive to many.

The articles begin by setting the scene with a discussion of the factors that determine the resolving power of the conventional microscope. This permits the introduction of important concepts such as numerical aperture, the Airy disc, the point spread function, the difference between depth of focus and depth of field, and the concept of parfocality. Common contrast mechanisms such as darkfield and phase contrast are then introduced, followed by differential interference contrast and polarisation contrast imaging. Throughout the discussion, the importance of digital image processing is emphasised, and simple examples such as histogram equalisation and the use of various filters are discussed.

The contrast mechanisms above take advantage of the fact that both the amplitude and the phase of the light are altered as it passes through the specimen. Spatial variations in the amplitude (attenuation) and/or the phase are used to provide the image contrast. Another extremely important source of image contrast is fluorescence, whether it arises naturally or as a result of specific fluorescent labels having been deliberately introduced into the specimen. The elements of fluorescence microscopy and techniques of spectral unmixing are discussed, and brief mention is made of more advanced techniques where spatial variations in, for example, fluorescence lifetime are used to provide image contrast. Finally, techniques to provide three-dimensional views of an object, such as those afforded by stereo microscopes, are discussed together with a very brief mention of the confocal microscope. The authors have attempted the very difficult task of trying to cover a huge number of techniques in a relatively short space. I hope you enjoy reading these articles.

Tony Wilson, Ph.D.
Professor of Engineering Science
University of Oxford
United Kingdom

Unique presentation of technical aspects in connection with image processing

As a regular reader of Imaging & Microscopy, the title of this issue, Basics of Light Microscopy & Imaging, must certainly seem familiar to you. From March 2003 to November 2005, we had a series of eleven articles with the same name and content focus.

In collaboration with the authors, Dr Manfred Kässens, Dr Rainer Wegerhoff, and Dr Olaf Weidlich of Olympus, a lasting basic principles series was created that describes the different terms and techniques of modern light microscopy and the associated image processing and analysis. The presentation of technical aspects and applications in connection with image processing and analysis was unique at that time and became the special motivation and objective of the authors.

For us, the positive feedback from readers on the individual contributions confirmed the high quality of the series time and again. This was also the inducement to integrate all eleven articles into a comprehensive compendium in a special issue. For this purpose, the texts and figures have once more been amended or supplemented, and the layout redesigned. The work now before you is a guide that addresses both the novice and the experienced user of light microscopic imaging. It serves as a reference work as well as introductory reading material.

A total of 20,000 copies of Basics of Light Microscopy & Imaging were printed, more than double the normal print run of Imaging & Microscopy. With this, we would like to underline that we are just as interested in communicating fundamental information as in the publication of new methods, technologies and applications.

    Enjoy reading this special issue.

Dr. Martin Friedrich
Head Imaging & Microscopy
GIT Verlag, A Wiley Company
Germany



References
[1] Amos B. (2000): Lessons from the history of light microscopy. Nat Cell Biol 2(8), E151-2.
[2] Wilson T., Sheppard C. (1984): Theory and Practice of Scanning Optical Microscopy. Academic Press, London.
[3] Diaspro A. (2001): Confocal and Two-Photon Microscopy: Foundations, Applications, and Advances. Wiley-Liss, New York.
[4] Pawley J.B. (2006): Handbook of Biological Confocal Microscopy, 3rd edition. Plenum-Springer, New York.
[5] Hawkes P.W., Spence J.C.H. (2006): Science of Microscopy. Springer, New York.
[6] Masters B.R. (2006): Confocal Microscopy and Multiphoton Excitation Microscopy: The Genesis of Live Cell Imaging. SPIE Press Monograph Vol. PM161, USA.
[7] Hell S.W. (2003): Toward fluorescence nanoscopy. Nat Biotechnol 21, 1347.
[8] Jares-Erijman E.A., Jovin T.M. (2003): Nat Biotechnol 21(11), 1387.
[9] Tsien R.Y. (2006): Breeding molecules to spy on cells. Harvey Lect 99 (2003-2004), 77.
[10] Tsien R.Y. (1998): The green fluorescent protein. Annu Rev Biochem 67, 509.
[11] Pozzan T. (1997): Protein-protein interactions. Calcium turns turquoise into gold. Nature 388, 834.
[12] Diaspro A. (2006): Shine on ... proteins. Microsc Res Tech 69(3), 149.
[13] Periasamy A. (2000): Methods in Cellular Imaging. Oxford University Press, New York.
[14] Becker W. (2005): Advanced Time-Correlated Single Photon Counting Techniques. Springer, Berlin.
[15] Periasamy A., Day R.N. (2006): Molecular Imaging: FRET Microscopy and Spectroscopy. An American Physiological Society Book, USA.

You Can Get Keywords for Expanding Your Knowledge

There is great merit in publishing Basics of LIGHT MICROSCOPY and IMAGING: From Colour to Resolution, from Fluorescence to 3D Imaging. It is a great merit since it is very important to define, to clarify, and to introduce pivotal elements for the comprehension and subsequent utilisation of optical concepts useful for microscopical techniques and imaging. The niche still occupied by optical microscopy, within a scenario where resolution obsession plays an important role, is mainly due to the unique ability of light-matter interactions to allow temporal three-dimensional studies of specimens. This is of key relevance since the understanding of the complicated and delicate relationship existing between structure and function can be better realised in a 4D (x-y-z-t) situation. In recent years, several improvements have pushed optical microscopy forward, from confocal schemes [1-6] to multiphoton architectures, from 7- to 9-fold enhancements in resolution to single molecule tracking and imaging, allowing the term optical nanoscopy to be coined [7]. Advances in biological labelling, such as those promoted by the utilisation of visible fluorescent proteins [8-12] and of powerful strategies like the F techniques (FRAP: fluorescence recovery after photobleaching; FRET: fluorescence resonance energy transfer; FCS: fluorescence correlation spectroscopy; FLIM: fluorescence lifetime imaging microscopy) [4, 13-15], place optical microscopy in a predominant position over other microscopical techniques. So it becomes mandatory for beginners, and more generally for all those researchers looking for answers to biological problems that can be addressed using the optical microscope, to have a good starting point for finding the optimal microscopical technique and for understanding what can and cannot be done. This long note on Basics can be a good starting point. It brings the reader through different concepts and techniques. The reader can get the keywords for expanding her/his knowledge. Some important concepts, like the concept of resolution and the related sampling problems, spectral unmixing and photon counting, are introduced for further reading. This article reports some interesting examples and provides a good link between the different mechanisms of contrast, from DIC to phase contrast to fluorescence methods. The treatment is not rigorous, but it keeps the audience interested and is sufficiently clear. I read it with interest, even if I would prefer more modern bibliographic references to amplify the achievable knowledge.

Alberto Diaspro, Ph.D.
Professor of Applied Physics
University of Genoa
Italy

    Imprint

Published by
GIT VERLAG GmbH & Co. KG

Sector Management
Anna [email protected]

Journal Management
Dr. Martin [email protected]

Authors
Dr. Rainer [email protected]
Dr. Olaf [email protected]
Dr. Manfred [email protected]

Editorial Assistance
Lana [email protected]

Production Manager
Dietmar [email protected]

Layout
Ruth [email protected]

Litho and Cover Design
Elke [email protected]

GIT VERLAG GmbH & Co. KG
Rösslerstrasse 90
64293 Darmstadt
Germany
Tel.: +49 6151 8090 0
Fax: +49 6151 8090 133
www.gitverlag.com

In Cooperation with
Olympus Life and Material Science Europa GmbH
Wendenstrasse 14-18
D-20097 Hamburg, Germany
Tel.: +49 40 23772-0
Fax: +49 40 23 77 36 47
www.olympus-europa.com

Printed by
Frotscher Druck
Riedstrasse 8, 64295 Darmstadt, Germany

Circulation
20,000 copies

The publishing house is granted the exclusive right, with regard to space, time and content, to use the works/editorial contributions in unchanged or edited form for any and all purposes any number of times itself, or to transfer the rights for their use to other organizations in which it holds partnership interests, as well as to third parties. This right of use relates to print as well as electronic media, including the Internet, as well as databases/data carriers of any kind.

All names, designations or signs in this issue, whether referred to and/or shown, could be trade names of the respective owner.



    Trust the Colours



    Light

How do we describe light that reaches our eyes? We would probably use two key words: brightness and colour. Let us compare these expressions with the way light is characterised in the natural sciences: it is looked upon as an electromagnetic wave having a specific amplitude and a distinct wavelength. Both the everyday and the scientific perspective essentially mean the same thing. The wave's amplitude gives the brightness of the light, whereas the wavelength determines its colour. Figure 1 shows how colour and wavelength are related.
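As a rough sketch of this relation, a wavelength can be mapped to a named colour band. The band boundaries below are illustrative round numbers, not a standard; perceived colour also varies smoothly across each band.

```python
# Approximate mapping from wavelength (nm) to a perceived colour band.
# Band limits are illustrative round numbers, not a colorimetric standard.
def colour_band(wavelength_nm: float) -> str:
    bands = [
        (380, 450, "violet"),
        (450, 495, "blue"),
        (495, 570, "green"),
        (570, 590, "yellow"),
        (590, 620, "orange"),
        (620, 750, "red"),
    ]
    for lo, hi, name in bands:
        if lo <= wavelength_nm < hi:
            return name
    return "outside the visible range"

print(colour_band(532))  # a 532 nm laser pointer -> green
print(colour_band(650))  # a 650 nm laser diode -> red
```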

    Colour

Colour always arises from light; there is no other source. The human eye can perceive light in the colours of a wavelength range between 400 and 700 nm. But instead of having just one distinct colour, light is usually composed of many fractions with different colours. This is what the spectrum of a light source describes. For example, look at the spectrum of our life source, the sun, which emits light of all different colours (fig. 2).

    White light

All the colours of a light source superimpose, so the colour of a light source depends on the predominant wavelengths in its palette. A candle, for example, may look yellowish because its light mainly comprises wavelengths in the 560-600 nm range. Light from sources emitting across the whole spectral range in comparable amounts appears as white light.

    Green grass

What makes us see objects which are not light sources themselves? There are different processes which happen when light meets physical matter. They are called reflection, refraction, diffraction and absorption. They all happen together, but usually one process dominates. Take, for example, grass or other dense physical matter: what we see when observing grass is mainly reflected light. But why does grass appear to be green? The reason is that grass reflects only portions of the white daylight. At the same time it absorbs the red and blue portions of the daylight. Thus the green light remains.

    Colour models

Much that we know and observe related to colour can be described mathematically by different colour models, i.e. in different colour spaces. These are what the acronyms HSI, RGB, and CMYK stand for. Each of these models gives a different perspective and is applied in fields where it can be used more easily than the other models. Nevertheless, each model describes the same physical reality.

Fig. 1: Visible light spectrum.

Fig. 2: Spectrum of the sun, and spectra of common sources of visible light.

HSI

Let us stick to the grass and assume it is fresh spring grass. How would we describe it in everyday language? We would characterise it as green, and not as blue or orange. This is the basic hue of a colour. We would probably characterise it additionally as full or deep, and not as pale. This is the saturation of a colour. Then one would describe it as bright, and not as dark. This is the brightness or intensity of a colour. Hue, Saturation, and Intensity form the HSI colour model. The Munsell Colour Tree (fig. 3) gives a three-dimensional geometrical representation of this model. Here, the hue value is represented by an angle, the saturation by a radius from the central axis, and intensity or brightness by a vertical position on the cylinder.
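Python's standard colorsys module implements HSV, a close relative of the HSI model just described, and can be used to pull hue, saturation and brightness out of an RGB triple. The grass-green RGB values below are made up for illustration:

```python
import colorsys

# Fresh spring grass as an RGB triple, channels normalised to [0, 1].
# The exact values are invented for illustration.
r, g, b = 0.2, 0.7, 0.1

# colorsys uses HSV: hue as an angle (scaled to [0, 1]),
# saturation, and value (brightness).
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(f"hue = {h * 360:.0f} deg, saturation = {s:.2f}, value = {v:.2f}")
# -> hue = 110 deg, saturation = 0.86, value = 0.70
```

A hue angle of about 110 degrees sits in the green sector, matching the everyday description of the grass as "green, deep and bright".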

    RGB

The HSI model is suitable for describing and discerning a colour. But there is another colour model which better mirrors how our human perception mechanism works. The human eye uses three types of cone


cell photoreceptors which are sensitive to light in the red V(R), green V(G), and blue V(B) spectral ranges respectively. These colours are known as primary colours. The key point is that all existing colours can be produced by adding various combinations of these primary colours. For example, the human eye perceives an equal amount of all three colours as white light. The addition of equal amounts of red and blue light yields magenta light, blue and green yields cyan, and green and red yields yellow (fig. 4). All the other colours are generated by stimulation of all three types of cone cells to a varying degree. The practical applications of the RGB model are manifold: for example, besides human perception, digital cameras, monitors, and image file formats also function in a way which can be described by the addition of the three primary colours.
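The additive mixing rules above can be sketched in a few lines; the `add_colours` helper below is illustrative, not from any imaging library, and uses the 0-255 channel range common to digital cameras and file formats:

```python
# Additive mixing of RGB primaries, each channel in the 0-255 range.
def add_colours(*colours):
    # Channel-wise sum, clipped to the displayable maximum of 255.
    return tuple(min(sum(c[i] for c in colours), 255) for i in range(3))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(add_colours(RED, BLUE))         # magenta: (255, 0, 255)
print(add_colours(BLUE, GREEN))       # cyan:    (0, 255, 255)
print(add_colours(GREEN, RED))        # yellow:  (255, 255, 0)
print(add_colours(RED, GREEN, BLUE))  # white:   (255, 255, 255)
```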

CMY

The overlap of the three primary additive colours red, green, and blue creates the colours cyan (C), magenta (M), and yellow (Y). These are called complementary or primary subtractive colours, because they are formed by subtracting red, green, and blue from white. In this way, yellow light is observed when blue light is removed from white light. All the other colours can be produced by subtracting various amounts of cyan, magenta, and yellow from white. Subtraction of all three in equal amounts generates black, i.e. the absence of light. White cannot be generated by the complementary colours (fig. 4). The CMY model and its more practical extension, the CMYK model, find their applications in the technology of optical components such as filters, as well as in printers, for example.
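Because each subtractive primary is what remains when one additive primary is removed from white, converting between the two descriptions is a simple per-channel complement. The `rgb_to_cmy` helper below is an illustrative sketch:

```python
# RGB (0-255 per channel) to CMY (0-255 per channel): each subtractive
# channel is the complement of the corresponding additive channel.
def rgb_to_cmy(r, g, b):
    return (255 - r, 255 - g, 255 - b)

# Yellow is white minus blue: no cyan, no magenta, full yellow.
print(rgb_to_cmy(255, 255, 0))  # -> (0, 0, 255)

# Black (absence of light) needs full amounts of all three subtractive inks.
print(rgb_to_cmy(0, 0, 0))      # -> (255, 255, 255)
```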

    Colour shift

Let us now examine a colour phenomenon which astonishes us in daily life: the colour shift. This also gives us the chance to take a first step into light microscopy and look closer at its typical light sources: the halogen bulb, the xenon arc lamp and the mercury arc lamp.

It is a common experience to buy a pair of black trousers that turn out to be dark blue when you look at them back home. This inconvenient phenomenon of colour shift is not restricted to blue trousers. The so-called white light generated by a halogen bulb at a microscope differs a lot from the light of a xenon burner. At first glance it is the intensity that is obviously different, but even if you reduce the light intensity of a xenon burner, the halogen light will give you a more yellowish impression when projected onto a white surface. Furthermore, dimming the halogen bulb light can make the colour even more red-like. This can easily be observed at the microscope if you focus on a white specimen area and increase the light power slowly. The image observed will change from a yellow-red to a more bluish and very bright image. This means that with increasing power, the intensity or availability of different wavelengths (colours) has changed. An additional aspect to consider here is the subsequent light perception and interpretation. They are

Fig. 3: Munsell Colour Tree.

Fig. 4: Primary colours.


Fig. 5a-c: Emission spectra of three light sources typically used at a microscope: (a) the tungsten halogen bulb (TF = colour temperature at different power settings), (b) the mercury burner, (c) the xenon burner.

done by eye and brain and result in the colour that is eventually recognised.

But let us come back to light generation. The overall spectrum of a light source at a defined power setting is described by the term colour temperature. Colour temperature describes the spectrum of a light source by comparison with an ideal piece of black metal being heated up. If a temperature of about 3500 K is reached, this metal will have a yellowish colour. This colour changes into a bluish white when it reaches 6000 K.

For a 12 V / 100 W tungsten halogen lamp at 9 V the colour temperature is approximately 3200 K (fig. 5a), whereas for a 75 W xenon burner it is 6000 K. So the colour temperature gives us a good hint about the overall colour shift, yet it gives no idea of the individual intensities at defined wavelengths. This knowledge is of high importance if we have to use a light source, for example, for fluorescence microscopy. In this case the light source has to produce sufficient intensity of light in a range of wavelengths that matches the excitation range of the fluorochrome under observation.
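The black-body analogy behind colour temperature can be made concrete with Planck's law. The sketch below (physical constants rounded; 450 nm and 650 nm chosen arbitrarily as representative "blue" and "red" wavelengths) compares the red/blue balance of ideal 3200 K and 6000 K radiators. Real lamps, especially arc burners, are not true black bodies, so this is only an approximation of the trend:

```python
import math

# Planck's law for black-body spectral radiance, used here only to compare
# the relative red/blue balance of a 3200 K and a 6000 K radiator.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def radiance(wavelength_m, temperature_k):
    a = 2 * H * C**2 / wavelength_m**5
    return a / (math.exp(H * C / (wavelength_m * KB * temperature_k)) - 1)

for t in (3200, 6000):
    ratio = radiance(650e-9, t) / radiance(450e-9, t)  # red vs blue
    print(f"{t} K: red/blue radiance ratio = {ratio:.2f}")
```

The 3200 K halogen-like radiator emits several times more red than blue, while at 6000 K the two are roughly balanced, which matches the yellowish versus bluish-white impressions described above.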

    Microscope Light Sources

Tungsten halogen bulbs

Nearly all light microscopes are equipped with a halogen lamp (10 W-100 W), either for general use or as an addition to another light source. A wide range of optical contrast methods can be driven with this type of light source, covering all wavelengths within the visible range, but with an increase in intensity from blue to red. Additionally, the spectral curve alters with the power used (fig. 5a). To achieve similar-looking colours in the prevalent brightfield microscopy, the power setting should be kept at one level, e.g. 9 V (TF = 3200 K; colour temperature at 9 V). This light intensity level is often marked at the microscope frame by a photo pictogram.

But here a problem arises: if the light intensity is to be kept fixed in the light microscope, the light might be too bright for observation. In daily life, if sunlight is too bright, sunglasses help, but they

Box 1: Centring of a mercury burner

Depending on the type (inverted or upright) and manufacturer of the microscope there will be some individual differences, but the strategy remains the same. Please also see the instruction manual of the microscope.

1. Start the power supply for the burner, use a UV-protection shield and ensure that a mirror cube is in the light path.

2. Locate a white paper card on the stage of the microscope and open the shutter. If the light is too bright, insert optionally available neutral density filters (e.g. with an ND25, only 25 % of the light intensity will pass through).

3. Bring the focus to the lowest position.

4. Get a free objective position or, on upright microscopes, a 20x objective in the light path.

5. If available, open the aperture stop and close the field stop.

6. Optimise brightness with the burner centring knobs (mostly located at the side of the lamp house).

7. From the lamp house, an image of the arc of the burner itself and the mirrored image are projected on the card. To see them in focus, use the collector focussing screw (Figure A).

8. If only one spot can be located, or the second is not the same size, the mirror (Figure B) has to be re-centred as well. On most lamp houses there are screws at the back for screwdrivers for this option. Rotate them until the images are the same size, as shown in (Figure C).

9. Locate both images parallel to each other and overlay them by using the burner centring screws (Figure D).

10. Defocus the images with the collector focusing screw and open the field stop.

11. Remove the white paper card and bring a homogeneous fluorescent specimen into focus (e.g. try some curry on the cover slip).

12. Fine adjustment of the homogeneous illumination can only be performed under observation: if necessary, readjust the collector focusing screw so that the total field of view is equally illuminated.

13. If digital acquisition is of prime interest, the fine adjustment can also be performed at the monitor, to optimise the illumination for the size of the CCD.


might not only reduce the intensity but also let us see the world in other colours. In light microscopy there are light filters that only reduce the intensity. These filters are called neutral density (ND) filters or neutral grey filters. They are characterised by the amount of light they transmit. Therefore, an ND50 will allow half the light intensity to pass through, whereas an ND25 will reduce the intensity to 25 %, without changing the colours. Filters that change the spectrum are colour filters. There is a huge variety of colour filters available, but here we will only discuss the so-called light balancing daylight filter (LBD).

This filter is used together with the halogen light source to compensate for the over-representation of the long (red) wavelengths. This enables us to see the colours of a stained pathology section, for example, on a neutral white background in brightfield microscopy (fig. 11).
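Since an ND filter's name gives its transmittance in percent, stacking filters simply multiplies their transmittances. A small sketch (the `transmitted` helper is illustrative):

```python
# ND filters are named for the percentage of light they transmit;
# stacking filters multiplies their transmittances.
def transmitted(intensity, *nd_filters):
    # nd_filters are transmittance percentages, e.g. 50 for an ND50.
    for nd in nd_filters:
        intensity *= nd / 100
    return intensity

print(transmitted(100, 50))      # ND50 alone: 50.0 % passes
print(transmitted(100, 25))      # ND25 alone: 25.0 % passes
print(transmitted(100, 50, 25))  # ND50 + ND25 stacked: 12.5 % passes
```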

Mercury arc lamp

The mercury burner is characterised by peaks of intensity at 313, 334, 365, 406, 435, 546 and 578 nm, and lower intensities at other wavelengths (see fig. 5b). This feature makes the mercury burner the most used light source for fluorescence applications. Whenever the peak emissions of the burner match the excitation needs of the fluorochromes, a good signal (depending on the specimen) can be achieved. However, these benefits are offset by the relatively short lifetime of the burner, about 300 hours, and a small change of the emission spectrum over its lifetime due to deposits of cathode material on the inner glass surface of the burner.

Xenon arc lamp (XBO)

Xenon burners are the first-choice light sources when very bright light is needed for reflected microscopy, such as differential interference contrast on dark objects, or for quantitative analysis of fluorescence signals, as for example in ion ratio measurement. They show an even intensity across the visible spectrum, brighter than the halogen bulb, but they do not reach the intensity peaks of the mercury burners. The xenon burner emission spectrum allows the direct analysis of intensities at different fluorescence excitation or emission wavelengths. The lifetime is about 500-3000 hours, depending on use (frequent on/off switching reduces the lifetime) and the type of burner (75 W or 150 W). For optimisation of illumination, especially with the mercury and xenon burners, centring and alignment are very important.

    Coming to the point

To ensure that the light source is able to emit the required spectrum is one part of the colour story; to ensure that the objective lenses used can handle this effectively is another. When white light passes through the lens systems of an objective, refraction occurs. Due to the physics of light, blue (shorter) wavelengths are refracted to a greater extent than green or red (longer) wavelengths. This helps to create the blue sky, but is not the best for good colour reproduction at a microscope. If objectives did not compensate for this aberration then, as outlined in fig. 6, there would be focus points

Table 1: Comparison of different light sources and their colour temperatures

Light source: Colour temperature
Vacuum lamp (220 W / 220 V): 2790 K
Nitraphot (tungsten filament) lamp B (500 W / 220 V): 3000 K
Photo and cinema lamps as well as colour control lamp (Fischer): 3200 K
Photo and cinema lamps (e.g. nitraphot (tungsten filament) lamp S): 3400 K
Yellow flash bulb: 3400 K
Clear flash bulb: 3800 K
Moonlight: 4120 K
Beck arc lamp: 5000 K
White arc lamp as well as blue flash bulb: 5500 K
Electron flash unit: 5500-6500 K
Sun only (morning and afternoon): 5200-5400 K
Sun only (noontime): 5600 K
Sun and a cloudless sky: 6000 K
Overcast sky: 6700 K
Fog, very hazy: 7500-8500 K
Blue northern sky at a 45° vertical angle: 11000 K
International standard for average sunlight: 5500 K


for all colours along the optical axis. This would create colour fringes surrounding the image.

To reduce this effect, achromatic lens combinations have been used since the 18th century. These types of lenses combine the blue and red wavelengths into one focus point. Further colour correction in the visible range can be achieved by adding different sorts of glass systems together, reaching the so-called fluorite objectives (the name originally comes from fluorspar, introduced into the glass formulation). They are also known as Neofluar, Fluotar, or semi-apochromat objectives. To correct the complete range from near infrared (IR) to ultraviolet (UV), the finest class of objectives, the apochromats, has been designed. Besides colour correction, the transmission power of an objective can also be of prime interest. This is especially the case if high power near UV is needed, e.g. at 340 nm (excitation of the calcium-sensitive dye Fura-2), or IR is needed for 900 nm IR-DIC.

Fig. 6: Schematic of axial chromatic aberration.

The look of colour

Let us now go one step further and see how the light with its colours, after having passed through the microscope, is detected in modern camera systems. Here, digital cameras have become standard for the acquisition of colour or monochrome images in the microscopy field. But how do these cameras work, and how can it be guaranteed that the colours in the images they provide are accurate? This is of critical importance because it is a primary prerequisite for all activities such as image documentation, archiving or analysis.

Detecting colour

Light detection functions in the same way for both colour and monochrome cameras. Fig. 7 illustrates how CCD (Charge Coupled Device) digital cameras function. In colour cameras, mosaic filters are mounted over the CCD elements (fig. 8). These filters ensure that only green, red or blue light is permitted to come into contact with the light-sensitive part of the single sensors or pixels (picture elements). The proportion of colours is generally two green pixels to one red and one blue pixel. The colour image is actually generated by the computer via complex algorithms. This involves assessing and processing the respective signals of the green, red and blue pixels accordingly. For example, a single pixel with a bit depth of 8 becomes a 24-bit colour pixel (fig. 9).

Fig. 7: The principle behind how CCD cameras function.

Fig. 8: Bayer colour filter mosaic array and underlying photodiodes.

Fig. 9: Bit depth and grey levels in digital images.

Depending on the imaging method, the reflected or transmitted light of the object is focused onto a light-sensitive CCD sensor. So what is a CCD element? It is a semiconductor architecture designed to read out electric charges from defined storage areas (figs. 7, 10). Due to the special readout method, the light-sensitive area of a pixel is restricted to just 20 % of the actual pixel surface. This is why special

lenses above the sensor focus all incoming light onto the light-sensitive area of the pixel surface. The light generates electron-hole pairs via the photoelectric effect. These electrical charges are collected, combined into charge packets and subsequently transported through the entire CCD sensor. The charge is converted into a voltage first, because processing a voltage is significantly easier than processing a current. This analogue output signal is then amplified, first on the chip itself and then again outside the chip. An analogue/digital converter converts the voltage (the signal) into a binary format.

There are various bit depths, with the standard being 8 bit. This means that 256 combinations (intensity values) are available for each pixel (figs. 9, 10). Currently, many new cameras appear on the market with bit depths of 12 bit (4096 intensity levels). There are even cameras offering bit depths of up to 16 bit (65,536 intensity levels). The number of pixels a camera has depends on the CCD chip used in it and the technology used to create the image. At the moment, cameras are on offer with resolutions of up to 12.5 million pixels.
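The relation between bit depth and the number of intensity levels is simply a power of two, and a colour pixel assembled from three 8-bit channels has a combined depth of 24 bit. A quick check:

```python
# Intensity levels at common camera bit depths: 2 ** bit_depth.
for bits in (8, 12, 16):
    print(f"{bits:2d} bit: {2 ** bits:6d} grey levels per channel")

# A colour pixel built from three 8-bit channels (R, G, B) has a
# combined depth of 24 bit, i.e. 256 ** 3 distinct colours.
print(f"24-bit colour: {(2 ** 8) ** 3:,} distinct colours")
```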

    Colour temperature and white balance

Let us now continue with the colour temperature of the microscope light sources and review its impact on the colours in digital imaging. As we know, light is essential for both microscopes and cameras. There are many types of light, as was explained above for microscope light sources. The sun may yield different colours depending on the time of day, and in the microscope there are variations depending on the source used as well as on the acquisition environment. The reason for this is what is referred to as colour temperature. As we have seen, this term describes a specific physical effect: the spectral compositions of differing light sources are not identical and, furthermore, are determined by temperature (see Table 1). This means that colour temperature is of tremendous significance with regard to digital image acquisition and display, as it influences both colour perception and colour display to a critical degree. A person's eyes automatically correct against this effect subconsciously by adapting to changing lighting conditions. This means that he/she will see those things that are known to be white as white.

Digital or video cameras are unfortunately not intelligent enough to register changing light conditions on their own and to correct for the resulting colour shifts. This is why these kinds of cameras often yield images whose colours are tinged. Correcting colour tinges in true-colour images is known as white balance (see fig. 11). Although many cameras adapt image colours at acquisition, this kind of colour displacement can also be corrected retroactively. Doing so requires an image with an area where the user knows there should be no colour. In fact, this particular image area should be black, white or grey.

    Automated white balance adjustments

Most imaging software programmes offer embedded algorithms to correct for colour tinges. To do this, the user has to define a section interactively within the image area where it is certain that the pixels should be white, black or grey (but at present are tinged). First of all, three correction factors will be calculated based on the pixels within the section, one for each of the three colour components Red (R), Green (G) and Blue (B). These correction factors are defined such that the pixels within the section will be grey on average, i.e. the section will have no colour at all. The whole image is then corrected automatically using these correction factors (fig. 11c, d).

The following is a more detailed description of what takes place. The average intensity I_n for each pixel n within the section is calculated: I_n = (R + G + B)/3. The colour factor F for each colour component of each pixel within the section is then determined from this, e.g. for the red colour factor: F_n(R) = I_n/R.

Take a look at this example. A pixel has the following colour components: (R, G, B) = (100, 245, 255), thus an average intensity of I_n = 200 and a red colour factor of F_n(R) = 200/100 = 2.0. The three colour factors are then averaged over all pixels within the section, so that one correction factor is determined for each colour component. Finally, the colour components of all the image's pixels are multiplied by the corresponding correction factor(s).
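The averaging steps above can be sketched in a few lines of NumPy. This is a minimal grey-world implementation; the array layout and names are our assumptions, and a real implementation would also guard against zero-valued channels:

```python
import numpy as np

def white_balance(image, roi):
    """Grey-world white balance as described in the text.

    image: H x W x 3 float array with (R, G, B) channels
    roi:   H x W boolean mask marking the section that should be neutral
    """
    region = image[roi]                          # pixels inside the neutral section
    intensity = region.mean(axis=1)              # I_n = (R + G + B) / 3 per pixel
    # Per-pixel colour factors F_n = I_n / channel, averaged over the section
    factors = (intensity[:, None] / region).mean(axis=0)
    return image * factors                       # rescale every pixel of the image

# The pixel from the text: (100, 245, 255) averages to 200 and is mapped to grey.
img = np.tile(np.array([100.0, 245.0, 255.0]), (2, 2, 1))
balanced = white_balance(img, np.ones((2, 2), dtype=bool))
print(np.round(balanced[0, 0], 2))   # [200. 200. 200.]
```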

    Why do colours not match easily?

It would be ideal to be able to pick any digital camera, monitor or printer at random and have the resulting colours on screen and in the printed output be acceptably close to the original. But unfortunately, getting colours to match is hard to achieve.

Fig. 10: Creation of a digital image.

Fig. 11: White balance adjustment on a histological specimen, using microscope and software options.
a: Image with wrong microscope settings (low power 3 V, no LBD filter); note that areas that are not stained (background) show a yellowish colour.
b: Image using optimised microscope settings (9 V + LBD filter + neutral density filter ND6); note that the background becomes white.
c: Image after digital white balancing of image (a); note that with the help of software white balancing on the yellowish image (a), the colours can be recalculated to some extent into a well balanced image (compare with b or d).
d: Image after digital white balancing of image (b); note that an already well balanced image (b) can still be enhanced in quality.
Areas used to define neutral background regions of interest (ROI) are marked with (x).

For one thing, colour is subjective: colour is not an intrinsic feature of an object, but an interpretation by the visual system and the brain. Another critical aspect is that lighting affects colour, so a printout will vary depending on the lighting; it will look different under incandescent light, fluorescent light and in daylight. Furthermore, colours affect other colours, meaning your perception of a colour will change depending on the colours surrounding it. In addition, monitors can display colours that printers cannot print, printers can print colours that monitors cannot display, and cameras can record colours that neither monitors nor printers can produce. A colour model is simply a way of representing colours mathematically. When different devices

use different colour models, they have to translate colours from one model to another. This often results in error. Given all the limitations inherent in trying to match colours, it is important to understand the difference between correctable colour errors and non-correctable errors. Correctable errors are ones where you can do something about them via software, such as with a colour management system or white balance. Non-correctable errors are the ones that you cannot do anything about, because the information you need to correct them simply does not exist. Better lenses, better optical coatings and improved CCD arrays can all minimise the non-correctable errors. Correctable errors, however, require colour management through software.

If every program used the same approach to colour management, and every printer, monitor, scanner and camera were designed to work with that colour management scheme, colour management would obviously be a snap. If the hardware devices all used the same standard colour model, for example, or came with profiles that would translate colour information to and from a standard as needed, you would be able to move colour information around from program to program, or from scanned image to printer, without ever introducing errors.

    Conclusion

The light that reaches our eye consists of many different colours, and different light sources produce a different mix of these. Different objects in the world absorb and reflect different wavelengths; that is what gives them their colour.

There are many colour models to map this multi-dimensional world, each describing the same physical reality but having different applications. Our visual systems give us only a partial view of it. For effective imaging, the microscopist must be aware of the complexity of the world of light. There is a variety of light sources, objectives and filters to enable the assembly of a system with the appropriate properties for a particular application. The photographer in turn works with an imaging system which takes partial information and processes it to achieve colour balance. Printers and monitors in turn repeat the process, moving one step further away from the original data.

Several hundred years in the development of optics have managed to hide some of the complexity from the casual user. But microscopy is a good deal more complicated than it might seem. An awareness of what is really going on in our eyes, in our brains, in the optics, on the CCD and on the screen enables us to appreciate just what a wonderful achievement that final image is, and how much care is required to ensure the best results.



    The Resolving Power



Human vision

For most of us, seeing something is normal and we are accustomed to evaluating things by looking at them. But the act of seeing uses many different and complex structures that enable us not only to catch images but to process and interpret them at the same time. Every image that is projected onto the retina of our eye is transformed into neuronal impulses, creating and influencing behaviour: this is what we call seeing. Therefore, what we see is not necessarily what another person sees.

Compared to these processes within our brain, the first steps of vision seem to be much simpler. To create a projection onto the layer of the retina, where millions of light-sensitive sensory cells are located, the light passes through an optical system that consists of the cornea, aqueous humour, iris, pupil, focusing lens and vitreous humour, see fig. 12. All these elements together create what the raster framework of sensory cells can translate, within its capability, into neuronal activity.

Fig. 12: Anatomy of the human eye and cross section of the retina.

Eagles and mice

These two different set-ups, the optical system and the sensor system, restrict the options of human vision at the very first step. Details can be found at www.mic-d.com/curriculum/lightandcolor/humanvision.html. Having an eagle-eyed optical system alone will not enable us to see mice from far away. Resolving a small object (the mouse) against a large background needs special physical properties of the optics and also requires a certain sensitivity and number of sensors to translate the information. If this text is getting too small, you have to come closer to resolve it. This means your eyes can gather light from the small letters at a wider angle than before. You are now focusing on a smaller area, spread over the total number of sensory cells. More individual sensors can catch differences and translate them into a recognised image.

Unfortunately, we can move closer only up to a certain distance, after which our flexible optical system is no longer able to project the image clearly onto the retina. If the words are typed too small, like these ["You cannot resolve it without optical help", printed in tiny type], our resolution limit is reached.

The wavelength that transports the information of the image also influences what we can resolve: shorter blue wavelengths carry finer details than longer ones from the same object. Using a microscope, it would be easy to read the text and to document it via the sensors of a camera. But these tools are also restricted by the resolution of their optical system and by the number and sensitivity of their sensors, the pixels.

What is resolution?

Resolution or resolving power of a microscope can be defined as the smallest distance apart at which two points on a specimen can still be seen separately. In the transmitted light microscope, the xy resolution R is determined essentially by three parameters: the wavelength λ of the illuminating light, the numerical aperture of the objective (NA_obj) and that of the condenser (NA_cond):

R = 1.22 λ / (NA_obj + NA_cond)    (1)

When the aperture of the condenser is matched to that of the objective, i.e. the condenser aperture is essentially the same as the objective aperture, equation (1) simplifies to:

R = 0.61 λ / NA_obj    (2)

This equation is used for both transmitted light microscopy and for reflected



light microscopy. Note here that resolution is NOT directly dependent on the magnification. Furthermore, the final magnification should not be higher than 1000x the NA of the objective, because then the image will only be enlarged and no further resolution will be visible. This is called empty magnification.
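Equations (1) and (2) are easy to evaluate in a few lines; the function name is ours, and 550 nm is the mid-wavelength used in the worked examples of this section:

```python
def rayleigh_resolution(wavelength_um, na_obj, na_cond=None):
    """Equations (1) and (2): smallest resolvable distance between two points."""
    if na_cond is None:                 # condenser aperture matched to the objective
        return 0.61 * wavelength_um / na_obj
    return 1.22 * wavelength_um / (na_obj + na_cond)

# 550 nm mid-wavelength with a 40x Plan Apochromat, NA 0.9:
print(round(rayleigh_resolution(0.550, 0.9), 2))   # 0.37 (µm)
```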

What is the numerical aperture of an objective?

The numerical aperture of a microscope objective is a measure of its ability to gather light and thus resolve fine specimen detail at a fixed objective distance. The numerical aperture is calculated with the following formula:

NA = n sin(α)    (3)

Here n is the refractive index of the medium between the front lens of the objective and the specimen cover glass; it is n = 1.00 for air and n = 1.51 for oil. α is one half of the angular aperture, as can be seen in fig. 13. The bigger α, the higher the numerical aperture. Working in air, the theoretical maximum value of the numerical aperture is NA = 1 (α = 90°). The practical limit is NA = 0.95.
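Equation (3) in code, using the refractive indices quoted above; note that the 67.5° half angle is only an illustrative value for an oil immersion lens, not a quoted specification:

```python
import math

def numerical_aperture(n, half_angle_deg):
    """Equation (3): NA = n * sin(alpha)."""
    return n * math.sin(math.radians(half_angle_deg))

print(round(numerical_aperture(1.00, 90.0), 2))   # 1.0, the theoretical limit in air
print(round(numerical_aperture(1.51, 67.5), 2))   # about 1.4 for an oil immersion lens
```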

Numerical aperture in practice

For all microscope objectives, the resolving power is given by the number for the numerical aperture, shown directly after the magnification index (fig. 14); e.g. a UPlanFLN 60x/0.9 objective produces a 60x magnification with a numerical aperture of 0.9. The numerical aperture differs strongly depending on the type of objective or condenser, i.e. the optical aberration correction. Table 2 lists some typical numerical apertures for different objective magnifications and corrections.

Table 2: Numerical apertures (NA) for different types of objectives and magnifications

Magnification   Plan Achromat   Plan Fluorite   Plan Apochromat
4x              0.1             0.13            0.16
10x             0.25            0.3             0.4
20x             0.4             0.5             0.75
40x             0.65            0.75            0.9
60x             0.8 (dry)       0.9 (dry)       1.42 (oil immersion)
100x            1.25 (oil)      1.3 (oil)       1.4 (oil)

Fig. 13: Angular aperture of an objective.

Box 2: How to set Köhler illumination

To align a microscope that can change the distance of the condenser to the specimen plane for Köhler illumination, you should:

1. Focus with a 10x or higher magnifying objective on a specimen, so that you can see at least something of interest in focus. (If your condenser has a front lens, please swing it in when using objectives magnifying more than 10x.)

2. Close the field stop at the light exit.

3. Move the condenser up or down to visualise the closed field stop. Now only a round central part of your field of view is illuminated.

4. If the illumination is not in the centre, the condenser has to be moved in the xy direction to centre it.

5. Finely adjust the height of the condenser so that the edges of the field stop are in focus and the diffraction colour at the edge of the field stop is blue-green.

6. Open the field stop just enough that the total field of view is illuminated (re-centring in the xy direction may be necessary).

7. For best viewing contrast in brightfield, close the aperture stop to about 80 % of the objective numerical aperture.

In practice, you can slowly close the aperture stop of your condenser while looking at a specimen. At the moment when the first change in contrast occurs, the NA of the condenser is getting smaller than the NA of the objective, and this setting can be used. For documentation, the condenser NA should be set according to that of the objective. To visualise the aperture stop directly, you can remove one eyepiece and see the aperture stop working as a normal diaphragm.

To achieve an NA higher than 0.95, objectives have to be used with an immersion medium between the front lens and the specimen. Oil or water is mostly used for this purpose. Because objectives have to be specially designed for this, the type of immersion medium is always marked on the objective, as on the UPlanFLN 60x/1.25 Oil Iris objective. This objective needs oil as the immersion medium and cannot produce good image quality without it.

For special techniques like Total Internal Reflection Fluorescence Microscopy (TIRFM), objectives are produced with very high NA, led by the Olympus APO100x OHR with an NA of 1.65. The resolution that can be achieved, particularly in transmitted light microscopy, depends on the correct light alignment of the microscope. Köhler illumination is recommended to produce evenly distributed transmitted light and to ensure that the microscope reaches its full resolving potential (see box 2).

What resolution can be reached with a light microscope?

To make the subject more applicable, some resolution numbers shall be given here. Using a middle wavelength of 550 nm, the Plan Achromat 4x provides a resolution of about 3.3 µm, whereas the Plan Apochromat reaches about 2.1 µm. The Plan Achromat 40x provides a resolution of 0.51 µm and the Plan Apochromat of 0.37 µm. The real-world limit for the resolution which can be reached with a Plan Apochromat 100x is often not higher than R = 0.24 µm.

To give a comparison with other microscopes that do not work with visible light: the resolution reached nowadays in a scanning electron microscope is about R = 2.3 nm. In a transmission electron microscope, structures down to a size of 0.2 nm can be resolved. Scanning probe microscopes even open the gates to the atomic, sub-Ångström dimensions and allow the detection of single atoms.

What are Airy disks?

Every specimen detail that is illuminated within a microscope creates a so-called diffraction pattern or Airy disk pattern. This is a distribution with a bright central spot (the Airy disk or primary maximum) and so-called secondary maxima separated by dark regions (minima, or rings of the Airy disk pattern; see figs. 15, 16) created by interference (more details: www.mic-d.com/curriculum/lightandcolor/diffraction.html). When two details within the specimen are close together, we can only see them separated if the two central spots are not too close to each other and the Airy disks themselves are not overlapping. This is what the Rayleigh criterion describes: good distinction is still possible when one Airy disk just falls on the first minimum of the other (fig. 15).

The smaller the Airy disks, the higher the resolution in an image. Objectives which have a higher numerical aperture produce smaller Airy disks (fig. 16) from the same specimen detail than low-NA objectives. But the better the resolution in the xy direction, the thinner the specimen layer that is in sharp focus at the same time (depth of field), because the resolution also gets better in the z direction. Like higher magnification, higher resolution always creates less depth of field. Also, in most cases an increase in NA means that the objective gets closer to the specimen (less working distance) compared to an objective of lower NA but the same magnification. Therefore, choosing the best objective for your application may not depend only on the resolving power.

Resolution in digital images: is it important?

The next step is to go from the optical image to the digital image. What happens here? The real-world conversion from an optical to a digital image works via the light-sensitive elements of the CCD chips in digital cameras, for example. Or, a video camera sensor may provide voltage signals that are read out and digitised in special frame grabber cards. But what is of more interest here is the principle which lies behind the actual realisations. The optical image is continuous-tone, i.e. it has continuously varying areas of shades and colour tones. The continuous image has to be digitised and quantified; otherwise it cannot be dealt with in a computer. To do so, the original image is first divided into small separate blocks, usually square shaped, which are called pixels. Next, each pixel is assigned a discrete brightness value. The first step is called digital sampling, the second pixel quantisation. Both convert the continuous-tone optical image into a two-dimensional pixel array: a digital image.

Table 3: Optical resolution and number of needed pixels. The number of pixels a 1/2 inch chip should have to meet the Nyquist criterion (2 pixels per feature) and the optimum resolution (3 pixels per feature). The number of lp/mm is given for the projection of the image on the CCD.

Objective    Magnification  NA    Resolution (µm)  Lp/mm (on CCD)  CCD resolution 1/2" Nyquist limit (2 pixels/lp)  CCD resolution 1/2" necessary resolution (3 pixels/lp)
PlanApoN     2              0.08  4.19             119             1526 x 1145                                      2289 x 1717
UPlanSApo    4              0.16  2.10             119             1526 x 1145                                      2289 x 1717
UPlanSApo    10             0.4   0.84             119             1526 x 1145                                      2289 x 1717
UPlanSApo    20             0.75  0.45             111             1420 x 1065                                      2131 x 1598
UPlanSApo    40             0.9   0.37             67              858 x 644                                        1288 x 966
UPlanSApo    100            1.4   0.24             42              534 x 401                                        801 x 601

Fig. 14: What is what on an objective?
Olympus: manufacturer
PlanApo: Plan: flat field correction; Apo: apochromatic correction
60x: linear magnification
1.42 Oil: numerical aperture (needs oil immersion)
∞: infinity-corrected optics (cannot be mixed with finite optics belonging to the 160 mm system)
0.17: cover slip correction; needs a cover slip of 0.17 mm thickness.
Note: Take care with this parameter. There can also be a "0" for no cover slip, a "1" for 1 mm thickness, or a range in mm. Wrong usage of objectives will create foggy images (spherical aberration).
FN 26.5: field number 26.5. (When using an ocular and tube that can provide an FN of 26.5, you may divide this number by the magnification to obtain the diameter of your field of view in mm: 26.5/60 = 0.44 mm.)

Digital resolution: what is it for?

The pixel quantisation of the image intensities depends on the bit depth or dynamic range of the converting system. The bit depth defines the number of grey levels or the range of colour values a pixel can have, and thus determines a kind of brightness or colour resolution. Yet it is the digital sampling which defines the spatial resolution in a digital image. Both spatial and brightness resolution give the image the capability to reproduce fine details that were present in the original image. The spatial resolution depends on the number of pixels in the digital image. At first glance, the following rule makes sense: the higher the number of pixels within the same physical dimensions, the higher the spatial resolution. See the effect of different numbers of pixels on the actual specimen structure in fig. 17. The first image (175 x 175) provides the image information as reasonably expected, whereas specimen details are lost with fewer pixels (44 x 44). With even fewer, the specimen features are masked and no longer visible. This effect is called pixel blocking.
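The quantisation step just described can be sketched in a few lines; this is a minimal illustration where the function name is ours and input values are assumed to lie in [0, 1):

```python
import numpy as np

def quantise(values, bit_depth):
    """Pixel quantisation: map continuous tones in [0, 1) to discrete grey levels."""
    levels = 2 ** bit_depth                      # e.g. 256 levels for 8 bit
    return np.floor(np.asarray(values) * levels).astype(int)

print(quantise([0.0, 0.5, 0.999], 8))   # grey levels 0, 128 and 255
```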



    Is there an optimum digital resolution?

Thus, the number of pixels per optical image area must not be too small. But what exactly is the limit? There should not be any information loss during the conversion from optical to digital. To guarantee this, the digital spatial resolution should be equal to or higher than the optical resolution, i.e. the resolving power of the microscope. This requirement is formulated in the Nyquist theorem: the sampling interval (i.e. the number of pixels) must be equal to twice the highest spatial frequency present in the optical image. In other words: to capture the smallest degree of detail, two pixels are collected for each feature. For high resolution images, the Nyquist criterion is extended to 3 pixels per feature.

To understand what the Nyquist criterion states, look at the representations in figs. 18 and 19. The most critical feature to reproduce is the ideal periodic pattern of a pair of black and white lines (lower figures). With a sampling interval of two pixels (fig. 18), the digital image (upper figure) might or might not be able to resolve the line pair pattern, depending on the geometric alignment of specimen and camera. Yet a sampling interval of three pixels (fig. 19) resolves the line pair pattern under any given geometric alignment: the digital image (upper figure) is always able to display the line pair structure.

With real specimens, 2 pixels per feature should be sufficient to resolve most details. So now we can answer some of the questions above. Yes, there is an optimum spatial digital resolution of two or three pixels per specimen feature. The resolution should definitely not be smaller than this, otherwise information will be lost.

    Calculating an example

A practical example will illustrate which digital resolution is desirable under which circumstances. The Nyquist criterion is expressed in the following equation:

R * M = 2 * pixel size    (4)

R is the optical resolution of the objective; M is the resulting magnification at the camera sensor, calculated as the objective magnification multiplied by the magnification of the camera adapter.

Assume we work with a 10x Plan Apochromat having a numerical aperture NA = 0.4. The central wavelength of the illuminating light is λ = 550 nm, so the optical resolution of the objective is R = 0.61 λ/NA = 0.839 µm. Assume further that the camera adapter magnification is 1x, so the resulting magnification of objective and camera adapter is M = 10x. Now the resolution of the objective has to be multiplied by a factor of 10 to calculate the resolution at the camera: R * M = 0.839 µm * 10 = 8.39 µm. Thus, in this setup, line pairs a minimum distance of 8.39 µm apart can still be resolved; this corresponds to 1/8.39 µm = 119 line pairs per millimetre.

The pixel size is the size of the CCD chip divided by the number of pixels. A 1/2 inch chip has a size of 6.4 mm * 4.8 mm. So the number of pixels a 1/2 inch chip needs to meet the Nyquist criterion with 2 pixels per feature is 1/(R * M) * chip size * 2 = 119 line pairs/mm * 6.4 mm * 2 = 1526 pixels in the horizontal direction. If you want 3 pixels per line pair, the result is 2289 pixels. This calculation can be followed through for different kinds of objectives; please check which numbers of pixels are needed for a 1/2 inch chip in table 3.

Box 3: Multiple Image Alignment (MIA)

Multiple image alignment is a software approach to combine several images into one panorama view while retaining high resolution. Here, the software takes control of the microscope, camera, motor stage, etc. All parameters are transferred to the imaging system through a remote interface. Using this data, the entire microscope and camera setup can be controlled and calibrated by the software. After defining the required image size and resolution, the user can execute the following steps automatically with a single mouse click:

1. Calculation of the required number of image sections and their relative positions

2. Acquisition of the image sections, including stage movement, image acquisition and computation of the optimum overlap

3. Seamless stitching of the image sections with sub-pixel accuracy by intelligent pattern recognition within the overlap areas

4. Automatic adjustment of the overlap areas for differences in intensity

5. Visualisation of the full-view image.

Fig. 15: Intensity profiles of the Airy disk patterns of one specimen detail and of two details at different distances.

Box 4: Using hardware to increase the resolution in fluorescence microscopy

Several hardware components are available to enable the acquisition of images that are not disturbed by out-of-focus blur, for example grid projection, confocal pinhole detection and TIRFM.

Since out-of-focus parts of a specimen produce blur in an image, there is the simple option of eliminating this stray light from the image. With most confocal microscopes, a small pinhole is located in front of the sensor and only those light beams that originate from the focus area can pass through; others are simply absorbed. The resulting point image only contains information from the focus area. To create an image of more than just one point of the focus area, a scanning process is needed.

This scanning process can be performed with the help of spinning disks on an ordinary fluorescence microscope, or by scanning with a laser beam in a confocal laser scanning microscope (cLSM) set-up.

Each system produces different levels of improvement to the resolution. However, all of them need a professional digital sensor system to display the images.

TIRFM (Total Internal Reflection Fluorescence Microscopy) uses a completely different concept. With this method, a very thin layer of the specimen (around 200 nm) is used to create the image. Therefore, TIRFM is ideal for analysing e.g. single-molecule interactions or membrane processes. To achieve this, a light beam is directed at a critical angle towards the cover slip.

Because of the higher refractive index of the cover slip compared to the specimen, total internal reflection occurs. This means that almost no direct light enters the specimen, but, due to the physics of light, a so-called evanescent wave travels in the specimen direction. This wave is only strong enough to excite fluorochromes within the first few hundred nanometres close to the cover slip. The fluorescent image is restricted to this small depth; it cannot reach deeper areas of the specimen, and also does not contain out-of-focus blur from deeper areas. (More information about TIRF can be found at www.olympusmicro.com/primer/techniques/fluorescence/tirf/tirfhome.html.)

Fig. 16: Airy disk patterns of different size as an example of the resolving power of low NA (left) and high NA (right) objectives.

Fig. 17: Four representations of the same image, with different numbers of pixels used. The number of pixels is written below each image.
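The worked example and the Table 3 entries can be reproduced with a short script; the function name is ours, while the chip dimensions (6.4 mm x 4.8 mm for a 1/2 inch chip) and the 550 nm mid-wavelength are taken from the text:

```python
def required_pixels(magnification, na, chip_mm=(6.4, 4.8),
                    wavelength_um=0.550, pixels_per_lp=2):
    """Pixels a chip needs so that digital sampling keeps the optical resolution."""
    r_um = 0.61 * wavelength_um / na                  # optical resolution, equation (2)
    lp_per_mm = 1000.0 / (r_um * magnification)       # line pairs per mm on the chip
    return tuple(round(side * lp_per_mm * pixels_per_lp) for side in chip_mm)

print(required_pixels(10, 0.4))                    # (1526, 1145), the Nyquist row of Table 3
print(required_pixels(100, 1.4, pixels_per_lp=3))  # (801, 601)
```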

What might be astonishing here is the fact that the higher the magnification, the fewer pixels the chip of a CCD camera needs! Working with a 100x Plan Apochromat objective combined with a 1/2 inch chip, we need just 800 x 600 pixels to digitally resolve even the finest optically distinguishable structure. The higher number of pixels of about 2300 x 1700 is necessary only at lower magnifications up to 10x.

    Which camera to buy?

The resolution of a CCD camera is clearly one important criterion for its selection. The resolution should be optimally adjusted to your main imaging objective. Optimum resolution depends on the objective and the microscope magnification you usually work with. The camera should have a minimum number of pixels so as not to lose any optically achieved resolution, as described above. But the number of pixels also should not be much higher, because the number of pixels is directly correlated with the image acquisition time: the gain in resolution is paid for by a slow acquisition process. The fastest frame times of digital cameras working at high resolution can go up to 100 milliseconds, or even reach the second range, which can become a practical disadvantage in daily work. Furthermore, unnecessary pixels obviously need the same amount of storage capacity as necessary pixels. For example, a 24 bit true-colour image consisting of 2289 x 1717 pixels has a file size of almost 12 MB if no compression method is applied. The slow frame rate and the file size are just two aspects which demonstrate that the handling of high resolution images becomes increasingly elaborate.
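The quoted file size is simple arithmetic, three bytes per pixel for 24 bit colour:

```python
# Storage needed for an uncompressed 24 bit true-colour image: 3 bytes per pixel.
width, height = 2289, 1717
size_bytes = width * height * 3
print(f"{size_bytes / 1e6:.1f} MB")   # 11.8 MB, i.e. almost 12 MB
```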

High resolution over a wide field of view

The desired xy resolution is one aspect which makes an image high quality. Another, partially contradicting aspect is that we usually want to see the largest possible fraction of the sample, in the best case the whole object under investigation. Here, the microscope's field of view becomes important. The field of view is given by a number, the so-called field number (FN); e.g. if the microscope is equipped for field number 22 and a 10x magnifying objective is in use, the diagonal of the field of view (via eyepieces and a tube that support the given FN) is 22/10 = 2.2 mm. Using lower magnifying objectives will enable a larger field of view, and using large-field tubes and oculars can enlarge the field number to 26.5 (e.g. field number 26.5 and a 4x objective allow a diagonal of 26.5/4 = 6.625 mm). But unfortunately, low magnifying objectives cannot reach the same maximum NA as high magnifying objectives. Even when we use the best resolving lens for a 4x objective, the NA of this Apochromat (0.16) is still lower than that of the lowest resolving 20x Achromat with an NA of 0.35, and a lower NA produces a lower resolution. In a number of applications, a field of view of several millimetres and a resolution on the micrometre or nanometre scale are required simultaneously (fig. 20). How can we overcome this problem, especially when we recognise that the CCD sensor of the camera reduces the field of view (monitor image) once more? Some image processing can offer an alternative. In the first step, these systems automatically acquire individual images at the predefined high resolution. In the next step, the software performs intelligent pattern recognition together with a plausibility check on the overlapping parts of the individual image sections, to align them all into one image with excellent accuracy (better than a pixel). The computed result shows one combined image, maintaining the original resolution. So you get an image which

    Box 5: Changing objectives but keeping the digital live image in focus

    Parfocal alignment.

High-class objectives are designed to be parfocal: even when you change from a 4x magnifying objective to a 40x objective, the structure under observation remains in focus. This is especially needed for imaging with automated microscopy. The following short guideline will help you use this feature at the microscope:

We assume that the microscope in use is properly set for Köhler illumination, and that the digital camera is connected via a camera adapter (C-mount) that allows focus alignment.

    (Some adapters are designed with a focusing screw; some can be fixed with screws in different distance posi-tions.)

1. Use a high magnifying objective (40x or more) to bring a well recognisable detail of the specimen into focus (camera live image), with the help of the coarse and fine focus of the microscope frame.

2. Change to a low magnifying objective (e.g. 4x). Do not change the coarse or fine focus at the microscope; instead, align the focus of the camera adapter until the camera live image is sharply in focus.

3. Change back to high magnification: the image is still in focus. You do not believe it? Have a try.

Fig. 18: Line pair pattern achieved with 2 pixels per line pair. Please refer to the text for the description.

Fig. 19: Line pair pattern resolved with 3 pixels per line pair. Please refer to the text for the description.
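The difference between figs. 18 and 19 can be reproduced numerically. Averaging a bright/dark line-pair pattern over square pixels shows that with 2 pixels per line pair an unlucky half-pixel phase wipes out the contrast completely, whereas with 3 pixels per line pair contrast survives at every phase (a minimal sketch; parameter names are mine):

```python
def sampled_contrast(pair_width_px, phase, n_pixels=12, oversample=1000):
    """Average a bright/dark line-pair pattern over each camera pixel
    and return the modulation (max - min) of the pixel values."""
    values = []
    for i in range(n_pixels):
        bright = 0
        for j in range(oversample):
            x = i + phase + j / oversample          # position on the sensor
            if (x / pair_width_px) % 1.0 < 0.5:     # bright half of the pair
                bright += 1
        values.append(bright / oversample)
    return max(values) - min(values)

# 2 px per line pair: the result depends entirely on where the pattern falls
print(sampled_contrast(2, 0.0))   # pixels line up with the pattern: full contrast
print(sampled_contrast(2, 0.5))   # half-pixel offset: contrast vanishes
# 3 px per line pair: contrast survives at every phase
print(min(sampled_contrast(3, p / 20) for p in range(20)))
```

This is the practical reason behind the rule of thumb of sampling each optically resolved line pair with three camera pixels rather than two.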

Fig. 20: Mr Guido Lünd, Rieter AG, Department Werkstoff-Technik DTTAM, Winterthur, Switzerland, works on fibres using a microscope. For their investigations they often have to glue single images into an overview image. Previously this was a considerable challenge; now a piece of software takes over this job.

has both high resolution and a large field of view (fig. 21). Please check box 3 for the description of the image processing procedure. Additionally, box 7 describes its sophisticated extension, digital virtual microscopy.
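The vendor's alignment algorithm is not disclosed, but the core task of registering two overlapping tiles can be sketched with phase correlation, a standard technique for finding the translation between equally sized image patches (function names are mine; sub-pixel refinement and blending are omitted):

```python
import numpy as np

def phase_correlation_shift(tile, reference):
    """Estimate the integer (dy, dx) translation of `tile` relative to
    `reference` from the peak of the normalised cross-power spectrum."""
    F_tile = np.fft.fft2(tile)
    F_ref = np.fft.fft2(reference)
    cross = F_tile * np.conj(F_ref)
    cross /= np.abs(cross) + 1e-12            # keep phase, discard amplitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # peaks in the far half of the array correspond to negative shifts
    if dy > tile.shape[0] // 2:
        dy -= tile.shape[0]
    if dx > tile.shape[1] // 2:
        dx -= tile.shape[1]
    return int(dy), int(dx)

# a tile differing from the reference by a pure translation is recovered exactly
rng = np.random.default_rng(0)
reference = rng.random((64, 64))
tile = np.roll(reference, (3, 5), axis=(0, 1))
print(phase_correlation_shift(tile, reference))   # (3, 5)
```

A full stitching system would apply this to the overlap regions of neighbouring tiles, check the results for plausibility, and then blend the aligned tiles into one montage.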

Physical limits and methods to overcome them

As described before, the resolution of a microscope objective is defined as the smallest distance between two points on a specimen that can still be distinguished as two separate entities. Several limitations influence this resolution. In the following we cover the influence of image blurring and of the limited depth of field.

    Convolution and deconvolution

Stray light from out-of-focus areas above or below the focal plane (e.g. in fluorescence microscopy) causes glare, distortion and blurriness within the acquisition (fig. 22a). These image artefacts are known as convolution, and they limit one's ability to assess images quickly, as well as to make more extensive evaluations. There are several, sometimes sophisticated, hardware approaches in use which reduce or even avoid out-of-focus blur in the first place, when acquiring the images; please see the description in box 4. A different approach is the so-called deconvolution, a recognised mathematical method for eliminating these image artefacts after image acquisition. If the point spread function (PSF) is known, it is possible to deconvolute the image. This means the convolution is mathematically reversed, resulting in a reconstruction of the original object. The resulting image is much sharper, with less noise and at higher resolution (fig. 22b).

What exactly is the point spread function?

The point spread function is the image of a point source of light from the specimen, projected by the microscope objective onto the intermediate image plane; i.e. the point spread function is represented by the Airy disk pattern (figs. 15, 16). Mathematically, the point spread function is the Fourier transform of the optical transfer function (OTF), which is in general a measure of the microscope's ability to transfer contrast from the specimen to the intermediate image plane at a specific resolution. The PSF (or OTF) of an individual objective or lens system depends on the numerical aperture, the objective design, the illumination wavelength, and the contrast mode (e.g. brightfield, phase, DIC).

The three-dimensional point spread function

The microscope imaging system spreads the image of a point source of light from the specimen not only in two dimensions: the point appears widened into a three-dimensional contour. More generally speaking, the PSF of a system is therefore the three-dimensional diffraction pattern generated by an ideal point source of light. The three-dimensional shapes of

Fig. 21: The analysis of non-metallic inclusions according to DIN, ASTM and JIS requires the processing of areas up to 1000 mm², while the sulphide and oxide inclusions to be analysed are themselves less than micrometres wide. No camera is available offering such a large CCD sensor. The image processing solution stitches single overlapped images together to give one high-resolution image.

Fig. 22: Via deconvolution, artefacts can be computed out of fluorescence images. a) These artefacts are caused by stray light from non-focused areas above and below the focus level. These phenomena, referred to as convolution, result in glare, distortion and blurriness. b) Deconvolution is a recognised mathematical procedure for eliminating such artefacts. The resulting image is sharper with less noise and thus at higher resolution. This is also advantageous for more extensive analyses.


the PSF create the so-called out-of-focus blur, which reduces resolution and contrast in images, e.g. in fluorescence microscopy. This blurring or haze comes from sections within the specimen which are outside the focal plane of the actual image. An image from any focal plane of the specimen therefore contains blurred light from points located in that plane, mixed with blurred light from points originating in other focal planes (fig. 23).

    What is deconvolution used for?

When the PSF of a system is known, it can be used to remove the blurring present in the images. This is what deconvolution does: it is an image processing technique for removing out-of-focus blur from images. The deconvolution algorithm works on a stack of images, which are optical sections through the specimen recorded along the z-axis of the microscope. The algorithm calculates the three-dimensional PSF of the system and reverses the blurring present in the image. In this respect, deconvolution attempts to reconstruct the specimen from the blurred image (fig. 23).
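Real deconvolution packages work iteratively on the full 3-D stack; the principle of inverting a known convolution can nevertheless be sketched in 2-D with a Wiener-type frequency-domain filter (my own minimal illustration: it assumes a noise-free image, periodic boundaries and an exactly known PSF):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-4):
    """Reverse a known convolution in the Fourier domain.
    `psf` is centred in an array of the same shape as the image;
    k damps frequencies where the PSF transmits almost nothing
    (a crude stand-in for the regularisation real packages use)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))

# toy demonstration: blur a checkerboard with a Gaussian PSF, then restore it
n = 64
y, x = np.mgrid[:n, :n]
image = ((x // 8 + y // 8) % 2).astype(float)
psf = np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / (2 * 1.0 ** 2))
psf /= psf.sum()
H_blur = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * H_blur))
restored = wiener_deconvolve(blurred, psf)
print(np.abs(blurred - image).mean(), np.abs(restored - image).mean())
```

With real, noisy data the naive inverse amplifies noise at frequencies where the PSF is weak, which is why practical algorithms are iterative and constrained rather than a single division in Fourier space.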

    Depth of focus versus depth of field

Two terms, depth of focus and depth of field, are often used to describe the same optical performance: the amount of specimen depth that can be seen in focus at the same time. However, only the term depth of field should be used for this feature. As we will point out later, the term depth of focus is needed for a different optical feature.

An optical system which focuses light, such as the eye, will generally produce a clear image at a particular distance from the optical components. In a camera, the ideal situation is where a clear image is formed on the chip; in the eye, it is where a clear image is formed on the retina. For the eye, this happens when the length of the eye matches its optical power, so that a distant object is in focus when the eye is relaxed. If there is a difference between power and length in such a situation, the image formed on the retina will be very slightly out of focus. However, such a discrepancy may be small enough that it is not noticed; there is thus a small amount of slop in the system, such that a range of focus is considered acceptable. This range is termed the depth of focus of the eye. Looking at it the other way round, the eye might be precisely in focus for a particular distance, for example an object one metre away. However, because of the slop in the system, objects 90 cm and 110 cm away may also be seen clearly. In front of and behind the precise focal distance there is a range where vision is clear, and this is termed the depth of field.

Now consider the physical understanding of depth of field and depth of focus in microscopy. Depth of field d_fi in a microscope is the area in front of and behind the specimen that will be in acceptable focus. It can be defined as the distance from the nearest object plane in focus to the farthest plane simultaneously in focus. This value describes the range of distance along the optical axis in which the specimen can move without the image appearing to lose sharpness. Mathematically, d_fi is directly proportional to

d_fi ~ λ / (2 · NA²)    (5)

d_fi = depth of field
λ = wavelength of light (emission)
NA = numerical aperture

d_fi obviously depends on the resolution of the microscope. Large lenses with short focal length and high magnification will have a very short depth of field; small lenses with long focal length and low magnification will be much better. Depth of focus d_fo in a microscope is the distance above and below the image plane over which the image appears in focus: the extent of the region around the image plane in which the image appears sharp. d_fo is directly proportional to

d_fo ~ M² / NA

d_fo = depth of focus
M = magnification
NA = numerical aperture

d_fo refers to the image space and depends strongly on the magnification M, but also on changes in the numerical aperture NA.

As a take-home message: high resolution creates a relatively low depth of field, while high magnification creates a higher depth of focus. This is also why the procedure for parfocal alignment works.
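Both proportionalities can be wrapped into a small calculator. Note that relation (5) yields absolute units once a wavelength is chosen, whereas the depth-of-focus relation is only a proportionality, so its values are merely relative (function names are mine):

```python
def depth_of_field_um(wavelength_nm, na):
    """Depth of field, d_fi ~ lambda / (2 * NA^2), in micrometres."""
    return (wavelength_nm / 1000) / (2 * na ** 2)

def depth_of_focus_relative(magnification, na):
    """Depth of focus, d_fo ~ M^2 / NA (proportionality: relative units)."""
    return magnification ** 2 / na

# green emission at 550 nm:
print(depth_of_field_um(550, 1.40))   # high-NA oil objective: ~0.14 um
print(depth_of_field_um(550, 0.16))   # 4x/0.16 objective: ~10.7 um
# depth of focus grows strongly with magnification
print(depth_of_focus_relative(40, 0.65) / depth_of_focus_relative(4, 0.16))
```

The two printed depths of field differ by almost two orders of magnitude, which is exactly the trade-off between resolution and depth of field discussed in the text.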

Parfocality

There is a well-known difficulty in conventional light microscopy which refers to the limited depth of focus: if you focus

Fig. 23: Stray light originating from areas above and below the focal plane results in glare, distortion and blurriness (convolution), especially in fluorescence microscopy and histology. Deconvolution is a recognised mathematical method for correcting these artefacts. The degree to which an image is distorted is described by what is known as the point spread function (PSF). Once this is known, it becomes possible to deconvolute the image. This means that the convolution of the image is mathematically reversed and the original contours of the specimen are reconstructed. The greater the precision with which the degree of distortion, i.e. the PSF, is known, the better the result. What is this result? A sharper and noise-free version of the image at higher resolution, with enhanced image quality.

Fig. 24: The software extracts the focused areas from the component images of an image series and reassembles them into one infinitely sharp image. The example shown here is the resulting image computed automatically from 25 images of a Bembidion tetracolum sample.


an image with one objective, e.g. 4x, it might be out of focus when you change to another magnification, e.g. 40x. The term parfocality describes the situation where the structures remain in focus. Parfocality is a characteristic of the objectives; please read box 5 on how to assure parfocality with high-class objectives.

    Automated sharp images

Let us come back to the restrictions arising from the limited depth of field: the better the lateral resolution, the smaller the depth of field will be. The problem is in fact physical and cannot be circumvented by any adjustment of the optical system. Today's standard light microscopes allow objects to be viewed at a maximum magnification of about 1000x. The depth of field is then reduced to about 1 µm, and only within this range is the specimen perfectly imaged. This physical limitation of inadequate depth of field is a familiar problem in microscope acquisitions. In the metal-processing industry, when analysing or evaluating two-dimensional metallographic objects such as a section sample, a section press is generally used in order to obtain an exact orthogonal alignment in relation to the optical axis of the microscope. The sample to be analysed can then be acquired totally in focus in a single acquisition.

However, for objects which have a distinctly three-dimensional structure, or for investigations of, e.g., 3-D wear, no satisfactory overview image is obtainable at greater magnifications via a stereo or reflected-light microscope. The microscope can only be focused onto limited areas of the object. Due to the limited depth of field, it is actually impossible to obtain a sharp acquisition of the entire image field. These physical restrictions can only be transcended via digital image analysis; the normally wholly binding laws of physics are side-stepped. What happens is that an image series is acquired at varying focus levels. Special software algorithms are then applied to all images of the series to distinguish between sharply focused and unfocused image segments in each image. The sharp image segments are then pieced back together to form a single, totally focused image of the entire sample (fig. 24). Furthermore, measuring height differences and generating three-dimensional images become feasible (fig. 25). For further detail please see box 6.

Fig. 25: This image shows how far money can go: this Singaporean coin detail was captured using a ColorView digital CCD camera and processed using the realignment, extended focus and 3-D imaging software modules. Capture and automatically align multiple component images into a high-resolution composite using the MIA module. Extract the sharpest details within the component images and reassemble them into one single image with infinite depth of focus using the EFI module. Finally, create incredibly realistic views using height and texture information obtained through the perspective functions of the 3D module.

    Box 7: Virtual microscopy

Light microscopy is one of the classic imaging techniques used in medical education and for routine procedures of pathology, histology, physiology and embryology. Pathology makes use of microscopes for the diagnostic investigation of tissue samples to determine abnormal changes. Digitisation has resulted in significant progress in the field of microscopy. However, digital technology up until now has presented some decisive limitations. One problem is that the field of view of any camera is limited at any given magnification. It is usually not possible to have a complete overview of the tissue specimen in just one image at a resolution that allows further analysis. Digital virtual microscopy moves beyond this barrier.

Virtual microscopy is the digital equivalent of conventional microscopy. Instead of viewing a specimen through the eyepiece of the microscope and evaluating it, a virtual image of the entire slide (a virtual slide) with perfect image quality is displayed on the monitor. The individual system components (microscope, motor stage, PC, software) are all optimally inter-coordinated and offer speed, precision and reliability of use. The Olympus solution .slide scans the entire slide at the resolution required. Integrated focus routines make sure the image is always in sharp focus. The single images acquired are automatically stitched together into a large montage (the virtual slide). The entire virtual slide can be viewed onscreen. Detail image segments can be selected and zoomed in or out, the equivalent of working with an actual glass slide under the microscope, with the same efficiency. With an internet connection, this procedure can be done from anywhere in the world. In addition, users have all the advantages of digital image processing at their fingertips, including structured web archiving of images, analysis results and documentation.

Box 6: Extended Focal Imaging (EFI)

A lack of depth of field in microscope images is an old and familiar problem. The microscope's own depth of field is only capable of focusing a limited height range at one time; the remaining parts of the image are then blurry. Electronic image processing points the way out of this dead-end street.

For a motorised microscope equipped with a motorised stage, the whole process can be automated. The software takes control of the microscope, camera, motor stage, etc. All parameters are transferred to the imaging system through a remote interface. Using these data, the entire microscope and camera set-up can be controlled and calibrated by the software. After having defined the total number of images and the maximum and minimum height of the stage, the user can execute the following steps automatically with a single mouse click.

1. In the first step the user defines the number of images in the focus series. Using a microscope with a motor stage, the user has to define the maximum and the minimum lift of the microscope stage as well as the total number of individual images to be acquired.

    2. The defined image series at varying focus levels is acquired.

3. Next, the composite image is generated with pixel-by-pixel precision. This is done by extracting the respective focused image areas of each separate image and assembling these into a focused composite image. Most of these solutions take into consideration the typical construction-related shift of the optical axis that occurs when focusing stereo microscopes.

4. The extended focal image, with practically limitless depth of field, is computed.

5. Now the user is able to generate a height map which permits the reconstruction of three-dimensional views, e.g. to measure height differences.
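The sharp/unsharp segmentation in steps 2 to 4 can be sketched with a simple per-pixel sharpness measure: local deviation from a box-smoothed copy. This is my own minimal criterion; commercial EFI implementations use more elaborate measures and also correct the optical-axis shift mentioned in step 3:

```python
import numpy as np

def local_sharpness(img, radius=2):
    """Local contrast: squared deviation from a box-blurred copy."""
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode="edge")
    # box blur via a padded 2-D cumulative sum (integral image)
    c = np.pad(pad.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    box = (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / k ** 2
    return (img - box) ** 2

def extended_focus(stack):
    """Per pixel, take the value from the slice that is sharpest there."""
    sharp = np.stack([local_sharpness(s) for s in stack])
    best = sharp.argmax(axis=0)
    rows, cols = np.indices(best.shape)
    return np.stack(stack)[best, rows, cols]

# demo: two slices, each sharp in a different half of the field
n = 16
yy, xx = np.mgrid[:n, :n]
checker = ((xx + yy) % 2).astype(float)   # a "perfectly focused" texture
a = np.full((n, n), 0.5)
a[:, : n // 2] = checker[:, : n // 2]     # slice a: sharp on the left
b = np.full((n, n), 0.5)
b[:, n // 2 :] = checker[:, n // 2 :]     # slice b: sharp on the right
fused = extended_focus([a, b])            # the sharp halves are recombined
```

The same selection map (`best`) is essentially the height map of step 5: each pixel records which focus level, and therefore which stage height, was sharpest.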


    Contrast and Microscopy



    All cats are grey in the dark

Why are cats grey in the dark? Here the term contrast comes into play. Contrast refers to the difference of intensities or colours within an image. Details within an image need intensity or colour differences to be recognised from the adjacent surroundings and the overall background. Let us first restrict our view to a greyscale image like fig. 26. While watching the image, try to estimate the number of grey levels that you can distinguish, and keep that number in mind.

Let us now compare this number with another example: you enter an office shop in order to buy a selection of card samples in different shades of grey. All of the cards accidentally fall down and are scattered over the floor. Now you have to replace them in the correct grey-level order: how many can you differentiate now?

Surprisingly, we are only able to differentiate approximately 50 to 60 grey levels. That means that even the 8-bit image on your monitor, with its 256 grey levels, offers a finer differentiation than we can discriminate with our own eyes. The card samples in various shades of grey need an approximate difference in contrast level of about 2 % for us to recognise them as different. However, if we look at the number of image areas (pixels) that represent a discrete intensity (grey level or pixel intensity) within the intensity distribution of the image, we can understand and handle contrast and brightness variations more easily. Intensity or grey-level distributions are referred to as histograms; see box 8 for further explanation. These histograms allow us to optimise camera and microscope settings so that all the intensity values available within the specimen are acquired (fig. 27). Intensity values that are not captured at the initial stage, before image acquisition, cannot be recovered in later processing steps.
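A grey-level histogram of this kind is simple to compute, and a quick check of its extremes reveals clipping before an image is archived (a sketch in pure Python; the 1 % clipping threshold is my own illustrative choice, not a value from the text):

```python
def histogram(pixels, levels=256):
    """Count how many pixels fall on each grey level."""
    counts = [0] * levels
    for p in pixels:
        counts[p] += 1
    return counts

def exposure_ok(counts, clip_fraction=0.01):
    """Flag clipping: too many pixels piled up at pure black or white."""
    total = sum(counts)
    return counts[0] < clip_fraction * total and counts[-1] < clip_fraction * total

# a frame stuck at pure white fails the check
print(exposure_ok(histogram([255] * 1000)))          # False
print(exposure_ok(histogram([40, 128, 200] * 100)))  # True
```

Camera software typically shows exactly this kind of live histogram so that exposure and illumination can be adjusted before acquisition.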

The familiar view – brightfield contrast

In brightfield transmitted-light microscopy, the contrast of the specimen is mainly produced by the different absorption levels of light, either due to staining or by pigments that are inherent to the specimen (amplitude objects). With a histological specimen, for example, the staining procedure itself can vary the contrast levels that are available for imaging (fig. 27). Nevertheless, the choice of appropriate optical equipment and correct illumination settings is vital for the best contrast.

During our explanation of Köhler alignment (see box 2), we described that at the end of the procedure the aperture stop should be closed to approximately 80 % of the numerical aperture (NA) of the objective. This setting achieves the best contrast for our eyes. Further reduction of the aperture stop will introduce an artificial appearance and low resolution to the image.

For documentation purposes, however, the aperture stop can be set to the same level as the NA of the objective, because the camera sensors in use are capable of handling many more contrast levels than our eyes. For specimens lacking natural differences in internal light absorption, like living cells (phase objects; fig. 28) or

    Fig. 26: Vitamin C crystals observed in polarisedlight.

Fig. 27: Histological staining. Cells obtained by transbronchial needle aspiration. The image and histogram show that the overall intensity and contrast have been optimised.


reflected-light microscopy specimens without significant three-dimensional relief structures, the acquired image has flat contrast. To better visualise the existing image features (fig. 28, original image), subsequent digital contrast optimisation procedures may be applied (fig. 28, improved image). But to visualise more details in those specimens, optical contrast methods must be used.

Like stars in the sky – Darkfield Contrast

Dust in the air is easily visible when a light beam travels through a darkened room. The visibility is only achieved because the dust particles diffract and/or reflect the light, and this light then travels in all directions.

Therefore, we can see light originating from the particle in front of a dark background, even when the particle itself is too small to be resolved or does not show appropriate contrast under daylight conditions. This phenomenon is also used in darkfield (or dark ground) microscopy. Light is directed to the specimen in such a way that no direct light enters the objective. If there is no light-scattering particle, the image is dark; if there is something that diffracts or reflects the light, those scattered beams can enter the objective and are visible as brig