3-D Imaging using Optical Coherence Radar


UNIVERSITY OF KENT AT CANTERBURY

3-D IMAGING USING OPTICAL COHERENCE RADAR

a thesis submitted to
The University of Kent at Canterbury
in the subject of physics
for the degree of doctor of philosophy.

By Mauritius Seeger

December 1997

© Copyright 1997 by Mauritius Seeger

Contents

List of Tables vi
List of Figures ix
Abstract x
Acknowledgements xii

1 Three Dimensional Imaging Techniques 1
1.1 Introduction 1
1.2 Non-Optical 3-D Measurements 1
1.2.1 Stylus Scanning 1
1.2.2 NMR and CT 2
1.2.3 Ultrasound 3
1.3 Optical Techniques 3
1.3.1 Stereo Pair Imaging 3
1.3.2 Confocal Microscopy 4
1.3.3 Fringe Projection Techniques 7
1.4 Optical Interferometry 7
1.4.1 Two Wavelength 7
1.4.2 Electronic Speckle Pattern Interferometry 8
1.4.3 Low-Coherence Interferometry 8
1.4.4 Channelled Spectrum 11
1.5 Interference Detection using a CCD Detector 11
1.5.1 Automated Phase Measurement Microscopy 13
1.5.2 CCD Based Low-Coherence Interferometry 16
1.6 Summary 17

2 Surface Topography using Coherence Radar 20
2.1 Introduction 20
2.2 Principles of Coherence Radar 20
2.2.1 Phase Stepping 22
2.2.2 Surface Finding 23
2.3 Experimental System 23
2.3.1 Michelson Interferometer 24
2.3.2 Imaging Optics 25
2.3.3 Translation Devices 26
2.4 Data Processing 26
2.5 Surface Profile Measurements 27
2.6 Noise Thresholding and Surface Interpolation 30
2.7 Evaluation of Noise Thresholding 31
2.8 Analysis of Hypervelocity Impact Craters 33
2.9 Noise 42
2.9.1 Phase Stepping Error 42
2.9.2 PZT Hysteresis 44
2.9.3 Vibrational Noise 47
2.9.4 Image Noise 47
2.10 Accuracy of Surface Location 49
2.11 Empirical Evaluation of Accuracy 51
2.12 Conclusion 51

3 Imaging of Multiple Reflecting Layers 54
3.1 Introduction 54
3.2 Theoretical Considerations 55
3.2.1 Resolving Multiple Layers 55
3.2.2 Simulation of Signal Strength from a Multilayer Object 57
3.2.3 Effect of the Object Medium on the Measurement 62
3.3 Experimental 65
3.3.1 Method 65
3.3.2 Investigation of 20 Glass Plates 65
3.3.3 Solar Cell 69
3.4 Conclusion 69

4 In Vitro Imaging of the Human Ocular Fundus 74
4.1 Introduction: Properties of the Human Fundus 74
4.1.1 The Human Eye 74
4.1.2 Human Fundus Sample and Tissue Preparation 76
4.1.3 Optical Properties of the Eye 78
4.1.4 Light Scattering in Biological Tissue 79
4.1.5 Illumination Wavelength 79
4.2 Signal Processing 80
4.3 Experimental 81
4.3.1 Coherence Radar 81
4.3.2 Coherence Profile Broadening through Dispersion 81
4.3.3 Fundus Imaging of a Model Eye 84
4.3.4 In Vitro Examination of Fundus Layers 86
4.4 Discussion 87
4.4.1 Data Acquisition and Processing Speed 87
4.4.2 Speed Optimisation 91
4.4.3 OCT versus CCD Based Interferometry 93
4.5 Conclusion 94

5 Balanced Detection 95
5.1 Introduction 95
5.2 Balanced Detection 95
5.3 Dynamic Range 97
5.4 Experimental System 99
5.5 Data Processing 99
5.6 Experimental Results 102
5.7 Conclusion 107

6 Conclusion 109
6.1 Summary 109
6.2 Conclusion 110
6.3 Future Work 111

A Digital Imaging System 112
A.0.1 CCD Sensor 112
A.1 CCD Camera 112
A.1.1 The TM520 Video Camera 113
A.1.2 The Thomson Linescan Camera 113
A.2 Frame Grabber 113
A.2.1 The Bit Flow Frame Grabbers 113
A.3 Noise 114
A.4 Sensitivity 116

B Publications Arising from this Thesis 119
B.1 Refereed Journal Papers 119
B.2 Conference Papers 119

Bibliography 121

List of Tables

2.1 Values of approximate positional error based on interference amplitude error 50
4.1 Comparison of the current imaging hardware (see also appendix A) with commercially available high performance components 92
4.2 Potential acquisition speed for longitudinal and transverse sections when using 10 intensity samples (n=10) 93
A.1 Digital imaging system performance determined experimentally at high gain setting 117

List of Figures

1.1 Stereo pair imaging 4
1.2 Confocal system 5
1.3 Formation of Moire fringes 6
1.4 Michelson interferometer 9
1.5 Interference obtained with a low-coherence source 10
1.6 Transverse scanning in a fiberised low-coherence reflectometer 12
1.7 Configuration of interference microscopes 15
1.8 Overview of three dimensional measurement techniques 18
2.1 Coherence Radar experimental arrangement 21
2.2 The Coherence Radar experimental system 23
2.3 Coherence function of the super-luminescent diode at a bias current of 139 mA 24
2.4 Telecentric telescope 26
2.5 Flow chart of data acquisition and hardware control 28
2.6 5 pence coin (the rulings shown are 0.5 mm) 29
2.7 Surface topography of a 5 pence coin; depth is indicated by colour (scale in µm) 29
2.8 Profile cross-section at position indicated by dashed line in figure 2.7 30
2.9 Surface of hemispherical crater, depth indicated by colour (microns) 32
2.10 Surface profile of hemispherical crater after thresholding. The central spike is a remaining rogue point. 32
2.11 Surface with missing points interpolated 33
2.12 Three dimensional representation of crater 1 (head-on impact) 35
2.13 Surface topography of crater 1 (head-on impact) 35
2.14 Cross section showing surface profile of crater 1 (position indicated in figure 2.13) 36
2.15 Photograph of crater 2 resulting from a head-on impact (the rulings shown are 0.5 mm) 36
2.16 Surface topography of crater 2 (head-on impact) - compare to photograph in figure 2.15 37
2.17 Surface topography of crater 3 (impact 70° to normal) 37
2.18 Surface topography of crater 4 (impact 70° to normal) 38
2.19 Zernike fit of crater 1 40
2.20 Zernike fit of crater 2 40
2.21 Zernike fit of crater 3 41
2.22 Zernike fit of crater 4 41
2.23 Numerical simulation of low-coherence interferogram 43
2.24 Error in demodulating Gaussian interference amplitude 43
2.25 Hysteresis of the PZT material 45
2.26 Amplitude error as a result of PZT hysteresis 46
2.27 Numerical simulation of interference in the presence of mechanical vibrations 46
2.28 Distribution of amplitude error 47
2.29 Relationship between image noise (σccd) and the resultant amplitude error (σA) 48
2.30 Peak search: relationship between amplitude error (Ea) and position error (Ed) 50
2.31 Interference amplitude vs. depth along a line of 512 pixels, showing measured surface position 52
2.32 RMS deviation of surface position from line of best fit 52
3.1 Interfaces separated by ∆d = 11λ 55
3.2 Interfaces separated by ∆d = 11λ + λ/8 56
3.3 Interfaces separated by ∆d = 11λ + λ/4 56
3.4 Model of multilayer object composed of many identical glass plates 58
3.5 Interference amplitude versus interface number in a stack of 100 glass plates 60
3.6 Dynamic range required to detect the interference signal from interface j in a stack of 100 glass slides (200 interfaces) 61
3.7 Focal plane shift caused by refractive object medium 63
3.8 Interferogram of first 8 glass plates in a stack of 20 66
3.9 Average of interference amplitude versus depth (the amplitude is calculated as an average of 10 neighbouring pixels) 67
3.10 Log of maximum interference amplitude, Ae(j), versus interface number, j 68
3.11 Extraction of a cross-sectional image from a set of transverse images 70
3.12 Image of the Hubble Space Telescope solar cell showing the position of the extracted cross section relative to the impact site 71
3.13 Tomographic image of solar cell (geometric distance is given as µm in parenthesis) 72
3.14 Schematic view of solar cell cross-section 73
4.1 Anatomy of the human eye (refractive index shown in parentheses) 75
4.2 Schematic representation of the fundus layers 76
4.3 Cross section of the fundus tissue container 77
4.4 Stainless steel sample container 77
4.5 Fundus tissue in the sample container (scale graduation = 0.5 mm) 78
4.6 False path interpretation due to photon scattering in a diffusive medium 80
4.7 Plot of interference amplitude for dispersive and non-dispersive paths 82
4.8 Experimental arrangement to avoid strong back-reflections at the air-glass boundary 83
4.9 Orientation of longitudinal and transversal sections relative to the eye 84
4.10 Experimental arrangement for in vivo imaging using Coherence Radar 85
4.11 Experimental arrangement to image a model eye using corrective optics 85
4.12 Longitudinal section of the model fundus 86
4.13 Post mortem fundus tissue showing the approximate position of longitudinal sections obtained using Coherence Radar 87
4.14 Longitudinal section (1) of post mortem fundus tissue 88
4.15 Longitudinal section (2) of post mortem fundus tissue 89
4.16 Operations performed by Coherence Radar 90
5.1 Mach-Zehnder interferometer 96
5.2 Experimental Arrangement implementing a balanced Coherence Radar technique 100
5.3 Data Processing 101
5.4 Intensity variation along the CCD line-scan sensor 102
5.5 Remaining intensity variation after subtraction of signals from CCD line-scan 1 and 2 103
5.6 Anatomy of step structure 103
5.7 Interference produced by a flat mirror 104
5.8 Interference amplitude 105
5.9 Interference amplitude and surface profile (white line) of periodic step 106
5.10 Interference amplitude peaks produced by air-glass reflections in a stack of 20 glass plates 107
A.1 Noise distribution at maximum gain 114
A.2 Noise distribution at minimum gain 115
A.3 Experimental configuration for the measurement of CCD camera sensitivity 116
A.4 Sensitivity calibration (exposure time 1/60 second) 117


Abstract


In this thesis we explore the application of optical Coherence Radar to the study of surface topography and transparent multilayer structures. In particular, we explore the potential of Coherence Radar to obtain tomographic images or sections of the human retina in vivo.

Coherence Radar is an interferometric method which relies on the use of low-coherence illumination to measure the absolute position of reflective layers.

The measurement of surface topography and, in particular, the study and analysis of hypervelocity impacts using Coherence Radar is investigated. We show that the system can deliver topographic measurements with a depth accuracy of about 2 µm and that it is ideally suited for measurements of rough surfaces containing large discontinuities and steep walls where sub-micron accuracy is not required.

We describe how Coherence Radar can be used to measure the position of reflecting interfaces in objects which contain many partially transmitting and reflecting layers, and demonstrate its application to the assessment of impact damage in a Hubble Space Telescope solar cell.

The measurement of the human retina is investigated. We successfully obtain longitudinal images of post-mortem fundus tissue and show that Coherence Radar can potentially offer an attractive alternative to beam scanning low-coherence systems.

Finally, we describe a modified Coherence Radar system implementing balanced detection by using two CCD line-scan cameras and a Mach-Zehnder type interferometer. We show that this technique can significantly reduce the required dynamic range of the analogue to digital converter in the presence of a large number of highly reflective layers.


Acknowledgements


First of all, I would like to thank my supervisor, Dr. Chris Solomon, who has given me invaluable guidance. I couldn't have wished for someone more supportive and understanding. Chris, thanks for all the help!

I would also like to thank all my friends in the Physics Department for their moral support and useful advice, in particular Dr. Adrian Podoleanu, George Dobre and Dr. Pippa Salmon, with whom I shared a laboratory and who have been extremely helpful and great fun.

Finally, I would like to thank my parents for their help and support.


Chapter 1

Three Dimensional Imaging Techniques

1.1 Introduction

This thesis investigates the use of CCD based low-coherence interferometry to obtain three dimensional images of opaque objects, multilayer structures and biological material. In particular, it aims to assess the feasibility of applying this technique to the study of the human retina in vivo.

In order to place this method in context, a brief review of other three dimensional imaging techniques is provided. In the following sections, we first discuss non-optical measurement techniques such as nuclear magnetic resonance imaging and computed tomography (which offer the ability to penetrate opaque substances) and stylus methods which allow profiling with atomic scale resolution. We also review optical methods which allow surface topography measurements and sectioning of translucent scattering material - particularly, a number of interferometric methods which have been applied to the high resolution measurement of surface topographies. Finally, we present a number of volume and surface imaging techniques which are based on low-coherence or 'white-light' interferometry and outline the advantages of using CCD based detection in conjunction with these methods.

1.2 Non-Optical 3-D Measurements

The measurement techniques presented here can be broadly categorised into those that require mechanical contact, such as the stylus scanning methods used for surface analysis, and those which are essentially non-contact, such as NMR, CT and ultrasound, which are mainly used in medicine.

1.2.1 Stylus Scanning

Scanning Probe Microscope

The scanning probe microscope (SPM) [1] relies on scanning a very fine probe tip along the surface of a sample. Electrical or magnetic interactions between the probe tip electrode and the sample allow the measurement of electrical conductivity, electronic structure, atomic structure and topography. By mounting the probe tip on a piezoelectric transducer, it can be scanned across the sample in three dimensions. A feedback mechanism adjusts the probe tip height so as to maintain a constant voltage between the tip and the sample. Thus, if the sample is displaced horizontally with respect to the stylus, the probe tip follows the profile of the surface. A measure of the resultant vertical tip displacement and the horizontal position of the sample can then be used to construct a profile of the sample.

SPM offers resolution at an atomic scale (∼ 0.1 nm) over an area of ≈ 1 µm². The main applications of the SPM include high resolution surface profiling, spectroscopy, electro-chemistry, nanofabrication and lithography. Since the development of the first SPM, the scanning tunnelling microscope (STM), a variety of other non-contact scanning probe microscopes, such as the atomic force microscope (AFM), have been developed [2]. The AFM is especially suited for the inspection of optical surfaces which are non-conducting [3], and although it does not have atomic resolution it offers a superior scanning range of up to 200 µm.

Stylus Contact Scanning

The most widely used instrument for measuring surface topography is the stylus profiler. In contrast to the scanning probe microscope, the stylus makes physical contact with the sample surface. The vertical position of the stylus is then a measure of the surface height at the point of contact between the stylus and the sample. The stylus is loaded with a small force to ensure contact with the surface as the sample or the stylus is moved horizontally at a constant speed. The resultant vertical displacement of the stylus is converted to an electrical signal using a linear variable differential transformer (LVDT) [2]. This height information, together with the horizontal position of the stylus with respect to the sample, is then stored and processed digitally to produce a surface profile. Lateral and height resolutions of 0.5 µm and 0.1 nm respectively, over a lateral range of 40 mm, have been demonstrated [2]. The lateral resolution is limited mainly by the shape and tip size of the stylus, but is also determined by features of the sample, since the resulting surface profile is given by the convolution of the sample surface and the stylus tip. The vertical, or height, resolution is limited by noise in the stylus position sensor (LVDT).

The stylus instrument can achieve very high depth resolution over a large surface area, and its transverse resolution is superior to that of most optical methods. However, mechanical contact between the stylus and the sample can cause scratches in soft surfaces and thus reduces the range of possible applications. In addition, the measurement procedure is slow when compared to optical techniques.

1.2.2 NMR and CT

Nuclear magnetic resonance (NMR) or magnetic resonance imaging (MRI) and x-ray computed tomography (CT) have been extensively used in medicine due to their ability to image sections of optically opaque media. Both conventional CT and MRI images offer an in-plane resolution of ≈ 1 mm (FWHM) [4, 5]. Unlike CT, NMR/MRI also conveys information about chemical composition and (blood) flow velocity. Studies of the eye have been performed using CT and NMR/MRI techniques [6]. However, the relatively low resolution of both methods has limited their application to the detection of foreign bodies in the eye and to the elucidation of pathological disease mechanisms [7]. In addition, there are risks associated with CT due to the use of x-rays. The strength of CT and NMR/MRI methods lies in their ability to penetrate optically opaque material over a large distance, allowing the visualisation of hidden structures.

1.2.3 Ultrasound

Ultrasound (US) imaging operates on the principle of determining the round-trip time of sound emitted by a transducer and reflected from a target. Short pulses of high frequency sound are emitted and detected via a transducer so that the echo delay resulting from the target distance can be determined. Imaging can be performed by scanning a directional transducer over the area of interest. Higher frequencies enable better spatial resolution but reduce the penetration depth. In practice, the usable frequency range of ultrasound is limited to 3-10 MHz [4].
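As a minimal numerical illustration of this pulse-echo principle, the Python sketch below converts a measured echo delay into a target depth. The 1540 m/s speed of sound (a typical soft-tissue value) and the example delay are assumptions for illustration, not values taken from the text.

```python
# Minimal sketch of pulse-echo ranging, assuming a soft-tissue
# speed of sound of 1540 m/s and an example echo delay.
SPEED_OF_SOUND = 1540.0  # m/s (assumed)

def target_depth(echo_delay_s: float) -> float:
    """Depth of a reflecting target from the round-trip echo delay."""
    # The pulse travels to the target and back, hence the factor of 2.
    return SPEED_OF_SOUND * echo_delay_s / 2.0

# Example: a 13 microsecond echo corresponds to ~1 cm depth.
print(f"{target_depth(13e-6) * 1000:.1f} mm")
```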

Blessing et al. [8] have demonstrated the ability of US to measure the surface roughness of machine tool surfaces with roughness of ≥ 0.1 µm rms. US is routine in medical diagnosis, especially for foetal examination during pregnancy. Studies of the eye have been performed using 3-D imaging US [9] and colour Doppler ultrasound [10]. The maximum resolution of conventional ocular ultrasound is limited to ≈ 150 µm in typical clinical instruments [7]. Recent developments in high frequency ultrasound, however, have allowed resolutions of ≈ 20 µm, at the cost of reduced penetration depth (4 mm) [7]. A severe drawback of US examination of the eye is the requirement to maintain physical contact between the patient's eye and the transducer via a saline liquid or gel. However, US offers a convenient method for volume imaging in optically opaque media, and requires only a small transducer in contact with the area of interest.

1.3 Optical Techniques

Using optical techniques for 3-D imaging offers a number of advantages, which may be summarised as follows:

• Non-invasive and non-contact measurement procedure

• Refractive and reflective optics are easily designed and widely available

• Low health risk, since visible light is non-ionising

1.3.1 Stereo Pair Imaging

Stereo pair analysis is a technique for obtaining 3-D information from 2-D image pairs. The implementation of the stereo pair method is, in principle, independent of the means by which the stereo images are obtained. Therefore, one may fundamentally consider this as a data processing technique rather than a measurement technique.

Three dimensional information can be recovered from two stereo images by a method of pair matching, based on triangulation. A topographic map of the object surface can be derived provided the stereo images show the same portion of the object surface from two distinct angles. The height of a feature, z, is derived by measuring the disparity, d, which is the difference in position of an object feature between the left and right image. They are related by [11]

Figure 1.1: Stereo pair imaging

d = BF/z    (1.1)

where B and F are the baseline (distance between the two cameras) and focal length respectively. Figure 1.1 shows the production of two stereo images and the resultant disparity between the location of features in images 1 and 2. To automate the disparity measurement, an algorithm is used to identify, or match, two corresponding features in the images. The probability of finding a correct set of matching features decreases with larger baselines, since the range of disparities is increased. However, given a correct match, the feature height, z, is determined more accurately using larger baselines. Thus, there is a tradeoff between resolution and reliability of the measurements. To overcome the problem of incorrect feature matching, a multiple baseline algorithm has been developed which reconstructs a surface from multiple stereo pairs [11].
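The following Python sketch simply inverts equation 1.1 to recover depth from a disparity map (z = BF/d). The baseline, focal length and disparity values are hypothetical, and the unit convention (focal length and disparity in pixels, baseline in metres) is an assumption of this example.

```python
import numpy as np

# Sketch of depth recovery from stereo disparity via equation 1.1:
# d = B*F/z, hence z = B*F/d. All numbers are invented examples.
def depth_from_disparity(disparity: np.ndarray, baseline: float,
                         focal_length: float) -> np.ndarray:
    """Invert equation 1.1 (baseline in metres, F and d in pixels)."""
    z = np.full_like(disparity, np.inf, dtype=float)
    valid = disparity > 0                 # zero disparity -> infinitely distant
    z[valid] = baseline * focal_length / disparity[valid]
    return z

disparity = np.array([[4.0, 2.0], [1.0, 0.0]])   # pixels (example)
print(depth_from_disparity(disparity, baseline=0.1, focal_length=500.0))
```

As the text notes, the same disparity measured over a larger baseline corresponds to a smaller relative error in z, at the cost of harder feature matching.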

Since stereo images may be obtained by a number of means, the size of the objects may vary from the very large, such as a city photographed by a passing airplane, to small molecular structures imaged using an electron microscope. However, the stereo pair technique suffers from inherent shadowing of features, as indicated in figure 1.1. This makes the method unsuitable for surfaces containing deep holes or steep walls.

1.3.2 Confocal Microscopy

Confocal microscopy is a powerful method widely used for non-destructive optical sectioning of thick translucent material or for imaging of opaque surfaces, and has found considerable use in biomedical and materials science applications.

Figure 1.2: Confocal system

The superb sectioning capability of confocal microscopes is achieved by the use of a point detector. As shown in figure 1.2, light originating outside the focal plane is largely rejected by the confocal aperture and does not contribute to the image. The axial position, z, of the section can then be adjusted by altering the plane of the confocal aperture (which is conjugate to the section plane).

Since a point detector must be used in order to maintain confocality, either the specimen or the optical arrangement has to be raster scanned to form 2-D images. In conventional microscopes, this is best implemented by the use of a translation stage which moves the object. Greater speeds can be obtained by using a (usually laser) beam-scanning arrangement or a rotating Nipkow disk, which is fitted with many pinholes and which is also used in Tandem Scanning Microscopes (TSM) [12].

The depth sectioning property of a confocal arrangement, such as that depicted in figure 1.2, is determined by the intensity response, I(u), of a point object. Paraxial theory [13] predicts the depth point spread function of a confocal arrangement (figure 1.2). For a point object¹, the intensity response, I(u), is given by:

I(u) = [sin(u/4) / (u/4)]^4    (1.2)

where u is the normalised axial distance, which is related to the real axial distance, z, and the angle α (as shown in figure 1.2) by:

u = (8π/λ) z sin^2(α/2)    (1.3)

The sectioning capability, or depth resolution, of a confocal system may be defined as the full width at half maximum (FWHM) of the intensity variation along the z axis, I(z), as given by equation 1.2, and is primarily determined by the angle α shown in figure 1.2.

¹For a plane object the intensity response is given by I(u) = [sin(u/2)/(u/2)]^2.

Figure 1.3: Formation of Moire fringes

This depth discrimination allows the study of opaque surface structures and translucent volume samples. Computer analysis can greatly enhance the interpretation and visualisation of confocal images. If consecutive sections of a sample are recorded digitally and stored on a computer, re-projections of the resulting volume data can be performed. Also, by locating the peak intensity along the optic axis, surface profiles with accuracies of < 1 µm [12] can be obtained.
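To make this depth response concrete, the short Python sketch below evaluates equations 1.2 and 1.3 numerically and extracts the FWHM depth resolution. The wavelength and aperture half-angle are assumed example values, not parameters from the text.

```python
import numpy as np

# Numerical sketch of the confocal depth response of equations 1.2-1.3:
# I(u) = [sin(u/4)/(u/4)]^4 with u = (8*pi/lambda) * z * sin^2(alpha/2).
wavelength = 0.633e-6          # m (assumed He-Ne wavelength)
alpha = np.deg2rad(30.0)       # assumed aperture half-angle

z = np.linspace(-5e-6, 5e-6, 20001)      # axial distance (m)
u = (8 * np.pi / wavelength) * z * np.sin(alpha / 2) ** 2
I = np.sinc(u / (4 * np.pi)) ** 4        # np.sinc(x) = sin(pi*x)/(pi*x)

# Depth resolution = full width at half maximum of I(z)
above_half = z[I >= 0.5]
print(f"FWHM = {(above_half.max() - above_half.min()) * 1e6:.2f} um")
```

For these assumed values the sketch gives a FWHM of roughly 3 µm; a larger aperture angle α narrows the response, as equation 1.3 implies.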

Applications in materials science include [14]: examination of surface fracture, lithographic processes, semiconductor materials, integrated circuits, dielectric films, fibre-reinforced composites, power cable insulation, minerals, soils and optical fibres.

Confocal imaging is especially well suited to the investigation of structures embedded in diffusive media, due to its capacity to reject scattered light [15]. These include in vivo biomedical applications. Jester et al. [16] have demonstrated live cellular imaging of structures in corneal, kidney, liver, adrenal, thyroid, epididymis and muscle tissue, and in connective tissue of rabbits and rats.

One area of special interest is the application of confocal optics in ophthalmic medicine. Traditional fundus cameras are unable to image sections of retinal tissue, and images usually suffer from poor contrast due to scattered light. Using the principle of confocal microscopy, confocal Scanning Laser Ophthalmoscopes (cSLOs) [17] have been successfully used to achieve superior optical sectioning of the human fundus in vivo, with improved contrast due to the rejection of scatter. However, the axial resolution is severely limited by the pupil diameter (equation 1.2) and ocular aberrations. Early cSLO prototypes achieved a depth resolution of ≈ 300 µm [17, 18], while commercial systems can now obtain resolutions as low as 30 µm [19] when observing the human fundus in vivo. Consequently, cSLOs have been able to facilitate early diagnosis of genetically determined disease and age related macular degeneration [20].


1.3.3 Fringe Projection Techniques

Moire Topography

Moire fringes are formed if periodic patterns, such as two gratings, overlap. This is illustrated in figure 1.3. This well known phenomenon has been used to provide contour fringes of 3-D objects. Depending on the size of the object, contour fringes may be produced by illuminating an object of interest via a periodic line grating and observing it from a different angle through the same grating. Larger objects can be investigated if grating lines are projected on the surface and the object is then viewed through a smaller grating in the focal plane of the image. With the advent of electronic image acquisition, new techniques have emerged which superimpose a virtual second grating electronically (by electronic filtering) once the image of the object has been captured.

Moire fringes form topographic lines on the object and thus allow evaluation of the surface height. Since this process was initially designed for subjective interpretation, the Moire method is poorly suited to automated analysis. However, Idesawa et al. [21] have demonstrated an automated process using a scanning Moire method. Because of the relative ease with which gratings may be projected on objects, especially of larger size, Moire topography is best suited for the measurement of large objects (such as a car) where sub-millimetre resolution is not required.

Fourier Transform Profilometry

Fourier transform profilometry (FTP) is a surface topography method based on fringe projection and overcomes many of the difficulties encountered with Moire fringe analysis. The optical arrangement of FTP is similar to that used in Moire topography projections, except that a second grating is not required to produce fringes. Instead, the 3-D object shape is extracted automatically from a digitised image of the object with projected fringes, using an algorithm operating in the spatial frequency domain [22, 23]. A resolution far superior to that of Moire topography is achieved, but at the cost of limited range.

1.4 Optical Interferometry

1.4.1 Two Wavelength

Two wavelength interferometry (TWI) provides a means of using light of two wavelengths to obtain an interferogram identical to one produced by a much longer wavelength. This allows the range over which conventional phase measuring interferometry is unambiguous to be extended without sacrificing the height measurement precision [24]. TWI has thus found applications in those areas where conventional interferometry provides an inadequate measurement range. Two wavelength interferometry has been implemented for point to point distance measurements as well as for surface topography.

The addition of two interferograms recorded with illumination wavelengths of λ1 and λ2 results in a pattern equivalent to an interferogram recorded using an equivalent wavelength, λeq, such that [25]

λeq = λ1λ2 / |λ1 − λ2|    (1.4)


A similar effect may also be obtained by illuminating an interferometer with light of two distinct wavelengths simultaneously, such that beating fringes with λeq are formed. Since the effect of TWI is to increase the wavelength of the illuminating light to an equivalent wavelength, interferograms can be used to determine the topography of smooth surfaces by applying conventional phase measurement and phase unwrapping techniques (see section 1.5.1 on page 14). Although TWI increases the measurement range of interferometers, its accuracy is ultimately limited by the wavelength stability of the two sources.
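A quick worked example of equation 1.4 in Python; the two source wavelengths are illustrative assumptions, not values from the text:

```python
# Equivalent wavelength of two-wavelength interferometry (equation 1.4).
lambda_1 = 633e-9   # m (assumed example source)
lambda_2 = 639e-9   # m (assumed example source)

lambda_eq = lambda_1 * lambda_2 / abs(lambda_1 - lambda_2)
print(f"equivalent wavelength = {lambda_eq * 1e6:.1f} um")
# 633 nm and 639 nm beat to an ~67 um equivalent wavelength, extending
# the unambiguous measurement range from lambda/2 to lambda_eq/2.
```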

1.4.2 Electronic Speckle Pattern Interferometry

Electronic speckle pattern interferometry (ESPI) is the modern equivalent of speckle pattern correlation interferometry (SPCI), since an electronic image sensor (such as a CCD) is used instead of film. The main application of ESPI lies in revealing dynamic displacements of optically rough surfaces in real time. However, it is also possible to obtain surface topographies of static objects using ESPI [26].

ESPI [27] uses a two-beam interferometer and coherent, monochromatic laser illumination. The sample surface is placed in the object arm of the interferometer and the resultant speckle pattern is imaged onto a CCD detector via a lens. When the surface of interest is optically rough, the interference phase varies randomly from point to point, making it impossible to measure the phase variations directly. However, if two images of the speckle field are acquired, one of which shows the object in a deformed state (such as would be caused by vibrations), a subtraction of the two images will reveal fringes due to the correlation between the two speckle fields. Areas of minimum intensity are observed where the deformation phase is an integer multiple of 2π, and areas of maximum intensity where it is an odd multiple of π. Since the observed fringe pattern is analogous to that observed in a conventional interferometer, a measure of the surface deformation can be obtained by using a variety of standard phase stepping and phase unwrapping techniques (see also section 1.5.1 on page 14).
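The subtraction step lends itself to a compact numerical sketch. The Python fragment below synthesises two speckle interferograms, before and after a deformation, and subtracts them to reveal correlation fringes; the speckle statistics, background and modulation levels, and the tilt-like deformation are all invented for illustration.

```python
import numpy as np

# Sketch of the ESPI subtraction step using synthetic speckle frames.
rng = np.random.default_rng(0)
ny, nx = 256, 256

speckle_phase = rng.uniform(0, 2 * np.pi, (ny, nx))  # random per-pixel phase
a, b = 100.0, 50.0                                   # background, modulation

# Deformation phase: a linear tilt of three fringes across the field
deform_phase = np.tile(np.linspace(0, 6 * np.pi, nx), (ny, 1))

before = a + b * np.cos(speckle_phase)
after = a + b * np.cos(speckle_phase + deform_phase)

# |after - before| ~ |sin(deform_phase/2)|: dark where the deformation
# phase is a multiple of 2*pi, bright where it is an odd multiple of pi.
fringes = np.abs(after - before)
print(fringes.mean(), fringes.max())
```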

1.4.3 Low-Coherence Interferometry

Low-coherence interferometry (LCI) offers absolute distance measurement at micrometer scale resolution over a virtually unlimited range. LCI differs from conventional interferometry in the type of source which is used to illuminate the system. Instead of a monochromatic laser, an incoherent source emitting a band of wavelengths is used. Since the electric field vibrations emitted by this type of source are not correlated in time, interference is usually not observed. Interference, however, can be produced if the light is split and recombined so that parts of the same wave, emitted at the same time, are superimposed. In a Michelson interferometer this condition is satisfied if the optical path difference (OPD) between the two interferometer arms is within the coherence length of the source.

Let us illustrate this principle by considering a Michelson interferometer such as that shown in figure 1.4. A graph of typical intensity variations caused by changes in the OPD, measured at the detector, is shown in figure 1.5.²

²This graph is representative of most low-coherence sources and was measured experimentally using a high power super-luminescent diode at λ = 830 nm. However, the width and shape of this function may vary depending on the particular source.

Figure 1.4: Michelson interferometer

The position of mirror 2 along the optic axis may be changed so that the OPD between the two interferometer arms is zero. The amplitude of the intensity fluctuations then reaches a maximum as shown in figure 1.5. At large OPDs the amplitude of the fringes decays to zero.

Low-coherence interferometry relies on this property to make absolute distance measurements. If mirror 1 in figure 1.4 is replaced with an object of unknown location (along the optic axis), the position of mirror 2 may be changed until interference of maximum amplitude is observed. Since the OPD is zero in this case, the location of the unknown surface can be deduced from the position of mirror 2 (z). Thus, low-coherence interferometry allows convenient point to point measurements of absolute position. The most fundamental difference between low-coherence and conventional interferometry lies in the need for path length scanning (i.e. displacement of the reference mirror or object of interest).
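A simple way to visualise this behaviour is to simulate the detected intensity as a fringe pattern under a coherence envelope, as in the Python sketch below. The Gaussian envelope and the 10-wavelength coherence length are assumptions for illustration; as the footnote above notes, the real width and shape depend on the source.

```python
import numpy as np

# Sketch of a low-coherence interferogram like figure 1.5: fringes whose
# amplitude decays once the OPD exceeds the coherence length.
opd = np.linspace(-40, 30, 7001)   # optical path difference, in units of lambda
coherence_length = 10.0            # 1/e half-width of the envelope (assumed)

envelope = np.exp(-((opd / coherence_length) ** 2))
intensity = 0.5 * (1.0 + envelope * np.cos(2 * np.pi * opd))

# Fringe amplitude peaks at zero OPD and decays to zero at large OPD.
print(f"peak-to-peak near OPD = 0: {intensity.max() - intensity.min():.2f}")
```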

Low-coherence methods have been extensively used to measure distances in a variety of applications such as point to point ranging [28], surface topography [29, 30] and tomographic imaging [31, 32]. This section concentrates specifically on those low-coherence techniques which employ a single photo-detector to measure the interference. Low-coherence techniques such as Coherence Radar [29, 33], which use a CCD sensor instead and which are the main subject of this thesis, are discussed in detail in section 1.5.2.

When low-coherence interferometry (LCI) is applied to the measurement of distances, it is sometimes referred to as low-coherence reflectometry (LCR), since it determines the path travelled by light after reflection from objects of unknown position. LCR is also widely associated with techniques that use a single photo-detector. These methods are typically implemented using a low-coherence Michelson or Mach-Zehnder interferometer, and measure the positions of a reflecting object surface or structure embedded in a volume of (scattering) material with a resolution of ≈ 1 µm.

Figure 1.5: Interference obtained with a low-coherence source

As detection is performed using a single photo-detector, transverse imaging is achieved by the use of beam or object scanning. This allows the additional implementation of confocal optics to further enhance the sectioning capability, as well as the use of balanced detection [34, 35] to allow the detection of very weak signals. Some LCR systems are implemented in fibre, in which case the fibre aperture can conveniently act as a confocal aperture.

If the reference mirror or the object of interest is scanned along the optic axis at a constant speed, a measure of the strength of interference can be obtained by demodulating the detected photocurrent at the Doppler frequency [28]. If the optical path remains static, a small phase modulation at frequency fc can be introduced in the reference arm to allow heterodyne detection of the interference signal at multiples of fc [36]. Alternatively, the signal can be low-pass filtered at a bandwidth equal to the spread of frequencies produced by scanning the beam across the sample object [37].

A schematic diagram of a Michelson fibre based interferometer is shown in figure 1.6. Two possible methods for transverse image formation can be seen. Figure 1.6a depicts a beam scanning arrangement, in which two orthogonal mirrors deflect the focused beam in the transverse (x-y) direction. If a raster scan is performed, the interference signal can be displayed as a 2-D image of the section plane. This section is formed in the plane from which the returned light has to travel a similar optical path to that in the reference arm. Adjusting either the reference mirror position or the object in the axial direction (z) changes the position of the (coherent) plane of interest. Figure 1.6b illustrates an alternative method for transversal (x-y) scanning, which can be achieved by mounting the object on an x-y translation stage. Swanson et al. [7] distinguish between Optical Coherence Tomography (OCT), a method which employs transverse scanning along two axes (x, y or z) in order to achieve 2-D tomographic images, and Optical Coherence Domain Reflectometry (OCDR), where scanning is performed only along one axis.

Figure 1.6: Transverse scanning in a fiberised low-coherence reflectometer

OCT has gained much popularity in the field of ophthalmics, where it complements or even replaces the confocal scanning laser ophthalmoscope (cSLO) (see also section 1.3.2 on page 4) for in vivo studies of the human fundus/retina [7, 38-40]. For these investigations OCT provides higher resolution than is available with any other existing technique [7].

OCT has been successfully applied to investigations of the cornea [7], corneal thickness [41], eye length [35], in vivo frog tadpole anatomy [42] and, for inert materials, to investigations of ceramic defects [43] and moulded composite fibre structure [31]. The scanning speed is of particular importance for in vivo applications, as involuntary movements of the subject, especially when observing the eye, can lead to substantial errors due to unwanted displacements. However, a tandem interferometer configuration has been successfully used to compensate for axial movements during in vivo eye length [44] and corneal thickness measurements [41].

1.4.4 Channelled Spectrum

The Channelled Spectrum technique can be used to obtain profiles of surfaces and to image multi-layer structures. It can, in principle, be classed as non-scanning low-coherence interferometry, since it facilitates the measurement of the optical path difference (OPD) without the need for axial scanning, and it also requires the use of a low-coherence source.

When monitoring the spectral properties of light returned from a Michelson or Mach-Zehnder type interferometer with the aid of a dispersive element, such as a diffraction grating, a series of peaks can be observed in the spectrum of the source. The spatial frequency and phase of these peaks are related to the OPD [45] in the interferometer. By sensing the line-shape with a one dimensional CCD detector, an automated analysis, such as a spatial Fourier transform, can be performed and the OPD can be inferred.
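A minimal Python sketch of this analysis is given below: a spectrum modulated by cos(k·OPD), sampled uniformly in wavenumber k, is Fourier transformed so that the modulation appears as a peak at the OPD. The source shape, spectral range and the test OPD are assumed example values, not parameters from the text.

```python
import numpy as np

# Sketch of channelled-spectrum analysis: the returned spectrum is modulated
# by cos(k * OPD), so an FFT over wavenumber k shows a peak at the OPD.
opd = 100e-6                                                   # m (example)
k = np.linspace(2 * np.pi / 900e-9, 2 * np.pi / 760e-9, 2048)  # wavenumber (rad/m)

source = np.exp(-(((k - k.mean()) / (0.2 * (k.max() - k.min()))) ** 2))
spectrum = source * (1 + np.cos(k * opd))                      # channelled spectrum

# With the sample spacing expressed as dk/(2*pi), the FFT axis is path length (m).
path = np.fft.rfftfreq(k.size, d=(k[1] - k[0]) / (2 * np.pi))
mag = np.abs(np.fft.rfft(spectrum - spectrum.mean()))
mag[:4] = 0.0                      # suppress the residual low-frequency source term
print(f"recovered OPD ~ {path[mag.argmax()] * 1e6:.0f} um")
```

The coarse recovery (a few µm here) reflects that the path-length resolution is set by the spectral range covered, consistent with the range limits noted below.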

The accuracy and resolution are limited by the number of grating lines and the resolution of the CCD sensor. Channelled spectrum methods do not require OPD scanning like conventional low-coherence interferometers. However, they suffer from a restricted depth range and cannot achieve more than one-dimensional image resolution in the transverse direction.

Methods based on this technique have been successfully applied to the measurement of surface profiles (with an axial resolution of 0.3 µm over a range of 70 µm) [46, 47] and structures in multi-layer samples (thickness resolution of > 2 nm and a maximum range of 100 µm) [48].

1.5 Interference Detection using a CCD Detector

A number of methods discussed in this section originate from standard interference microscopy as developed by Linnik [49] and Mirau. Most commercial microscopes can be modified by the addition of a Mirau microscope objective to yield an improved depth discrimination. The sample can then be seen with multi-coloured white-light fringes across it. A subjective, approximate evaluation of the sample shape can be obtained by observing the straightness and frequency of the fringes. Since the advent of low-cost digital imaging and processing equipment, it has been possible to automate the analysis of these visible fringe patterns in order to gain an objective measure of the sample topography [30, 50].

In these automated profilers, imaging is achieved primarily by the use of a Charge-Coupled Device (CCD) sensor. A CCD essentially replaces the eye and allows the objective measurement of intensity and lateral distance across the plane of the sample. The sensor surface is divided into a square grid of picture elements, or pixels, each of which delivers a charge proportional to the number of photons striking it during the exposure time of the sensor. The photon energy is converted to an accumulated static charge by the silicon layer of the CCD. At the end of each exposure the charges can be shifted along the rows and columns of the pixels to produce an analogue output signal (see also appendix A on page 112). Using a suitable analogue to digital converter, the image can be stored and analysed on a computer.

A distinction has to be made between methods employing low-coherence interferometry and those that, although they employ low-coherence sources, analyse the fringes based on the principle of conventional direct phase measurement. Although the accuracy of conventional interferometry is high, its range may be limited to λ/2 when observing discontinuous surfaces. Low-coherence interferometry, on the other hand, offers a means of absolute distance measurement of both rough and optically smooth surfaces over an almost unlimited range (see also section 1.4.3).

A distinction between conventional and low-coherence interferometry can thus be made according to the way in which the interference signal is processed. Direct phase measurement techniques require virtually no axial scanning of the object or reference mirror. The surface topography can be calculated from the phase distribution across the surface. This phase unwrapping process takes into account the phase variations from one pixel to another. Low-coherence interferometry, on the other hand, allows the absolute position of a surface to be measured at each pixel individually. In order to achieve this, however, the object has to be translated through the entire depth range of interest.

1.5.1 Automated Phase Measurement Microscopy

Microscopes traditionally illuminate their samples with an extended white-light (i.e. low spatial and temporal coherence) source, such as a filament or discharge lamp. By introducing colour filters in the illumination path, the coherence length of this light can be increased sufficiently to allow conventional phase measurements over a large depth range. Because white-light illumination reduces unwanted interference between reflections from optical surfaces lying outside the range of interest, this type of illumination has largely been maintained in automated phase measurement microscopes.

Three fundamental interferometric configurations have been used in interference microscopy. Figure 1.7 shows the Michelson, Mirau and Linnik configurations [51]. The beam-splitter placement is the primary limiting factor in determining the minimum sample-to-microscope-objective distance. This distance in turn determines the maximum magnification of the objective. Values of objective magnification for all three interferometric configurations are shown in figure 1.7.

The objective-to-sample distance limitation is avoided by the Linnik arrangement, since the beam-splitter is located before the objective. The drawbacks of this arrangement are the increased back reflection from lens interfaces in the objective and the need to use two identical objective lenses to obtain perfect path matching. Also, because the common optical path is shorter than in other configurations, the measurements are more prone to noise induced by mechanical vibrations. Although Mirau objectives do not suffer these drawbacks and offer a higher magnification than is possible with Michelson objectives, they can introduce severe aberrations in wide aperture systems [52].

In order to produce a surface topography or profile measurement, the interference pattern recorded by the CCD camera must be interpreted. This is a two stage process consisting of phase-stepping and phase-unwrapping. Phase-stepping computes the phase of the interference based on 3 or more images of phase shifted fringe patterns (interferograms). Phase-unwrapping then determines the true phase from the modulo 2π phase image produced in the previous step.

Many phase stepping techniques exist; for this type of interferometer the most common method is temporal phase shift interferometry [53, 54]. A phase shift can be induced by moving either the reference surface or the object over a small distance (≈ λ/2). By capturing a sequence of phase shifted interferograms, the original phase distribution across the sample object can be computed. A number of algorithms to calculate this phase distribution have been developed. They are usually named according to the number of phase shifted interferograms they require, such as the three-step, four-step, five-step and multi-step algorithms [26, 54]. To illustrate the general principle of these algorithms, the three-step method is described here. The intensity distribution I(x, y), formed by the interference of two coherent light beams, can be described as

I(x, y) = a(x, y) + b(x, y) cos[ϕ(x, y)] (1.5)

where a(x, y) is the background illumination, b(x, y) is the fringe modulation and ϕ(x, y) is the modulo 2π phase corresponding to the height of the sample surface.

In the three-step technique the phase ϕ(x, y) is calculated based on three intensity distributions, I1, I2 and I3, captured at phase shifts of 0, 2π/3 and 4π/3 respectively, such that

I1(x, y) = a(x, y) + b(x, y) cos[ϕ(x, y)]    (1.6)

I2(x, y) = a(x, y) + b(x, y) cos[ϕ(x, y) + 2π/3]    (1.7)

I3(x, y) = a(x, y) + b(x, y) cos[ϕ(x, y) + 4π/3]    (1.8)

The modulo 2π phase distribution, ϕ(x, y), can then be computed using [26, 55]

\varphi(x, y) = \tan^{-1}\left[\frac{\sqrt{3}\,(I_2 - I_3)}{2I_1 - I_2 - I_3}\right] \qquad (1.9)

The phase stepping algorithm described by equations 1.5-1.9 results in a modulo 2π phase map. Before the surface height of the sample can be calculated, any 2π discontinuities must be removed. This is the process of phase unwrapping. Provided the sample surface slope is such that the largest phase change between adjacent pixels is smaller than π, the phase discontinuities can be removed by adding or subtracting multiples of 2π until the phase difference between adjacent pixels is less than π. A number of more sophisticated phase unwrapping methods exist [56], including those that can unwrap the phase of discontinuous surfaces [57]. Once the phase has been unwrapped, the topography of the sample surface, h(x, y), is given by:

h(x, y) = \frac{\lambda}{4\pi}\,\varphi(x, y) \qquad (1.10)

[Figure 1.7: Configuration of interference microscopes - Mirau (microscope objectives 10X, 20X, 40X), Michelson (1.5X, 2.5X, 5X) and Linnik (100X, 200X), each comprising a reference surface, beam-splitter and test surface]
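To make these two processing stages concrete, the sketch below (Python with numpy; all names are illustrative) computes the wrapped phase of equation 1.9 with arctan2 and converts it to a height map via equation 1.10. The row-and-column use of np.unwrap is an assumption standing in for the more sophisticated unwrapping methods of [56, 57], and is valid only while the phase change between adjacent pixels stays below π.

```python
import numpy as np

def three_step_phase(I1, I2, I3):
    """Modulo 2*pi phase from three interferograms captured at phase
    shifts of 0, 2*pi/3 and 4*pi/3 (equation 1.9); arctan2 resolves
    the quadrant of the arctangent."""
    return np.arctan2(np.sqrt(3.0) * (I2 - I3), 2.0 * I1 - I2 - I3)

def height_map(I1, I2, I3, wavelength=830e-9):
    """Surface height h(x, y) = (lambda / (4*pi)) * phi(x, y) (equation 1.10)."""
    phi = three_step_phase(I1, I2, I3)
    phi = np.unwrap(phi, axis=1)   # remove 2*pi jumps along rows...
    phi = np.unwrap(phi, axis=0)   # ...then along columns
    return wavelength / (4.0 * np.pi) * phi
```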

1.5.2 CCD Based Low-Coherence Interferometry

Conventional automated phase measurement interferometers suffer severe drawbacks if the surface of interest is discontinuous and contains steps larger than λ/2, because phase-unwrapping is then very difficult or impossible. Low-coherence or 'white-light' interferometry has recently emerged as an attractive alternative, and can be implemented conveniently in a Mirau, Linnik or Michelson interferometer utilising a CCD sensor.

Subjective evaluation of white-light zero-order fringes has long been applied to the inspection of discontinuous step surfaces and thin films. In 1982 Balasubramanian [58] patented the first test surface measurement system to automate the detection of zero-order fringes. This was achieved by computer based CCD image analysis of the interference in a Twyman-Green interferometer.

The principal difference between this type of automated profiling and the conventional phase detection systems discussed in section 1.5.1 is to be found in the process of interferogram analysis. Instead of the two step process of phase stepping and phase unwrapping, low-coherence profilometry simply requires the location of an interference maximum (see also section 1.4.3).

In a low-coherence Michelson, for example, interference will be at a maximum only if the path lengths of the two interferometer arms are matched such that the optical path difference (OPD) between them is zero. If one of the mirrors in this interferometer is replaced with a surface of interest, there will be a distribution of OPDs between different parts of the surface and the plane reference mirror. By superimposing the light reflected from the mirror and the sample surface on a CCD sensor, the resultant interference for each part of the surface can be measured. If the measurement is repeated while the sample surface is moved along the optic axis, an interference maximum will be observed at every pixel at some point during the displacement process. The maximum finding process then determines at which object displacement this interference peak occurs, so that a topographic map of the surface can be constructed.

Compared to conventional phase interferometry, the low-coherence process requires an accurate long range translation stage as well as an increased image storage and processing capability, due to the added axial scanning which is performed. Although this slows the acquisition process, the advantages which low-coherence interferometry offers, such as long range and absolute distance measurement, outweigh this disadvantage for certain applications.

By implementing automated low-coherence interferometry in a Linnik interference microscope system, Davidson et al. [50, 59] have demonstrated an increased axial sectioning capability and lateral resolution as compared to traditional microscopes and confocal scanning laser microscopes.

In 1990 Kino et al. [52] presented a method based on a Mirau interferometer which recovers the interference visibility by filtering the signal in the frequency domain, using a fast Fourier transform (FFT) technique. Their method is able to recover the phase as well as the visibility of the interference fringes, but does not implement a surface finding algorithm. Subsequently they presented a similar method using a Hilbert transform [60] to significantly increase processing speed.


In 1992 Dresel et al. [29, 33] introduced a method based on a Michelson interferometer which measures rough surfaces over a large transverse range (≈ 1 cm) and in deep holes. The Coherence Radar arrangement, on which much of the work in this thesis is based, is shown in figure 2.1 on page 21. This system is illuminated with a spatially coherent source (spatially filtered light), which simplifies alignment and increases the visibility of the interference fringes. A collimated beam allows illumination without shadowing and thus makes this method ideal for applications involving the inspection of deep holes. The amplitude of the interference fringes is recovered by the use of a phase stepping algorithm, which records three interferograms with a phase shift of 2π/3 between them. The surface position is then determined by a simple maximum finding algorithm. Due to speckle effects, the surface height resolution is limited to the rms roughness of the surface when examining optically rough samples.

Recent developments have centred mainly on faster and more accurate data processing schemes. By using Fourier transforms and Sub-Nyquist sampling of the interference data in the axial direction, de Groot et al. [30, 61, 62] demonstrated increased accuracy (≈ 0.5 nm) and a reduction of acquisition time for Mirau based profile measurements. The Mirau microscope is now commercially available from a number of companies3.

1.6 Summary

The non-optical, non-contact methods such as nuclear magnetic resonance (NMR), computed tomography (CT) and ultrasound (US) are invaluable tools for medical in vivo diagnosis since they penetrate (opaque) tissue over large distances and offer a resolution sufficient to image most structures of interest. Stylus profiling systems, and in particular the scanning probe microscope (SPM), are capable of measuring three dimensional surface profiles with extremely high resolution and are therefore best suited for industrial type applications which require the inspection of very small surface features down to the atomic scale. Optical methods allow non-contact measurements, and interferometric techniques in particular offer very high resolution. They are well suited for profiling large or fragile surface structures at high speed. Low-coherence interferometry such as optical coherence tomography (OCT) has been successfully used to investigate biological material. Due to its aperture independent depth resolution, OCT has found considerable application in the study of the human eye (cornea and retina) in vivo.

Figure 1.8 classifies the techniques discussed in this chapter by their ability to measure the volume structure of translucent objects or the topography of surfaces. Also, in order to gain an overview of their performance, they are arranged according to the resolution they deliver. The distinction between volume imaging and surface topography is primarily introduced here since it reflects the fundamental limitation of some techniques in locating more than one surface along the 'line-of-sight'. Surface topography measurements can be presented as a function z(x, y) such that there is only one unique surface height value, z, for each transverse coordinate (x, y). Volume imaging methods do not have this limitation and are, in principle, able to resolve the height or depth of several features (z1, z2, ..., zn) at each transverse position (x, y). As indicated in figure 1.8, three dimensional volume imaging can be achieved by non-optical methods such as US, NMR and CT, by optical methods such as OCT and Channelled Spectrum, and by confocal methods.

3Wyko Corporation, Tucson, Arizona; Zygo Corporation, Laurel Brook Road, Middlefield, Connecticut 06455; Phase-Shift Technology, 3480 E. Britannian, Suite 110, Tucson, Arizona 85706

[Figure 1.8: Overview of three dimensional measurement techniques - arranged by resolution (cm-mm, mm-nm, nm-atomic) and classified into volume imaging (US, CT, NMR, OCT, Channelled Spectrum, Confocal Microscopy) and surface topography (Stereo Pair Imaging, Fringe Projection, ESPI, Two-Wavelength, CCD based Low-Coherence Interferometry, Automated Interference Microscopy, Stylus Scanning)]

Although CCD based low-coherence interferometry should, in principle, fall into the category of volume imaging, in practice it has not been successfully applied in this field. Yet it offers a number of advantages over other such methods. Like OCT, CCD based low-coherence interferometry is able to measure absolute distances with an aperture independent depth resolution at very high precision. In addition, the method offers superior performance in terms of speed due to the parallel nature of the imaging process, and allows a simpler and more robust construction of the apparatus due to the lack of mechanical scanning elements. This thesis attempts in large part to explore the possibility of applying CCD based low-coherence systems to the investigation of volume structures in order to exploit these advantages.


Chapter 2

Surface Topography using Coherence Radar

2.1 Introduction

Surface topography or profile measurements can yield useful information about the quality of fabrication processes [3] and are extensively used for the inspection of optical, automotive and electronic components. These include hard disk substrates, magnetic heads, precision machined and polished surfaces such as gears, bearings, cylinder walls, fuel injector seals, flat and spherical optical components, and etched surface textures on semiconductor wafers [63]. Profiling has also found considerable application in materials sciences research for the study of fractured surfaces, integrated circuits, dielectric films, fibre-reinforced composites, power cable insulation, minerals, soils and optical fibres [14]. Other applications include the study of machine tool wear [8], verification of surface scattering theories [3] and quality monitoring of aspheric lens and mirror surfaces [25].

High resolution measurements of rough surfaces are best obtained using low-coherence methods, since these do not suffer the phase ambiguity of conventional phase-measurement interferometry (see also section 1.5.1). In this chapter we present the results of surface measurement and analysis performed using Coherence Radar [29, 33], a CCD based low-coherence technique which allows the measurement of rough surface topographies with accuracies of 1-2 µm over a virtually unlimited depth range.

The principles of Coherence Radar are introduced in section 2.2 and a detailed description of its experimental implementation is given in section 2.3. In section 2.5 we demonstrate the capability of the system by measuring a 5 pence coin. In sections 2.6 and 2.7 we present and evaluate a new thresholding technique which prevents the formation of rogue data points. In section 2.8, the study of hypervelocity impact craters is investigated and results of this interesting and new application are presented. Sections 2.9-2.11 conclude the chapter with an analysis of the various noise sources in the system and their effect on the measurement accuracy.

2.2 Principles of Coherence Radar

In this section, the principles of Coherence Radar are presented. We introduce the fundamentals of low-coherence interferometry and discuss the data processing techniques involved in constructing a three dimensional surface topography.


[Figure 2.1: Coherence Radar experimental arrangement - low-coherence fibre source (λ = 830 nm) with collimating lens (focal length fc); beam-splitter (BS); PZT-mounted reference mirror with ND-filter on a translation stage; sample on a translation stage at the object and coherence plane; telecentric telescope formed by lens 1 (f1), aperture stop and lens 2 (f2); CCD camera at the image plane, read out by a frame grabber and computer]

The Coherence Radar method [29, 33] is based on a low-coherence Michelson interferometer and uses a CCD camera for the detection of interference patterns. A diagram of such an arrangement is given in figure 2.1.

A surface of interest is placed in one of the arms of the interferometer such that light from the reference arm is superimposed with light reflected from the sample surface. A telescope images the surface and the superimposed reference wave onto the CCD sensor. Due to the low coherence of the source, the superimposed wavefronts interfere only if the path lengths of the two arms are matched to within the coherence length of the source. If the sample surface is displaced along the optic axis, the interference will change depending on which part of the surface satisfies this condition.

The ability for absolute distance measurement arises from the use of a low-coherence or broadband light source in the interferometer. The amplitude of the detected interference is at a maximum if the optical path difference (OPD) is equal to zero. The basis of topography measurements is the detection of this condition, which requires a measure of the interference amplitude. Coherence Radar uses a method called phase stepping to determine this amplitude.

2.2.1 Phase Stepping

In order to detect the amplitude of the interference at a given object displacement, three CCD images are recorded while the reference mirror is displaced. In the presence of interference there is an associated sinusoidal change in intensity with respect to the reference mirror position. An analysis of the resulting CCD images using the phase stepping algorithm then gives a measure of the interference amplitude at each pixel.

Let us consider the formation of interference in a Michelson with partially coherent illumination at central wavelength λ and a Gaussian power spectrum of width ∆λ (FWHM). If the object and reference beam intensities are Io and Ir respectively, it can be shown that the output intensity, I, is given by:

I(d) = I_o + I_r + 2\sqrt{I_o I_r}\,\gamma(d)\cos\left[\frac{4\pi d}{\lambda} + \phi\right] \qquad (2.1)

where d is the position of the object of interest along the optic axis and γ(d) is the coherence function1.

This may also be expressed as [29]:

I(d) = \bar{I} + A(d)\cos\left[\frac{4\pi d}{\lambda} + \phi\right] \qquad (2.2)

where A(d) is the amplitude of the interference term as a function of the object displacement, d. This amplitude is detected using the phase stepping algorithm [29], which requires three measurements of the intensity, I(d), such that a relative phase shift of 2π/3 exists between each of them (valid for the mean wavelength λ). The shifts are introduced by moving the reference mirror in steps of λ/6 along the optic axis. Since a shift in the reference mirror position is equivalent to a shift in the object position, d, the measurements can be described by:

I_i = I\left(d + i\,\frac{\lambda}{6}\right), \quad i = 1, 2, 3. \qquad (2.3)

The interference amplitude can then be computed using:

A(d) = \left[\frac{\sum_{i=1}^{3}\left(I_i - \bar{I}\right)^2}{3/2}\right]^{1/2} \qquad (2.4)

where

\bar{I} = \frac{1}{3}\sum_{i=1}^{3} I_i \qquad (2.5)

1If Io = Ir, the coherence function γ(d) is equal to the fringe visibility, V, defined as V = (Imax − Imin)/(Imax + Imin), where Imax and Imin are the maximum and minimum observed intensities respectively.
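Equations 2.3-2.5 translate directly into a few lines of array code. The following is a minimal numpy sketch (the function name is illustrative), assuming the three frames were captured with the reference mirror stepped by λ/6 between exposures:

```python
import numpy as np

def interference_amplitude(I1, I2, I3):
    """Interference amplitude A(d) recovered from three phase stepped
    CCD frames via equations 2.4 and 2.5."""
    I_bar = (I1 + I2 + I3) / 3.0                        # equation 2.5
    return np.sqrt(((I1 - I_bar)**2 + (I2 - I_bar)**2
                    + (I3 - I_bar)**2) / 1.5)           # equation 2.4
```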


Figure 2.2: The Coherence Radar experimental system

2.2.2 Surface Finding

Since Coherence Radar measures the interference via a two dimensional detector array (CCD), the interference amplitude, A(d), becomes a function not only of the object position, d, but also of the pixel co-ordinates, x, y. The relative height of a surface element conjugate to pixel x, y is then given by the object position, ds, at which the maximum interference amplitude was measured. This is determined by a simple peak search algorithm and yields a surface topography.

2.3 Experimental System

Figure 2.1 shows a schematic diagram of the experimental arrangement used for our implementation of Coherence Radar. A photograph of the equipment employed can be seen in figure 2.2. The Coherence Radar arrangement may conceptually be divided into three functional units: the interferometer, the imaging optics and the translation devices.

As indicated in figure 2.2, the interferometer is composed of the low-coherence fibre source (1) and its collimation lens (2), the beam-splitter plate (3), the PZT mounted reference mirror (4) and the object of interest (5). A neutral density filter (10) is also included to attenuate the reference beam intensity.

The imaging optics consists of the telecentric telescope and the CCD camera (7). The telecentric telescope, in turn, is composed of two lenses (6a, 6c) and an aperture stop (6b). Translation devices are used to displace the object during the measurement process (8) and to allow an adjustment of the reference mirror (9).

[Figure 2.3: Coherence function of the super-luminescent diode at a bias current of 139 mA - normalised intensity versus OPD (microns)]

2.3.1 Michelson Interferometer

The interferometric setup is based on a Michelson interferometer. Illumination of the object is provided by a super-luminescent diode (SLD) which delivers light via a single mode optical fibre in the near-infrared (N-IR) range. The source (1) is a high power low-coherence point source, which is preferable to discharge or filament lamps because of its high spatial coherence and high power combined with low temporal coherence (23 µm at FWHM). At a maximum driving current of 140 mA the source delivers up to 1 mW of power at a mean wavelength of 830 nm. Figure 2.3 shows the temporal coherence function of the source, an SLD-361 (SUPERLUM Ltd.). Since the light is emitted from the single mode fibre end in only one fundamental mode, it acts as a spatially coherent point source and can be collimated by a single lens into a Gaussian beam.

Collimated light from the SLD is then incident via the beam-splitter on both the object and reference mirror. This method of collimation was found to be ideal because it assures complete object illumination, even inside deep holes, without shadowing.

A non-polarising plate beam-splitter (3) with a 50/50 transmission/reflection ratio (at λ = 830 nm) is used to divide the wavefront. Although dispersion is not compensated in this type of beam-splitter, the lack of air-glass interfaces normal to the transmitted beam eliminates the strong ghost images sometimes observed in dispersion compensated cube beam-splitters. An anti-reflection coating centred at a wavelength of 830 nm on the non-reflecting side of the plate helps reduce unwanted reflections.

The reference mirror (4) is made from a front silvered glass plate to prevent double reflections. Its flatness should be of the order of the maximum depth resolution attainable. To simplify alignment, the mirror is mounted on a two-axis tip-tilt mount with micrometer screw adjustment and on a translation stage (9).

Detection of the interference signal requires a small modulation of the reference path (≈ λ/2), which is accomplished by the expansion of a PZT material on which the reference mirror is mounted. A computer controlled high voltage amplifier is connected to the PZT material to control its expansion. In principle this allows continuous movements over a range of 1 µm with a resolution of 5 nm. However, a non-reproducible hysteresis behaviour of the material was observed, which prevents the accurate calibration of the voltage/expansion coefficient and thus limits the positional accuracy to 40 nm.

Light in the reference arm is attenuated by a neutral density filter (10) to equal the intensity of the object beam. However, since the image intensity of the object is not normally uniform, equal object and reference intensities cannot be achieved at every point in the image.

2.3.2 Imaging Optics

The necessary integration of the imaging optics with the Michelson interferometer imposes a number of constraints on the choice of lens arrangement. Firstly, a beam-splitter of suitable size needs to be accommodated between the imaging optics and the object of interest. This imposes a practical limit on the object to objective distance, and thus prevents the use of high resolution/magnification optics. Secondly, the beam reflected from the reference mirror must be superimposed with the object image. The optics therefore needs to produce an image of the object without altering the divergence of the reference beam. When using a parallel collimated illuminating beam (as indicated in figure 2.1), a telescope is most suited, since this preserves the reference beam divergence for any objective to reference mirror distance. It also has the added benefit of depth independent magnification, which reduces the amount of shadowing when attempting to image the bottom surface of deep holes.

The telescope consists of two lenses (6a, 6c) with focal lengths f = f1 and f = f2, which are separated by f1 + f2. The optical magnification of the telescope is given by M = f2/f1. An adjustable aperture stop (6b) is placed in the common focal plane of both lenses; its diameter controls the angle of accepted light rays (numerical aperture), as shown in figure 2.4. If the stop is aligned with the optic axis, only the reflections from a surface at 90° to the optic axis will be allowed to pass through its centre and reach the detector (CCD camera). It is therefore necessary to align the reference mirror normal to the optic axis, so as not to block the beam.

When placing a rough object in the interferometer, the stop diameter will determine the maximum surface slope which can be imaged. The diameter also affects the resolution of the optical system. The diffraction limited resolution, R, is given by [64]:

R = \frac{0.61\lambda}{\sin(\theta/2)} \qquad (2.6)

where sin(θ/2) is the numerical aperture of the system and λ is the central wavelength of the illuminating light source. It is therefore generally beneficial to operate the interferometer with a large stop aperture.

[Figure 2.4: Telecentric telescope - lens 1 and lens 2 (focal lengths f1 and f2) with an aperture of diameter D in their common focal plane, imaging the object of interest onto the image plane; the angle θ of accepted rays is set by the aperture]

Since the object plane of the telecentric telescope is fixed, the reference mirror position along the optic axis is adjusted so that the coherence plane (at which OPD = 0) coincides with the object plane. A CCD video camera in the image plane converts the incident light pattern into a video signal, which is digitised by a frame grabber for storage and analysis by computer (see also appendix A).

2.3.3 Translation Devices

To allow the surface of interest to be measured, interference must be present between the reference wave and the light reflected from the object. Since the interference is localised to reflections originating from areas of equal height (along the optic axis), the object of interest has to be translated along the optic axis during the data acquisition to cover the range of interest. This is accomplished by mounting the object on a computer controlled translation stage (8). This device has a 1 µm resolution and a 20 cm range. The position feedback signal from this stage is used to define the surface positions during the Coherence Radar measurement.

A further translation stage (9) is used for initial adjustments of the reference mirror position. In this way, the coherence plane (see figure 2.1) can be made to coincide with the focal plane of the objective lens (lens 1 in figure 2.4). Periodic adjustments may be necessary due to the varying optical path introduced by neutral density filters of varying thickness.

2.4 Data Processing

The flowchart in figure 2.5 outlines the operations performed by the software during a Coherence Radar measurement. This includes both the control of hardware (object translation stage and PZT actuator) as well as the computation required for the phase stepping and surface finding algorithms discussed in sections 2.2.1 and 2.2.2.

The phase stepping process computes the interference amplitude based on three phase shifted images. This method yields an interference amplitude matrix A(d, x, y), where d is the object translation stage position and x, y are the pixel coordinates of the CCD sensor. The amplitude at every pixel is compared with that measured at the previous object position, d, in order to find the occurrence of a maximum. This surface finding process retains three storage arrays:

1. A(dj+1, x, y) - the interference amplitude at the current object position

2. Am(x, y) - the maximum amplitude encountered up to the last position dj

3. ds(x, y) - the object positions corresponding to the maximum amplitude array Am(x, y)

Once the object is translated along the z-axis from dj to a new position dj+1, the surface finding algorithm performs the following operations:

• Values of the maximum amplitude up to position dj are compared with the amplitudes measured at the current position dj+1.

• If A(dj+1, x, y) > Am(x, y), ds(x, y) is set equal to dj+1 and Am(x, y) is set equal to A(dj+1, x, y).

This process is repeated until the object has been translated through the range of interest. The resultant depth matrix, ds(x, y), then contains a surface height measure for each surface element x, y. The array Am(x, y) may be stored to aid in the removal of unresolved surface points (see section 2.6).
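The complete acquisition loop of figure 2.5 might be sketched as follows (a minimal Python sketch; acquire_frames is a hypothetical callback standing in for the camera and PZT hardware control, and the amplitude computation reuses equations 2.4 and 2.5):

```python
import numpy as np

def coherence_radar_scan(acquire_frames, positions):
    """Phase stepping and surface finding over an axial scan.

    acquire_frames(d) must move the object stage to position d and
    return three CCD frames taken with the reference mirror stepped
    by lambda/6 between exposures."""
    A_m = None    # Am(x, y): largest amplitude encountered so far
    d_s = None    # ds(x, y): object position of that maximum
    for d in positions:
        I1, I2, I3 = acquire_frames(d)
        I_bar = (I1 + I2 + I3) / 3.0
        A = np.sqrt(((I1 - I_bar)**2 + (I2 - I_bar)**2
                     + (I3 - I_bar)**2) / 1.5)          # phase stepping
        if A_m is None:
            A_m = A.copy()
            d_s = np.full(A.shape, d, dtype=float)
        else:
            better = A > A_m                            # surface finding
            A_m[better] = A[better]
            d_s[better] = d
    return d_s, A_m
```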

2.5 Surface Profile Measurements

In an initial experiment designed to establish the correct behaviour of the system, a 5 pence coin (figure 2.6) was measured. This is suitable due to its rough, reflective surface, overall size and small depth range. A topography measurement was performed by translating the coin over a range of 400 µm in 1 µm steps. The maximum depth resolution is thus limited to 1 µm. The resultant depth matrix, which contains surface positions at each pixel, is presented as a grey-scale image, where the image intensity is a measure of depth. This is shown in figure 2.7, where the image size is 512 by 512 pixels.

A profile of the coin surface (figure 2.8) clearly shows the height of the 5 as well as a number of rogue points. Because the scan range in this experiment was not sufficient to reach the surface on which the coin was mounted, a large number of rogue points is observed to the right of figure 2.8 and at the corresponding position at the bottom of figure 2.7. To reduce the amount of rogue data in measurements, a thresholding technique was introduced. This is discussed in the next section.


[Figure 2.5: Flow chart of data acquisition and hardware control - move the translation stage to its initial position; acquire an image, I(x, y), from the CCD camera, incrementing the reference mirror position by 1/6 of a wavelength, until three images are acquired; compute the average of the three images and the interference amplitude A(dj+1, x, y) (phase stepping); if A(dj+1, x, y) exceeds the maximum amplitude Am(x, y) encountered up to object position dj, set Am(x, y) = A(dj+1, x, y) and ds(x, y) = dj+1 (surface finding); increment the object translation stage to a new position dj+1 until the depth range is covered; finally, store the depth matrix ds(x, y)]


Figure 2.6: 5 pence coin (the rulings shown are 0.5 mm)

[Figure 2.7: Surface topography of a 5 pence coin; depth is indicated by colour (scale in µm)]


[Figure 2.8: Profile cross-section at the position indicated by the dashed line in figure 2.7 - depth (microns) versus position (pixel no.)]

2.6 Noise Thresholding and Surface Interpolation

To assure a reliable topography measurement, a noise thresholding algorithm was developed. This determines if the amplitude of the interference is sufficient to allow an accurate measurement. A threshold value proportional to the residual noise is computed for each pixel. If the maximum interference amplitude detected during a topography measurement is smaller than this threshold value, no surface position is stored.

During the Coherence Radar measurement process, the surface finding algorithm searches for the object position at which the interference amplitude reaches a maximum. If no interference is present, or the CCD detector is either under- or over-exposed, a maximum will not occur and a surface position cannot be found. However, in the presence of noise, a maximum will occur at a random position and will produce an incorrect measurement (rogue point). To make reliable surface measurements, it is important to identify and remove these rogue points. Accordingly, an addition was made to the surface finding algorithm, incorporating a noise threshold. A similar method has also been reported in [65].

The interference amplitude noise threshold value for the pixel position x, y is determined by positioning the object so that no part of its surface returns light coherent with the reference. In the absence of any interference, the resultant interference amplitude must therefore consist entirely of noise. This is repeated several times to obtain a reliable measure of the mean and standard deviation of the noise. The mean, \bar{A}(x, y), and standard deviation, σ, of the amplitude noise samples are determined for each pixel individually, and the threshold value is computed as

A_{th}(x, y) = \bar{A}(x, y) + n_\sigma\,\sigma

where n_σ is determined empirically to give the best noise rejection (see also section 2.7).

Once measurement of the surface topography is completed, this threshold value is compared to the maximum detected interference amplitude, Amax(x, y). Any surface element in the depth matrix, ds(x, y), is removed if the underlying amplitude, Amax(x, y), is smaller than the threshold value, Ath(x, y).

To allow unambiguous identification of the rogue points, rogue values in the depth matrix, ds(x, y), are set to a unique value reserved for that purpose. However, since the final depth matrix should contain a realistic surface measure at every pixel position, iterative mean filtering is applied. This interpolates the missing points (which have been set to the unique value) by replacing them with the average of neighbouring points. Since the nearest neighbours may also be missing, the process is iterated until all points are interpolated.
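The thresholding and interpolation steps might be sketched as follows (Python/numpy; the reserved value, function name and four-neighbour averaging scheme are illustrative assumptions):

```python
import numpy as np

MISSING = 0.0   # unique depth value reserved for rejected pixels

def threshold_and_interpolate(d_s, A_max, A_th, max_iter=1000):
    """Remove rogue points whose peak amplitude falls below the per-pixel
    threshold, then fill them by iterative mean filtering."""
    missing = A_max < A_th
    d_s = np.where(missing, MISSING, d_s).astype(float)
    for _ in range(max_iter):
        if not missing.any():
            break
        # mean of the four nearest neighbours, ignoring missing ones
        # (pixels whose neighbours are all missing are filled on a later pass)
        p = np.pad(np.where(missing, np.nan, d_s), 1, constant_values=np.nan)
        neigh = np.nanmean(np.stack([p[:-2, 1:-1], p[2:, 1:-1],
                                     p[1:-1, :-2], p[1:-1, 2:]]), axis=0)
        fill = missing & ~np.isnan(neigh)
        d_s[fill] = neigh[fill]
        missing &= ~fill
    return d_s
```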

2.7 Evaluation of Noise Thresholding

In this section we present results obtained with the Coherence Radar system by incorporation of the thresholding approach described in the previous section. To this end, its performance is assessed by measuring the surface of an object which would typically produce a large proportion of rogue points.

In order to test the thresholding, an object possessing a large reflectivity range had to be found. To create a surface with these properties, a steel ball bearing was forced into an aluminium slab at high pressure to form a hemispherical crater (diameter ≈ 8 mm). The metal sample was then partly polished to create a rough, but mostly specularly reflective surface. The reflections from this surface create an image with a large range of intensities, since the steep surface gradients of the crater 'walls' return only a small amount of diffuse light while the surfaces normal to the optic axis reflect a large proportion of the incident illumination.

A measurement of the crater shape was made by translating the object over a range of 5.2 mm in 1 µm steps. The CCD exposure time was adjusted to yield a suitable signal from the largest possible number of pixels. The remaining pixels were either saturated or under-exposed, preventing the detection of a sufficient interference signal in some regions.

In this experiment, the threshold value was computed from 100 samples prior to the acquisition process. The value nσ was empirically determined to yield an acceptable balance between missing and rogue points. The surface topography in figure 2.9 shows a series of rogue points which were removed by this method and replaced with a unique depth value (0), shown as black.

The existence of remaining rogue data points can be seen in the surface cross section presented in figure 2.10 (the transverse position of this profile is indicated by the dashed line in figure 2.9). The remaining rogue data points can be attributed to the way in which the threshold value is measured: since the object has to be positioned so that no interference is present, it has to be placed outside the coherence plane (and thus the focal plane) of the interferometer (figure 2.1). During the surface measurement, however, the interfering surface areas always coincide with the focal plane, i.e. the image is out of focus during the noise measurements but in optimal focus during the surface measurement process. A solution to this may be to eliminate interference by blocking the reference arm, rather than displacing the object.


[Figure 2.9: Surface of the hemispherical crater, depth indicated by colour (scale in microns)]

[Figure 2.10: Surface profile of the hemispherical crater after thresholding - depth (microns) versus position (pixel no.); the central spike is a remaining rogue point]


[Figure 2.11: Surface with missing points interpolated]

The data shown in figure 2.9 was subsequently interpolated using the iterative mean filter. A subjective impression of the technique's effectiveness can be gained from figure 2.11, which shows the data after interpolation. Parts of the image show a granular appearance which does not correspond to any real surface features. These correspond to areas where little light was returned from the steep walls of the crater and a breakdown of the method was to be expected. Areas closer to the centre have been interpolated well and are in good agreement with the continuous semi-circular surface of the object. We have found that any remaining rogue points can be removed successfully by median filtering the data.

2.8 Analysis of Hypervelocity Impact Craters

In this section we present a new and interesting application of Coherence Radar: the study and analysis of hypervelocity impact craters. The physical and chemical properties of natural dust, meteoroids and artificial 'debris' in the space environment are of considerable interest to space science research. A better understanding of particle flux and composition allows the prediction of impact frequency and hazard to space missions and ultimately aids in the development of components suitable for space missions. Post flight analysis of spacecraft helps to decode the origins of impactors by allowing the study of chemical signatures, impact flux and impact site morphology. This requires in part the calibration of impact signatures using laboratory generated craters [66].

Missions such as LDEF (Long Duration Exposure Facility), HST (Hubble Space Telescope) and EURECA (European Retrievable Carrier) have exposed various materials to the space environment. The recovered materials constitute an immense archive of impact data. Many studies of this and other data have been published (see for example the three Post-Retrieval Symposia Proceedings for LDEF [67-69]) and work is ongoing in many laboratories.

Most studies of crater shapes have concentrated on simple measures such as maximum depth, central depth, mean diameter at the ambient plane, circularity index etc., together with qualitative accounts of other features such as lip characteristics or deviations from axial symmetry, e.g. up-range/down-range asymmetry [70].

Only a small fraction of the information contained in the shapes of craters has been extracted and used. If all the information contained in the crater shapes were available, it might prove possible to improve significantly the usefulness of crater morphology for inferring properties of impactors.

Topography measurements made by Coherence Radar can supply invaluable information on the deformation caused at impact. The ability to image large areas of interest with high resolution makes this a particularly useful tool in the investigation of craters generated in the laboratory.

However, due to the large amount of raw data made available by Coherence Radar measurements, a meaningful quantitative comparison of crater morphology becomes difficult. In an effort to reduce this data to a few parameters describing the overall impact shape, we have tried to approximate the surface by use of Zernike polynomials. Since this allows the representation of surface shapes in parametric form, it makes it possible to compress all the numerical data to a smaller set of coefficients.

Zernike polynomials were primarily chosen because they have a geometry which seems highly suited to the description of typical impact craters. It is expected, therefore, that a good approximation to typical crater shapes may be made with a relatively small number of terms, allowing a convenient parametric description of crater features which may thus enable meaningful categorisation and distinction between different types of impact events.

Surface Measurements

In our experiments, four impact craters were generated in the laboratory using the University of Kent's light-gas gun facility. Four identical impactors consisting of spherical steel ball-bearings of radius 1 mm were fired onto a target consisting of a flat plate of aluminium alloy. The impact velocity of all four impactors was estimated to be between 4.7 and 4.9 km s−1. The first two craters (craters 1 and 2) were head-on impacts (angle 0° with respect to the normal), while craters 3 and 4 were formed by inclining the target plate such that its normal made an angle of 70° with respect to the trajectory of the impactor. A photograph of crater 2 is shown in figure 2.15.

The overall lateral dimension of the craters exceeded that of previously examined objects and required a new telescope with lower magnification. The reduced optical magnification invariably led to a reduction in transverse image resolution. This, however, corresponds well with the less stringent resolution requirements of this application.

In order to aid a meaningful interpretation of the data, the transverse scale of the depth matrix was calibrated. The object size corresponding to one pixel or element in the depth matrix was evaluated (18.7 ± 0.6 µm) by using a standard USAF resolution chart.


Figure 2.12: Three dimensional representation of crater 1 (head-on impact)

[Figure 2.13: Surface topography of crater 1 (head-on impact) - transverse position in mm, depth colour scale in µm]


[Figure 2.14: Cross section showing the surface profile of crater 1 (position indicated in figure 2.13) - object displacement (µm) versus transverse position (mm)]

Figure 2.15: Photograph of crater 2 resulting from a head-on impact (the rulings shown are 0.5 mm)


[Figure 2.16: Surface topography of crater 2 (head-on impact) - compare to the photograph in figure 2.15; transverse position in mm, depth colour scale in µm]

[Figure 2.17: Surface topography of crater 3 (impact at 70° to the normal) - transverse position in mm, depth colour scale in µm]


[Figure 2.18: Surface topography of crater 4 (impact at 70° to the normal) - transverse position in mm, depth colour scale in µm]

The topography of each crater was measured over a depth range of 10 mm using 3 µm steps. In order to reduce noise, each of the 3 images required for the phase stepping process was averaged over 20 measurements. The processing time required to average these images and compute the interference amplitude at each object position (using a 512 by 512 image) was approximately 7 seconds. The acquisition of one crater required processing of 3333 × 3 × 20 images (3 images averaged by 20 for each interference amplitude matrix), giving a total of 7 hours.
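(As a quick consistency check: a 10 mm range in 3 µm steps gives the 3333 object positions quoted, and 3333 positions at roughly 7 seconds each amounts to about 6.5 hours, in line with the 7 hour total.)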

The surface data was interpolated after thresholding (see also section 2.6) and median filtered to remove any remaining rogue points. Figures 2.13, 2.16, 2.17 and 2.18 show a 6.73 mm by 6.73 mm area of craters 1 to 4 respectively. The data is presented as a colour coded (scale in µm) representation of the surface height. For comparison, the surface topography of crater 1 (figure 2.13) is also represented by a three dimensional plot (figure 2.12). A cross-sectional profile of crater 1 (figure 2.14) offers yet another representation of the data and shows the steep walls created by the impact. A photograph of crater 2 (figure 2.15) confirms the similarity between the original crater and the experimental data.

The Zernike Circular Polynomials

The Zernike polynomials have found considerable application in optics, notably for describing the aberrations of imaging systems and in the statistical analysis of the aberrations produced by turbulence in the earth's atmosphere [71].

The Zernike circular polynomials are a complete set of two-dimensional functions defined on and orthogonal over the unit radius circle [64]. They are defined by

Page 52: 3-D Imaging using Optical Coherence Radar

CHAPTER 2. SURFACE TOPOGRAPHY USING COHERENCE RADAR 39

Z_j(r, \theta) = \sqrt{2(n+1)}\,R_n^m(r)\cos(m\theta) \quad for j even

Z_j(r, \theta) = \sqrt{2(n+1)}\,R_n^m(r)\sin(m\theta) \quad for j odd

where

R_n^m(r) = \sum_{s=0}^{(n-m)/2} \frac{(-1)^s\,(n-s)!}{s!\,\left(\frac{n+m}{2}-s\right)!\,\left(\frac{n-m}{2}-s\right)!}\; r^{n-2s} \qquad (2.7)

The values of n and m are integral and must satisfy the relations m ≤ n and n − |m| = even. Use of the index j permits a convenient mode ordering in terms of the radial order n and the azimuthal order m - for a given value of n, modes with a lower value of m are by convention ordered first. The orthogonality of the Zernike functions is expressed by

\int\!\!\int dx\,dy\; Z_j(x, y)\,W(x, y)\,Z_k(x, y) = \delta_{jk} \qquad (2.8)

where the weight function is W(x, y) = 1/π and the domain of integration is the unit radius disk (x² + y² ≤ 1).

Accordingly, we designate a circle enclosing the region of interest and approximate the depth function of the crater, ds(x, y), by the Zernike expansion up to the Nth term

d_s(x, y) = \sum_{j=1}^{N} a_j Z_j(x, y) \qquad (2.9)

The expansion coefficients are then generated easily by use of the orthogonality relation of eq. 2.8 as

a_j = \int\!\!\int d_s(x, y)\,W(x, y)\,Z_j(x, y)\,dx\,dy \qquad (2.10)
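As an illustration of how such a fit can be computed, the sketch below implements the radial polynomial of equation 2.7 and a discrete approximation to the projection integral of equation 2.10, under the assumption that the depth matrix has been resampled onto a square grid spanning the unit disk (Python/numpy; all names are illustrative):

```python
import numpy as np
from math import factorial

def zernike_radial(n, m, r):
    """Radial polynomial R_n^m(r) of equation 2.7 (m >= 0, n - m even)."""
    R = np.zeros_like(r, dtype=float)
    for s in range((n - m) // 2 + 1):
        c = ((-1)**s * factorial(n - s)
             / (factorial(s) * factorial((n + m) // 2 - s)
                * factorial((n - m) // 2 - s)))
        R += c * r**(n - 2 * s)
    return R

def zernike_coefficient(depth, n, m, even=True):
    """Discrete approximation to a_j (equation 2.10) for the mode of
    radial order n and azimuthal order m; 'even' selects the cos (even j)
    versus sin (odd j) variant. depth is an N x N grid covering [-1, 1]^2."""
    N = depth.shape[0]
    x, y = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N))
    r, theta = np.hypot(x, y), np.arctan2(y, x)
    ang = np.cos(m * theta) if even else np.sin(m * theta)
    Z = np.sqrt(2.0 * (n + 1)) * zernike_radial(n, m, r) * ang
    W = 1.0 / np.pi                  # weight function of equation 2.8
    dA = (2.0 / N) ** 2              # pixel area on the [-1, 1]^2 grid
    inside = r <= 1.0
    return np.sum(depth[inside] * W * Z[inside] * dA)
```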

Zernike Decomposition of Laboratory Craters

The Zernike decomposition of craters 1-4 was calculated through use of eq. 2.10. Figures 2.19 to 2.22 show three dimensional representations corresponding to the Zernike fits (N = 150 modes) of craters 1-4 respectively, as well as the individual contributions from radially symmetric and azimuthally dependent terms. In general the Zernike fits are excellent approximations. The normalised mean-squared deviation2 of all craters is just under 0.05. The only significant discrepancies between the original data and the Zernike approximations occur where the fit does not quite achieve the very steep walls of the craters. The coefficients of the different Zernike modes can give an indication of the basic shapes, as well as of the higher order properties. It was found, for example (as would reasonably be expected), that the contribution from azimuthally dependent modes was higher in the fits of the craters produced by oblique impacts (70° to the normal) than in the ones produced by head-on impacts. Thus we expect that the analysis in terms of Zernike polynomials will provide opportunities for fertile comparative studies.

2The normalised mean-squared deviation is the mean-squared deviation between the fit and the data divided by the square deviation from the zero baseline.

Figure 2.19: Zernike fit of crater 1

Figure 2.20: Zernike fit of crater 2

Figure 2.21: Zernike fit of crater 3

Figure 2.22: Zernike fit of crater 4

The natural matching between Zernike modes and typical crater features and the inherently large amount of information carried in the coefficients suggest that they may provide a powerful tool for the analysis of crater morphology. This work has led to the publication of two semi-quantitative approaches to analysing the Zernike representation of crater morphologies [72, 73]. Results are very encouraging and it is anticipated that significant further research effort will now be undertaken in this area [74].

2.9 Noise

In order to gain an understanding of the limiting factors determining the accuracy and resolution of surface measurements, in this section we attempt to quantify the amount of noise introduced by the components of the Coherence Radar system. Specifically, we will examine the effect on the accuracy of interference amplitude measurements of:

1. Systematic errors produced by phase stepping

2. Inaccurate displacement of the reference mirror due to PZT hysteresis

3. Mechanical vibrations which can induce random path fluctuations

4. Electronic noise present in the digital imaging system.

In each case observed values are used as the basis for a theoretical evaluation of interference amplitude errors. Section 2.10 then examines how these errors influence the accuracy of topography measurements by use of a theoretical model estimating the behaviour of a peak search. Finally, section 2.11 compares these predictions with empirical measurements of topography accuracy.

2.9.1 Phase Stepping Error

Phase stepping (section 2.2.1 on page 22) yields exact results only if the amplitude of interference remains constant for all three intensity samples, Ii = I(d + iλ/6), where i = 1, 2, 3. Since the amplitude is not constant in practice, interference measurements will contain systematic errors. We will now derive this error numerically.

Due to the properties of the source, the interference amplitude may be approximated by a Gaussian function of the form

A(d) = A_m \exp\left(-\frac{d^2}{2\sigma^2}\right) \qquad (2.11)

where d is the object or reference mirror position, Am is the maximum interference amplitude and σ is the standard deviation3. An estimate of the systematic interference amplitude error was computed by applying the phase stepping algorithm to a simulated interference pattern (figure 2.23) of the form:

where d is the object or reference mirror position, Am is the maximum interferenceamplitude and σ is the standard deviation3. An estimate of the systematic interferenceamplitude error was computed by applying the phase stepping algorithm to a simulatedinterference pattern (figure 2.23) of the form:

3A value of σ = 5.6µm was determined experimentally by fitting a Gaussian function to 10 interfer-ence amplitude profiles.


Figure 2.23: Numerical simulation of low-coherence interferogram

Figure 2.24: Error in demodulating Gaussian interference amplitude


I(d) = B + A(d)\cos\left(\frac{4\pi d}{\lambda}\right) \qquad (2.12)

where B is the background intensity (B ≥ Am) and λ = 830 nm. The interference amplitude, Aps(d), is then computed using the phase stepping algorithm (as shown in section 2.2.1 on page 22) such that

I_i = I\left(d + i\,\frac{\lambda}{6}\right), \quad i = 1, 2, 3. \qquad (2.13)

The amplitude error, Ea, is then given by Aps − A. A plot of the normalised amplitude error, Ea/Am, versus d is shown in figure 2.24. It can be seen that only a relatively small error of ±0.9% results from phase stepping, such that its influence on the final result will generally be negligible. A comparison of figures 2.23 and 2.24 indicates a correlation between the slope of the interference amplitude, A(d), and the magnitude of this error.
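This numerical experiment is straightforward to reproduce. The sketch below (Python/numpy) uses the value of σ quoted in the footnote; the scan range and grid spacing are arbitrary choices. The same machinery, with the argument of the cosine modified as in equations 2.14 and 2.15, also serves for the hysteresis and vibration estimates of the following sections:

```python
import numpy as np

lam, sigma, B, A_m = 0.830, 5.6, 2.0, 1.0        # all lengths in microns

def envelope(d):
    return A_m * np.exp(-d**2 / (2 * sigma**2))  # equation 2.11

def I(d):
    return B + envelope(d) * np.cos(4 * np.pi * d / lam)  # equation 2.12

d = np.linspace(-20.0, 20.0, 4001)
I1, I2, I3 = I(d + lam / 6), I(d + 2 * lam / 6), I(d + 3 * lam / 6)  # eq. 2.13
I_bar = (I1 + I2 + I3) / 3.0
A_ps = np.sqrt(((I1 - I_bar)**2 + (I2 - I_bar)**2 + (I3 - I_bar)**2) / 1.5)
E_a = A_ps - envelope(d + 2 * lam / 6)   # error relative to the central sample
print(np.abs(E_a).max() / A_m)           # of order 0.009, cf. the 0.9 % above
```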

2.9.2 PZT Hysteresis

The phase stepping accuracy can also be adversely affected by positional errors of the PZT mounted reference mirror. The position error incurred by the PZT transducer over a range of 500 nm was observed to be as large as 40 nm due to hysteresis. At a source wavelength of λ = 830 nm this corresponds to a phase error of ≈ π/5 radians.

In order to simulate the displacement error of the reference mirror, we attempted to model a worst-case hysteresis behaviour. Figure 2.25 shows how the assumed linear behaviour and the actual hysteresis behaviour may be approximated by linear functions relating voltage and PZT expansion. The voltage applied to the PZT material during an experimental measurement is derived assuming a linear behaviour; the actual displacement, however, is governed by an unknown hysteresis behaviour. The displacement error, E, is then the difference between the actual displacement and the assumed displacement. As shown in figure 2.25, E is proportional to the applied voltage, V, and thus also to the assumed displacement, d (since d ∝ V). We have thus chosen to model E as a linear function of d such that E(d) = E0·d, where E0 is the position error per displacement.

An estimate of the interference amplitude error, Ea, resulting from measurements made with a displacement error in the phase shifting device was computed by applying the phase stepping algorithm to a simulated interference pattern of the form:

I(d) = B + A\cos\left[\frac{4\pi(d + E(d))}{\lambda}\right] \qquad (2.14)

where A is the amplitude of the interference signal (assumed to be constant), B is the background intensity (B ≥ A) and λ = 830 nm. The interference amplitude, Aps(d), is then recovered using the phase stepping algorithm (section 2.2.1). Using the empirically determined value E0 = 40 nm/500 nm = 0.08, the normalised amplitude error Ea/A is evaluated for a number of reference mirror displacements, d, as shown in figure 2.26. From this it can be seen that the maximum error is as large as 10% and thus may have a significant effect on the final measurement.


[Figure 2.25: Hysteresis of the PZT material - displacement, d, versus applied voltage, comparing the assumed linear behaviour with the actual hysteresis behaviour; the voltages V1, V2, V3 intended to produce displacements of λ/6, λ/3 and 2λ/3 incur the displacement errors E(λ/6), E(λ/3) and E(2λ/3)]


Figure 2.26: Amplitude error as a result of PZT hysteresis

Figure 2.27: Numerical simulation of interference in the presence of mechanical vibrations


Figure 2.28: Distribution of amplitude error

2.9.3 Vibrational Noise

The entire optical system was supported on a metal honeycomb optical breadboard. It was observed that vibrations transmitted to the apparatus from the floor induced OPD changes of up to 200 nm in the interferometer. The associated interference amplitude error resulting from these vibrations was computed numerically by applying the phase stepping algorithm to a simulated interferogram with varying degrees of added vibrational noise. The interference is given by

I(d) = B + A\cos\left[\frac{4\pi(d + n_v)}{\lambda}\right] \qquad (2.15)

where d is the object displacement, A is the amplitude of the interference, B is the background intensity (B ≥ A), and nv is a random, uniformly distributed path length variation (±0.1 µm). A plot of I(d) versus d, showing the simulated interference signal, can be seen in figure 2.27. The interference amplitude, Aps(d), is then computed using the phase stepping algorithm such that

I_i = I\left(d + i\,\frac{\lambda}{6}\right), \quad i = 1, 2, 3. \qquad (2.16)

The distribution of the normalised interference amplitude error, Ea/A, where Ea is Aps − A as before, is shown in figure 2.28 and has a standard deviation of σ = 0.2. This shows that substantial interference amplitude fluctuations of up to 20% can result from mechanical vibrations. Use of vibration isolators may help reduce this effect considerably.

2.9.4 Image Noise

When considering image noise one has to distinguish between fixed pattern noise (variations from pixel to pixel in the same frame) and frame to frame noise (variations from frame to frame at the same pixel). Fortunately, since the data processing algorithm operates on each pixel independently, fixed pattern noise (sensitivity variation from pixel to pixel) does not affect the accuracy. Interference amplitude measurements are, however, affected by frame to frame noise, which is introduced at a number of different stages, such as CCD readout, electronic amplification and digitisation.

[Figure 2.29: Relationship between image noise (σccd) and the resultant amplitude error (σA)]

An estimate of the relationship between image noise and the associated interference amplitude error was derived numerically by recovering the amplitude of a number of simulated interferograms with varying degrees of added noise. A simulated interference pattern was computed using the following relationship (analogous to equation 2.15):

I(d) = B + A\left[\cos\left(\frac{4\pi d}{\lambda}\right) + n_{ccd}\right] \qquad (2.17)

where the image noise, nccd, was generated randomly with a Gaussian distribution of standard deviation σccd. As in the previous sections, the interference amplitude, Aps, was recovered by applying the phase stepping algorithm. The resulting normalised error (Ea/A, where Ea = Aps − A) and its associated standard deviation (σA) were then computed for a number of different σccd. A plot of the image noise (σccd) versus the resulting interference amplitude noise (σA) can be seen in figure 2.29 and shows an almost linear relationship between the two.
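A sketch of this numerical experiment (Python/numpy; the noise levels in the sweep are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, B, A = 0.830, 2.0, 1.0                       # microns; constant amplitude
d = np.linspace(0.0, 50.0, 20001)

def sigma_A(sigma_ccd):
    """Std of the normalised amplitude error for image noise of level
    sigma_ccd, using the interferogram model of equation 2.17."""
    def I(shift):
        n_ccd = rng.normal(0.0, sigma_ccd, d.shape)
        return B + A * (np.cos(4 * np.pi * (d + shift) / lam) + n_ccd)
    I1, I2, I3 = I(lam / 6), I(2 * lam / 6), I(3 * lam / 6)
    I_bar = (I1 + I2 + I3) / 3.0
    A_ps = np.sqrt(((I1 - I_bar)**2 + (I2 - I_bar)**2 + (I3 - I_bar)**2) / 1.5)
    return np.std((A_ps - A) / A)

for s in (0.036, 0.1, 0.5, 1.0):                  # 0.036 = 4.5/125 (best case)
    print(s, sigma_A(s))                          # grows almost linearly, cf. figure 2.29
```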

We can now use this relationship to deduce the interference amplitude error for a known amount of image noise. First, however, it is necessary to arrive at a realistic value of σccd. Since σccd is a dimensionless number, it depends not only on the noise produced in the imaging system (A·σccd), but also on the strength of the interference signal (A) present during the measurement (see equation 2.17).


First a worst-case estimate is made. We assumed that the smallest interference amplitude (A) required to allow a valid surface measurement is equal to twice the actual image noise (Aσccd). Thus, in this case we require no actual measurement of image noise, resulting in σccd = 0.5 and a corresponding value of σA = 0.4.

A best-case estimate requires a measure of the actual image noise (the standard deviation of this was measured to be 4.5, expressed as an 8-bit digital number in the range 0-255; see also appendix A) such that the best ratio between noise and interference signal may be evaluated. Since the amplitude of the interference signal can be no larger than approximately 125 (the imaging system has an 8-bit digital intensity range of 0 to 255), the smallest possible value of σccd (4.5/125) yields σA ≈ 0.04 (by extrapolating the relationship in figure 2.29), an order of magnitude less than the worst case.

We have determined that the noise in the imaging system results in an interference amplitude error of up to 40%. This is twice as much as that produced by mechanical vibrations (20%) and four times as much as caused by PZT hysteresis (10%). However, depending on the strength of the interference signal this may be substantially smaller, as shown above. Nevertheless the prevention of this kind of error should be of primary concern when designing Coherence Radar systems.

One way to reduce the image noise is to average the intensity measurements over N images before phase stepping. The surface profiling software developed for our implementation of Coherence Radar allows this type of averaging, and can thus reduce the noise by a factor of √N (this also reduces noise induced by mechanical vibrations). However, we have found that this is achieved at the cost of a diminished signal. If the mechanical vibrations induce a phase shift of φ ≥ 2π, the signal may be lost, since in this case an average of I = B + A sin(4πd/λ + φ) over many measurements is equal to I = B.

In the following section we will discuss how the errors computed in this section affect the accuracy of the final surface topography measurement.

2.10 Accuracy of Surface Location

Although the surface finding algorithm (a peak search to locate the centre of the coherence function) has no intrinsic sources of error, it is sensitive to the noise present in the interference amplitude measurements.

In this section we establish the relationship between the noise in the interference amplitude measurements and the accuracy of the peak location. When a peak search is performed, the maximum accuracy achievable is related to the shape of the interference profile (which approximates to a Gaussian in our case) and the uncertainty of the measurement.

Let us assume a Gaussian interference amplitude profile of the form

A(d) = Am exp[−(d − ds)²/(2σ²)],    (2.18)

where d is the object position, Am is the peak amplitude and σ is the standard deviation defined by the coherence length of the source. If, as in figure 2.30, the maximum interference amplitude occurs at the object position d = ds, then given an interference amplitude error, Ea, the largest resulting position error may be approximated by the distance Ed such that A(ds) = A(ds + Ed) + Ea. Since A(ds) = Am, Ed can be evaluated by solving





1 − exp(−Ed²/(2σ²)) = Ea/Am    (2.19)

for Ed

Ed = √(−2σ² ln[1 − Ea/Am]),   where Ea/Am < 1    (2.20)

Figure 2.30: Peak search: relationship between amplitude error (Ea) and position error (Ed)

Source                     Section   Interf. amp. error (Ea/A)   Position error (µm)
Phase Stepping inherent    2.9.1     ± 0.009                     ± 0.32
PZT hysteresis             2.9.2     ± 0.110                     ± 1.14
Mechanical vibrations      2.9.3     ± 0.2                       ± 1.58
Image noise                2.9.4     ± 0.05 - 0.4                ± 0.76 - 2.39

Table 2.1: Values of approximate positional error based on interference amplitude error

Using the previously determined value of σ = 5.6µm (based on measurements of the interference profile using our source), the positional accuracy of the peak finding process can be computed given the error values determined in the last section. Table 2.1 summarises all the error sources discussed in the last section and presents the corresponding surface finding error, Ed, determined using equation 2.20.


Errors due to phase stepping are smaller than the smallest step size used in displacing the object over the range of interest and thus may be neglected. PZT hysteresis, mechanical vibrations and image noise, however, produce sufficiently large errors (up to ±2.5µm) to be considered limiting factors in the overall measurement accuracy.

Superior depth resolution could be achieved by:

• Decreasing mechanical vibration by introducing isolators.

• Reducing image noise.

• Using translation stages with higher accuracy and resolution.

• Replacing the peak finding method with algorithms already presented for Linnik and Mirau type interferometers, such as centroid finding [75] and frequency domain analysis [62].

To complete the discussion of accuracy, another more fundamental source of error should be considered. Due to the size of a pixel on the CCD sensor, a finite area of the object surface is imaged by each pixel, i.e. there is a limited transverse resolution. During the measurement procedure a single surface height is assigned to each pixel location. Therefore, if the surface of interest is rough, the depth resolution will be limited to the range of surface positions inside that area. However, even if the size of the pixels is decreased sufficiently, the transverse resolution is still limited by aberrations and diffraction. In the case of coherent superposition the finite resolution also gives rise to speckle, which limits the depth resolution to the surface roughness [29].

2.11 Empirical Evaluation of Accuracy

In order to assess the overall accuracy of our Coherence Radar system, the surface position of a flat mirror was measured along a row of pixels and the deviation of this profile from a straight line was determined. This data was acquired by scanning an inclined mirror over a range of 64µm at 1µm steps. No averaging was performed.

In figure 2.31 the interference amplitude is represented by a gray scale image showing its variation along a row of 512 pixels and over a range of 60 object positions (depth). The result of the amplitude peak search is indicated by a black line and shows the position of the experimentally determined profile. Figure 2.32 shows a plot of the line of best fit through these surface positions. The residuals indicate an RMS deviation from the straight line of best fit of 1.1µm. Because the test surface (mirror) is not optically rough, the resolution is not limited by the surface roughness and the RMS value can be compared to the figures in table 2.1, showing good agreement.
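The reduction of the measured profile to a single accuracy figure is a routine least-squares step; a minimal sketch, assuming the surface heights are held in a NumPy array with one value per pixel, is:

    import numpy as np

    def rms_deviation_from_line(heights_um):
        """Fit a straight line to a measured surface profile and
        return the RMS of the residuals (flat-mirror accuracy test)."""
        x = np.arange(len(heights_um))
        slope, intercept = np.polyfit(x, heights_um, 1)   # line of best fit
        residuals = heights_um - (slope * x + intercept)
        return float(np.sqrt(np.mean(residuals ** 2)))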

2.12 Conclusion

In this chapter we have presented the principles of Coherence Radar, described our experimental system and have presented results of various surface measurements. We have demonstrated its ability to measure the topography of a rough reflective metal surface and have developed a method to filter noise using thresholding. Results of a new and promising application were presented: the measurement of hypervelocity impact craters and their subsequent analysis using Zernike polynomials.



Figure 2.31: Interference amplitude vs. depth along a line of 512 pixels, showing measured surface position


Figure 2.32: RMS deviation of surface position from line of best fit


This involved measuring surface topographies of objects as large as 9 by 9mm, containing deep holes and steep walls over a depth range of at least 10mm. Finally, the accuracy of Coherence Radar was investigated theoretically and compared to empirically determined results.

In summary, Coherence Radar is ideally suited for surface measurements of large objects which require a high accuracy and large depth range. Potential applications include the inspection of manufactured parts containing milled slots, drill holes or cracks, comparative analysis of deformations and the study or documentation of biological specimens.


Chapter 3

Imaging of Multiple Reflecting Layers

3.1 Introduction

Building on the work described in chapter 2, the Coherence Radar method is modified to allow tomographic and volume imaging. Scanning confocal microscopy [12, 14] and optical coherence tomography (OCT) [31, 43] are capable of delivering tomographic images of both inert and biological material (see also section 1.4.3 on page 8). However, these methods suffer from drawbacks associated with mechanical beam scanning such as vibration, motor wobble [76], path length modulation [77], geometric distortions [78, 79] and slow speed. Coherence Radar overcomes many of these limitations by the use of a detector array (CCD) and thus potentially offers increased speed and stability.

Recently a number of new low-coherence methods have emerged which use a CCD array as the detector of interference. These methods are capable of imaging multilayer structures [48], human skin [80], and highly scattering tissue [81], but do not make use of the CCD as a fast two-dimensional imaging device. Rather, the CCD is employed to measure the spectrum of reflected light [48, 80] and the radial distribution of scattered light [81]. Although Swanson [82] describes a technique similar to Coherence Radar, designed for the study of translucent materials, to our knowledge no results have been published to date which document its successful application.

Thus, we present for the first time, tomographic images of multilayer structures obtained using CCD based low-coherence interferometry without the need for mechanical transverse scanning [83].

Potential applications may include the investigation of opaque objects embedded in a transparent medium and the location of interfaces between transparent media of different refractive index, such as fibre composite materials and ceramic structures. In addition, there is considerable interest in methods capable of imaging biological tissue, in particular structures in the human eye such as the cornea, lens and retina. We note here that most biological tissue is highly scattering and thus reduces both the intensity of the reflected light as well as the accuracy and resolution of measurements. We therefore defer discussion of such problems to chapter 4 and here concentrate on applications which do not involve scattering media.


Figure 3.1: Interfaces separated by ∆d = 11λ

3.2 Theoretical Considerations

Compared to the investigation of opaque object surfaces (chapter 2), the study of translucent materials with multiple reflecting layers makes more stringent demands on signal detection and analysis. An attempt is thus made here to theoretically outline some of the factors affecting the measurement of multilayer structures and to gauge limitations on performance.

A number of complications arise as a consequence of moving from surface investigations to studies of translucent volumes.

Axial Resolution: When studying multilayer objects, more than one feature has to be identified along the optic axis. Since several interference maxima can now be observed, it is no longer sufficient to use a peak search to locate the position of interfaces.

Weak Interference Signal: Light reflected from a multilayer object contains a coherent as well as an incoherent component. If the number of reflections originating from outside the coherence plane is large, returned light may contain only a small fraction of coherent light (useful signal) and a detector with a high dynamic range must be used to measure the signal.

Object-Light Interactions: When imaging multi-layered objects using Coherence Radar, light has to travel through the object medium. Physical effects such as delay, refraction, dispersion, scatter and birefringence may affect the measurement.

3.2.1 Resolving Multiple Layers

Requirements for a system which resolves multilayer structures are distinct from those of a profilometer, where an accurate measurement of the surface location is of primary concern. Since there are now several, possibly closely spaced, features along the optic axis, it is no longer sufficient to use a peak search to locate these features.


Figure 3.2: Interfaces separated by ∆d = 11λ + λ/8

Figure 3.3: Interfaces separated by ∆d = 11λ + λ/4


Ideally, the final data should contain a value for the location of each reflecting interface together with the intensity of the reflection. If only a single reflecting interface is present along the optic axis, a measure of the interference amplitude and a subsequent peak search can locate the position adequately. However, can a peak search still accurately locate several closely spaced reflections?

In order to answer this question, let us consider the intensity observed at the output of a low-coherence Michelson interferometer when an object composed of two closely spaced (≈ 9µm) reflecting interfaces is placed in one of the arms. Let the position of this object along the optic axis be d (relative to the first interface) and let the spacing between the interfaces be ∆d. The interference is modelled by assuming a sinusoidal intensity variation modulated by a Gaussian coherence function, γ(d) = exp(−d²/(2σ²)), where σ = 5.6µm, such that γ(d) corresponds to a source coherence length of lc ≈ 25µm (FWHM). The output intensity, I(d), as a function of the object position is then given by:

I(d) = 1 + V {exp[−d²/(2σ²)] cos[4πd/λ] + exp[−(d − ∆d)²/(2σ²)] cos[4π(d − ∆d)/λ]}    (3.1)

where V is the visibility of the interference and λ = 0.830µm. Figures 3.1, 3.2 and 3.3 show a plot of I(d) versus d for a number of interface separations, ∆d. A dashed line indicates γ(d) and γ(d + ∆d), which corresponds to the interference envelope that would be observed for each interface individually. It is interesting to note that fringe beating produces a combined interferogram of unpredictable shape, depending on values of ∆d. A shape equivalent to a simple addition of the interference envelopes (since the sinusoidal terms are in phase) can be observed in figure 3.1. If an additional separation of λ/4 is introduced, the resultant beating effect aids in the distinction of the two interferograms (figure 3.3). However, this also causes the envelope maxima to occur at different positions (compare figures 3.2 and 3.3) even though the separation, ∆d, remains essentially constant.
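The interferograms of figures 3.1-3.3 can be regenerated directly from equation 3.1; a short sketch (with the visibility V = 1 assumed for clarity) is:

    import numpy as np
    import matplotlib.pyplot as plt

    lam, sigma, V = 0.830, 5.6, 1.0   # micrometres; V = 1 assumed

    def two_interface_interferogram(d, delta_d):
        # Equation (3.1): two low-coherence interferograms spaced by delta_d
        term = lambda x: np.exp(-x**2 / (2 * sigma**2)) * np.cos(4 * np.pi * x / lam)
        return 1 + V * (term(d) + term(d - delta_d))

    d = np.linspace(-15, 30, 4000)
    for delta_d in (11 * lam, 11 * lam + lam / 8, 11 * lam + lam / 4):
        plt.plot(d, two_interface_interferogram(d, delta_d))
    plt.xlabel("object position d (micrometres)")
    plt.show()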

In conclusion, it can be said that the ability to differentiate between two reflections becomes difficult, or impossible, if the separation between the interferograms is of the order of the coherence length, lc. Even though the interferograms of individual reflections may still be distinguishable in some cases due to beating effects, the maxima of their envelopes are not a reliable indication of the interface positions.

3.2.2 Simulation of Signal Strength from a Multilayer Object

In chapter 2 the interference between light from a rough object and a plane reference mirror was investigated. When the object surface coincides with the coherence plane, interference will occur, and very high visibility fringes can be measured (see equation 2.1 on page 22). When studying translucent multilayer objects the situation is very different. Coherent light is returned from interfaces within the coherence length of the source; light returned from all other interfaces is incoherent. The result is a small amount of useful signal-carrying light on a large background of incoherent light.

To quantify the amount of signal returned from a multilayer object, we consider a simple mathematical model of a translucent object composed of many identical glass plates, as shown in figure 3.4.


Figure 3.4: Model of multilayer object composed of many identical glass plates


The interference of n discrete reflecting and transmitting interfaces separated by a distance ∆d can be approximated by reflections from the glass-air boundaries in the object (figure 3.4).

We simplify the analysis by making the following assumptions:

• ∆d is large compared to the coherence length of the source

• Light is not scattered or absorbed

• Light is incident as a parallel collimated beam normal to the surface of the glass plates

• No multiple reflections occur between boundaries

Consider placing such a stack of glass plates (containing n interfaces) in the object arm of a Michelson interferometer illuminated by a low-coherence source of central wavelength λ. It can be shown that the output intensity, I(d), as a function of the object position, d, is then given by:

I(d) = It + Ir + 2 ∑_{j=1}^{n} √(Ir Ic(j)) γj(d) cos[4π(d − dj)/λ]    (3.2)

where dj is the object position which minimises the optical path difference (OPD) between interface j and the reference mirror, such that the Gaussian coherence function γj(dj) = 1. Ic(j) is the intensity reflected from interface j, It is the total intensity reflected from the object and Ir is the intensity reflected from the reference mirror.

The intensity, Ic(j), returned from interface j is dependent on the Fresnel reflectivity (R) and transmissivity (T) of the glass boundaries and may be expressed as

Ic(j) = I0 R T^{2(j−1)},    (3.3)

where I0 is the intensity incident on the object. The intensity of light reflected from all n interfaces (It) is then given by

It(n) = ∑_{j=1}^{n} Ic(j).    (3.4)

Using equation 3.2 and assuming d = dj, the amplitude of the interference term, i.e. the interference amplitude (see equation 2.2 on page 22), may be expressed as a function of j:

A(j) = 2√(Ic(j) Ir).    (3.5)

Let us assume that in order to maximise the visibility the intensity reflected from the reference beam (Ir) is adjusted to equal the intensity reflected from the object (It). Using equations 3.3 and 3.4 in equation 3.5 we then obtain:

A(j, n) = 2 I0 R √(∑_{i=1}^{n} T^{2(i+j−2)}).    (3.6)

Figure 3.5 shows a plot of A(j, n)/I0 versus j for a stack of 100 glass plates (n = 200). Using the Fresnel equation [84], and assuming T = 1 − R, the values of R and T were evaluated for two cases:


Figure 3.5: Interference amplitude versus interface number in a stack of 100 glass plates

1. The gaps between the glass plates are filled with air (assuming a refractive index of air na = 1, this yields R = 0.043 and T = 0.957).

2. The gaps between the glass plates are filled with water (assuming a refractive index of water nw = 1.322, this yields R = 0.005 and T = 0.995).

Figure 3.5 shows that the decay of A with respect to j is much less rapid for water as compared to air. Since A represents the strength of the signal recorded by the Coherence Radar technique, this suggests that there is a limit to the number of boundaries which may be detected in such an object and that this limit is much lower at high boundary reflectivities, R. In the next section we show that there is indeed such a limit and show how it is determined by the dynamic range of the detector (CCD).
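Equation 3.6 is easy to evaluate numerically. The sketch below computes A(j, n)/I0 for the two cases above (the R values quoted are assumed); the printed decay ratios illustrate how much faster the signal falls off for air gaps:

    import numpy as np

    def interference_amplitude(R, n=200):
        """A(j, n)/I0 from equation (3.6) for interfaces j = 1..n."""
        T = 1 - R
        i = np.arange(1, n + 1)
        return np.array([2 * R * np.sqrt(np.sum(T ** (2 * (i + j - 2))))
                         for j in range(1, n + 1)])

    A_air = interference_amplitude(R=0.043)     # air gaps between plates
    A_water = interference_amplitude(R=0.005)   # water filled gaps
    print(A_air[0] / A_air[-1], A_water[0] / A_water[-1])  # decay ratios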

Dynamic Range

When imaging multilayer structures only the coherent part of the returned light is useful. When a large amount of incoherent light is present it is necessary that the detector resolves the small coherent signal superimposed on a large incoherent background. Even though the incoherent light does not contribute any useful signal, it nevertheless contributes to the saturation of the detector. If the light is attenuated sufficiently to prevent saturation, the amplitude of the coherent signal may then be less than the noise floor of the detector. Thus, in practice, the detection of a small coherent signal may not be possible since it is limited by the dynamic range of the detector. The dynamic range may be defined as the ratio between the saturation and noise level of a detector. The dynamic range, R, required to detect interference so that the magnitude of the signal, Imax − Imin, is S times larger than the noise floor, is given by:

R = S Imax/(Imax − Imin)    (3.7)

Using equation 3.2 and assuming d = dj this becomes


Figure 3.6: Dynamic range required to detect the interference signal from interface j in a stack of 100 glass slides (200 interfaces)

R(j) = (S/2) [ (Ir + It) / (2√(Ir Ic(j))) + 1 ].    (3.8)

With the help of the model developed in section 3.2.2 we can now derive the dynamic range required to measure the positions of reflective interfaces in a stack of glass plates. Using equations 3.3 and 3.4 in equation 3.8 and again assuming that Ir = It we obtain

R(j) = (S/2) [ √(∑_{i=1}^{n} T^{2(i−j)}) + 1 ].    (3.9)

Values of R(j)/S are evaluated for n = 200 (a stack of 100 glass plates) and plotted as a function of j in figure 3.6.

From figure 3.6 it is evident that a typical CCD signal digitised to 8-bit precision (R = 256:1) does not offer sufficient dynamic range to allow position measurements of air-glass interfaces beyond j ≈ 110.


Although these values are based on a simple model, we may conclude that for industrial applications, where objects are composed of a few interfaces of high relative refractive index (such as, say, 10-50 glass-air boundaries), the system should be adequate, but when measuring complex structures consisting of a large number of such interfaces an imaging system with a much higher dynamic range is required. This is not so for the case of glass-water boundaries. Here a low dynamic range is sufficient to image a large number of reflective interfaces.
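A sketch of the underlying calculation (equation 3.9, evaluated as R(j)/S, i.e. with S = 1) locates the point at which an 8-bit detector runs out of headroom:

    import numpy as np

    def required_dynamic_range(R, n=200, S=1.0):
        """R(j)/S from equation (3.9) for interfaces j = 1..n."""
        T = 1 - R
        i = np.arange(1, n + 1)
        return np.array([(S / 2) * (np.sqrt(np.sum(T ** (2 * (i - j)))) + 1)
                         for j in range(1, n + 1)])

    req = required_dynamic_range(R=0.043)   # air-glass boundaries
    # first interface whose signal needs more than 8 bits (256:1)
    print("8-bit limit reached near interface", np.argmax(req > 256) + 1)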

3.2.3 Effect of the Object Medium on the Measurement

Delay

Light travelling in a medium of absolute refractive index n > 1 experiences a time delay relative to a path in air or vacuum of the same geometric length; the propagation time is τ = Lg n/c, where Lg is the geometric length of the path and c is the speed of light in vacuum. Thus, an optical path Lo measured using a low-coherence interferometer in this medium corresponds to a geometrical distance given by:

Lg = Lo/n.    (3.10)

Position measurements performed using a low-coherence interferometer are only accurate if the refractive index profile is the same in both arms. In surface profile measurements this condition is satisfied since light in both arms travels in air. When observing translucent samples, however, light travels through the object medium and the refractive index must be known in order to correct for the delay. Since in practice objects consist of a number of materials with different refractive indices, a correction becomes difficult in most cases.

Refraction

Refraction of light at the object medium boundary can alter the path of light entering a multilayer structure. The most severe consequence of this is a shift in the position of the focus. Figure 3.7 illustrates this in the case of a plane boundary between the surrounding air and the medium of the object. As shown, refraction at the object boundary can be related to the numerical aperture of the imaging system (NA) by using Snell's law

n1 sin i = n2 sin r = NA (3.11)

where i and r are the angle of incidence and refraction respectively and n1 and n2 are the absolute refractive indices of the surrounding air and the object medium respectively. The position of the focal plane relative to the front of the object (z) is related to the shift in the focal plane relative to its position in air (δf) by

z tan i = a = (z + δf) tan r    (3.12)

Using equations 3.11 and 3.12 we can then express¹ δf as a function of z:

¹ The author acknowledges the help of George Dobre in deriving this equation.



Figure 3.7: Focal plane shift caused by refractive object medium

δf(z) = [ √((n2² − NA²)/(n1² − NA²)) − 1 ] z    (3.13)

There is also a shift in the position of the coherence plane, δc(z), given by z(1/n2 − 1), which is opposite in direction to that experienced by the focal plane. Consequently the plane in which the coherent image is formed will be separated from the focus by a geometrical distance δt(z) = δf(z) − δc(z), equivalent to a path in vacuum of δt(z)/n2. It is possible to adjust the reference mirror position so that δt = 0, but since z changes during the course of the measurement it is difficult to eliminate this effect entirely.
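As a worked illustration of equation 3.13 and the coherence plane shift (the aperture, indices and depth below are arbitrary example values, not parameters of the experiment):

    import math

    def plane_shifts(z, NA, n1, n2):
        """Focal plane shift (eq. 3.13), coherence plane shift and their
        separation, for a flat boundary at depth z into a medium of index n2."""
        d_f = (math.sqrt((n2**2 - NA**2) / (n1**2 - NA**2)) - 1) * z
        d_c = z * (1 / n2 - 1)             # coherence plane shift
        return d_f, d_c, d_f - d_c         # d_t = d_f - d_c

    # e.g. 1000 um deep into a watery medium with a modest aperture:
    print(plane_shifts(1000.0, NA=0.1, n1=1.0, n2=1.33))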

Several groups have reported successful simultaneous measurements of refractive index and depth using confocal [85] and transilluminating [86] low-coherence interferometers. These techniques could potentially be implemented to compensate for this coherence and focal plane divergence.

Unbalanced Chromatic Dispersion

The object medium of a translucent sample can also introduce considerable chromatic dispersion. Chromatic dispersion is the dependence of propagation speed on the optical frequency of the light. Since, for the purpose of low-coherence interferometry, a broad band source is used, light passing through dispersive media in the object arm will experience a variable delay, depending on its wavelength. If an identical dispersive process is not present in the reference arm, i.e. the dispersion is not balanced, the observed interference as a function of the optical path difference will deviate from that observed in a balanced system.


Shibata et al. [87] predicted theoretically and confirmed experimentally that an unbalanced dispersive path in a two-beam interferometer leads to a significant loss in the degree of coherence as well as broadening of the temporal coherence profile. Both these effects have independent detrimental consequences for the measurement accuracy and resolution. Broadening of the coherence profile can lead to a reduction in axial resolution, while a decreased interference amplitude further reduces the maximum number of interfaces which can be detected in a multilayer object (see section 3.2.1).

The amount of dispersion depends on the interface of interest and the composition of the object. Since the dispersive effect increases with increasing axial distance into the object, interfaces located at the back of the sample will be resolved less accurately than those closer to the front, effectively imposing a limit on the depth to which an object can be investigated at a given resolution.

Polarisation

Interference of two beams is only possible if both contain electric field components parallel to each other. The direction of a field vector can be changed due to polarisation in the object medium. Three effects may affect the polarisation state of a beam.

• Dichroism

• Reflection

• Birefringence

Dichroism can be described as selective absorption, an effect which causes light of a given linear polarisation state to be absorbed while its orthogonal state is transmitted [88]. Dichroism is most commonly exploited to construct linear polarisers, such as polaroid films. Given that the illuminating beam in a two beam interferometer is unpolarised, dichroism in the object medium may weaken the visibility of the interference, but will not cause complete signal loss.

Reflection and transmission through a medium is polarisation dependent if the angle of incidence is oblique [88]. Light reflected from one interface may lack an electric field component which is present in the reference beam. In general, however, this effect is small at small angles of incidence. Since the acceptance angle of the telecentric telescope used in Coherence Radar is small and can be adjusted via an aperture stop (see figure 2.4 on page 26), this effect may be reduced to a negligible level.

Birefringence causes light to propagate at a polarisation dependent velocity [88]. Birefringent materials possess slow and fast axes perpendicular to any propagation direction. Electric field components parallel to the slow and fast axes propagate at two discrete velocities. This may cause two separate low-coherence interferograms to appear, separated by a distance corresponding to the relative optical path shift introduced between the slow and fast propagation. Due to this, interference fading (if the relative phase shift between the fast and slow axes is an integer multiple of π) and coherence profile broadening can occur.

Static strain induced birefringence effects observed in fiberised low-coherence interferometers can be compensated by polarisation control [89]. However, since birefringence effects may vary along the transverse extent of the sample, an implementation of this in bulk, CCD based interferometers is not practical.


A more promising method has been presented [90] which allows simultaneous birefringence characterisation and ranging in a fiberised low-coherence Mach-Zehnder interferometer by means of double detection. It may be possible to implement a similar arrangement in bulk, using two CCD detectors.

3.3 Experimental

Two transparent multilayer objects were measured using Coherence Radar [83]. An initial study concentrated on identifying boundary layers formed by a stack of 20 glass plates, allowing a direct comparison with the theoretical predictions presented in section 3.2.2. The study of a second multilayer object, a damaged solar cell (retrieved from the Hubble Space Telescope), demonstrates an application relevant to impact analysis and Space Science research.

3.3.1 Method

The work described in this chapter was completed using essentially the same experimental system already described in section 2.3 on page 23. Although some modifications were made to the imaging optics (in order to achieve a magnification suitable for individual objects of interest), most modifications were made to the data processing software. The peak search process (described in chapter 2) was not implemented for the study of multilayer objects.

The main stages in the data processing implemented for translucent object imaging were:

• Acquisition of the intensity signal from the CCD camera at three different phase positions.

• Processing of the three phase-stepped images to yield a measure of interference amplitude at every lateral (x, y) position in the image (as described in section 2.2.1)

• Storage of the interference amplitude A(x, y) at object position d (peak searchomitted)

These processing steps are repeated for all object positions (d) along the optic axis (z), yielding a set of data with transverse (x, y) as well as axial (z) extent. Since volume data cannot be represented adequately, figures in this chapter are displayed as x, y (transverse) or x, z (longitudinal) cross sections.
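In outline, the acquisition loop can be sketched as follows; the names stage, camera and pzt are placeholders for the hardware interfaces, not the actual control software:

    import numpy as np

    def acquire_volume(stage, camera, pzt, z_steps, dz_um, lam=0.83):
        """Three phase-stepped frames per depth position, reduced to an
        interference amplitude image A(x, y) at each object position d."""
        volume = []
        for _ in range(z_steps):
            frames = []
            for i in range(3):
                pzt.set_path_shift(i * lam / 6)    # 2*pi/3 phase step
                frames.append(camera.grab().astype(float))
            I1, I2, I3 = frames
            # three-step amplitude recovery (cf. section 2.2.1)
            A = np.sqrt(((2 * I1 - I2 - I3) / 3) ** 2
                        + ((I3 - I2) / np.sqrt(3)) ** 2)
            volume.append(A)               # store A(x, y); peak search omitted
            stage.move_relative(dz_um)     # next object position along z
        return np.stack(volume)            # axes: (z, y, x)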

3.3.2 Investigation of 20 Glass Plates

In this section an experimental investigation of a stack of 20 glass plates is presented. A tomographic cross section perpendicular to the plane of the glass plates (x, z) allows measurement of plate thickness and separation, and results are compared with the theoretical model presented in section 3.2.2.

Using microscope cover slips and paper spacers, a stack was constructed which retains air spaces between the layers of glass. The design of this sample object closely reflects that assumed in the simulation of multilayer objects in section 3.2.2 and shown in figure 3.4.


Figure 3.8: Interferogram of first 8 glass plates in a stack of 20, shown as interference amplitude versus transverse position and object position (µm). The measured plate thicknesses are: glass 1, D = 117µm; glass 2, D = 99µm; glass 3, D = 103µm; glass 4, D = 110µm; glass 5, D = 115µm; glass 6, D = 109µm; glass 7, D = 130µm; glass 8, D = 102µm


Figure 3.9: Average of interference amplitude versus depth (the amplitude is calculated as an average of 10 neighbouring pixels)

For the measurement process the sample was mounted approximately perpendicular to the optic axis. Acquisition of a cross-section (along a line of 512 pixels) was obtained by translating the object over a range of 6mm along the optic axis (z) in 1µm steps. CCD images were averaged 5 times prior to phase stepping in order to reduce noise and improve contrast. A part of the collected data is displayed in figure 3.8, which shows interference amplitude represented by grey-scale, such that large amplitudes are darkest. By identifying the glass-air boundaries (seen as dark stripes in figure 3.8), values for optical path (OP) could be computed for each interface (shown on the left side of figure 3.8). These OP values were then corrected for the refractive index of glass, assuming ng = 1.51, to yield a measure of true plate thickness, D (shown on the right side of figure 3.8). We note that ghost images of the boundaries are present due to multiple reflections between the glass plates.

A plot of interference amplitude as a function of object position is shown in figure 3.9. This data was computed by averaging the interference amplitude measured at ten adjacent pixel positions along the transverse direction (see figure 3.8) in order to increase the signal to noise ratio.

This data can be compared with the model developed in section 3.2.2. The empirically determined interference amplitude, Ae, can be expressed as

Ae(j) = kA(j) (3.14)

where A(j) is the interference amplitude predicted by the model in section 3.2.2 and k is the constant of proportionality relating to the conversion efficiency of the detector used in the experiment.



Figure 3.10: Log of maximum interference amplitude, Ae(j), versus interface number, j

Using equations 3.3 and 3.4, Ae can also be expressed as:

Ae(j) = 2k√(Ir I0 R T^{2(j−1)})    (3.15)

where R is the reflectivity, T is the transmissivity (T = 1 − R) and j is the interface number (see figure 3.4). Taking logs of equation 3.15 gives a convenient linear model for comparison with the experimental data:

ln[Ae(j)] = ln[ 2k√(Ir I0 R) / T ] + j ln[T]    (3.16)

In figure 3.10 the natural logarithm of the experimentally obtained interference amplitude values (peak values of the data in figure 3.9) is plotted versus the interface number j. A least squares linear fit to the data yields ln[T] = −0.0333. The transmissivity, T = 0.967 ± 0.002, may then be compared to a value independently derived using the Fresnel reflection formula (at normal incidence) [84], such that T = 4n/(n + 1)² = 0.957, where² n = ng/na = 1.52. The two values of transmissivity are in good accordance with each other and confirm the validity of the model. The observed discrepancy of 1% may be attributed to scatter, absorption, multiple reflections, oblique incidence and noise, all of which are neglected in the model.

² As given by the manufacturer: Chance Propper Ltd, Smethwick, Warley, England, the index of refraction at λ = 546nm (Hg e-line) is ne = 1.524 ± 0.002 and nd = 1.522 ± 0.002.
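The fit reduces to a single least-squares step. A sketch, assuming the peak amplitudes of figure 3.9 are held in an array with one value per identified interface, is:

    import numpy as np

    def transmissivity_from_peaks(peak_amplitudes):
        """Least squares fit of ln Ae(j) versus j (equation 3.16);
        the slope is ln T, so T = exp(slope)."""
        j = np.arange(1, len(peak_amplitudes) + 1)
        slope, _ = np.polyfit(j, np.log(peak_amplitudes), 1)
        return float(np.exp(slope))

    # self-check with synthetic data decaying as T**(j-1), cf. eq. (3.15):
    j = np.arange(1, 41)
    print(transmissivity_from_peaks(0.5 * 0.957 ** (j - 1)))   # ~0.957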


3.3.3 Solar Cell

Measurements of impact crater damage to spacecraft caused by micrometeoroids in earth orbit yield information about impactor origin, composition and flux, and are of considerable interest to space science research (see also section 2.8). As part of the maintenance work carried out on the Hubble Space Telescope in December of 1993 [66], a solar cell array was returned to earth after three and a half years in space and subsequently became available for particle impact damage assessment. In comparison to conventional subjective optical inspection or electron microscopy imaging, low-coherence interferometry can reveal 3-D structures within the solar cell layers and damage not visible from the surface.

Each solar cell in the array has a dimension of 21.1mm × 40.5mm. The sample measured by us contained a small crater of visible diameter ≈ 2mm penetrating the cover glass (figure 3.14). We also observed a crack in the cover glass running along the entire length of the cell and intersecting the crater area.

During the measurement the solar cell was mounted approximately perpendicular to the optical axis and aligned so that the crater appeared in the centre of the CCD image. The solar cell was translated along a range of 450µm at 1µm steps over a period of approximately one hour. Data was acquired (without averaging) using a 400 by 400 pixel subset of the available 768 × 572 pixel image. The resulting 450 images describe a volume comprising 400 by 400 by 450 elements (or voxels).

In principle, cross-sectional tomographic images can be extracted from the volume of data at any conceivable angle. In order to visualise the layers composing the solar cell as well as the damage caused by the impact, a longitudinal section parallel to the optic axis was extracted (as shown in figure 3.13). The transverse position of this cross section is indicated by a line in the image of the solar cell surface in figure 3.12³. The cross-sectional slice can be seen in figure 3.13 and clearly shows the impact damage to the layers of the solar cell. The layers, which can be identified as dark bands of interference amplitude (a large interference amplitude is represented by dark areas), correspond to the interfaces formed by the cover glass, adhesive and BSFR solar cell material. The interface position is indicated at the right of figure 3.13. By correcting for the absolute refractive index of glass (ng ≈ 1.51) a measure of the CMX cover glass thickness was derived (measured value = 145µm, known value = 150µm). For comparison, a diagram of the cross sectional solar cell anatomy is shown in figure 3.14 [66].

3.4 Conclusion

We have demonstrated that Coherence Radar can provide useful information about transparent multi-layered structures at scales of a few microns. In order to gauge the potential performance of Coherence Radar for such applications, a theoretical model of an object comprising a stack of glass slides was formulated. Our model predicts an interference signal of poor visibility when investigating a large number of reflective layers. If the reflectivity of these layers is high, low-coherence measurements of their location will be limited by the dynamic range of the detection system.

³ An approximate intensity image, I(x, y), is derived by adding the interference amplitude A(x, y, d) at all sample displacements, d, so that I(x, y) = ∑_{i=1}^{N} A(x, y, di).


Figure 3.11: Extraction of a cross-sectional image from a set of transverse images


Figure 3.12: Image of the Hubble Space Telescope solar cell showing the position of the extracted cross section relative to the impact site

Experimental measurements made on a similar physical model (consisting of 20 glass plates) showed close agreement with the theoretical model.

A number of potential problems arising as a consequence of light propagation in the object medium were identified and discussed. These effects of refraction, dispersion and birefringence were not observed to cause any appreciable decrease in the subjective quality of any of the experimentally obtained cross sectional images. A damaged solar cell retrieved from the Hubble Space Telescope was successfully analysed, yielding information about an impact crater not visible from the surface.

In conclusion, the system enables non-destructive testing of reflecting interface layers inside transparent objects and offers an attractive alternative to OCT and confocal microscopy.


Figure 3.13: Tomographic image of solar cell (transverse position versus object position), with the cover glass, adhesive, solar cell and crater area labelled; interfaces appear at object positions 0, 219 and 247µm, where the geometric distance is given as µm in parenthesis (219 (145))


Figure 3.14: Schematic view of solar cell cross-section, showing: CMX cover glass (150 microns), DC 93500 adhesive (40 microns), BSFR solar cell (250 microns), RTV S691 (70-80 microns), glass fibre filled with DC 93500 (35 microns), silver mesh (50 microns), and glass fibre filled with DC 93500 (35 microns)


Chapter 4

In Vitro Imaging of the Human Ocular Fundus

4.1 Introduction: Properties of the Human Fundus

Monitoring retinal thickness and retinal nerve fibre layer thickness can aid early diagnosis and therapy control of macular degeneration, glaucoma, macular oedema and other optic neuropathies [91]. Because three dimensional structures are not easily revealed by 2-D images obtained from conventional fundus cameras and ophthalmoscopes, considerable interest in 3-D imaging techniques has developed in recent years. Although non-optical methods such as ultrasound and NMR allow three dimensional imaging of the human eye in vivo, the resolution these methodologies deliver is in general not sufficient for accurate fundus examinations [38]. Confocal microscopy, and in particular confocal laser scanning ophthalmoscopes (cSLOs), have emerged as an attractive alternative due to their outstanding depth discrimination and scatter rejection capabilities. However, the depth resolution of cSLO studies of the fundus is limited by the eye aperture to approximately 200µm, which corresponds roughly to the thickness of the retina. Low-coherence interferometry offers superior performance due to its aperture independent depth resolution and has been successfully applied to measurements of eye length [35, 92], corneal thickness [41] and fundus thickness [91]. Optical coherence tomography (OCT), in particular, has been widely used to obtain 3-D in vivo fundus images [7, 32, 34, 38-40, 93-95] and ophthalmic OCT systems are now commercially available (see also section 1.4.3 on page 8).

In this chapter, we investigate the ability to obtain three dimensional images of biological tissue using CCD based low-coherence interferometry. In particular, we aim to demonstrate the feasibility of human fundus investigations using Coherence Radar [96] by obtaining in vitro images of a post-mortem human retina. To allow interference amplitude recovery, a new algorithm which is robust in the presence of noise is developed. The possibility of obtaining high resolution in vivo images is discussed and optical designs suitable for adapting the system to ocular measurements are presented.

4.1.1 The Human Eye

First, let us take a brief look at the anatomy of the human eye. Figure 4.1 outlines some of the more important features.


Figure 4.1: Anatomy of the human eye, labelling the cornea (refractive index 1.38), aqueous humor (1.33), lens (1.40), vitreous humor (1.34), iris, macula, fovea, optic disk, optic nerve, retina, choroid and sclera

The cornea is the main refractive element, and its shape is supported by a liquid in the cavity between the cornea and lens, called the aqueous humor. Even though the lens has a relatively high optical density, the surrounding medium reduces its refractive power considerably. It can be deformed by the surrounding muscles to yield the variable optical power required for vision. Similarly, the iris contracts or expands to control the amount of light entering the eye. The largest cavity in the eye is filled with a gel-like substance called the vitreous humor. Adjacent to this, at the back of the eye, lies the fundus.

The Fundus

The fundus, the posterior section of the eye, is comprised of a number of tissue layers which are shown in figure 4.2. The retina is perhaps the most important structure of these, since it is primarily responsible for human vision. The retina is composed of light sensitive cells or photoreceptors, located underneath a layer of supporting blood vessels and nerve fibres. The retina is terminated by a pigment layer which absorbs light transmitted through the photoreceptors in order to prevent backscatter. A special area of the retina, called the macula, is situated at the centre of the fundus and receives images at optimal focus (figure 4.1). The fovea, a small central section of the macula, contains a very high concentration of photoreceptors, yielding a resolution which gives us, amongst other things, the ability to read. Damage to this area may potentially result in blindness and is of considerable interest to ophthalmologists.


Figure 4.2: Schematic representation of the fundus layers: inner limiting membrane, nerve fibre layer, retina, photoreceptors, pigment epithelium, Bruch's membrane, choroidal stroma and sclera

In the region of the optic disk, nerve fibres connecting the retina and the supporting blood vessels come together to form the optic nerve (figure 4.1). Since damage to the nerves in this area can have severe consequences on vision, the structure of the fundus in the region of the optic disk (or optic nerve head) is also of increased medical interest. The retina is supported by the choroid, a spongy tissue containing blood vessels, and the sclera (white of the eye), which lines the outside of the eye. Both of these layers are much less transparent than the retina.

4.1.2 Human Fundus Sample and Tissue Preparation

We obtained several post-mortem human eyes¹, allowing us to perform in vitro experimental investigations of the human ocular fundus using Coherence Radar. At this preliminary stage, there are considerable advantages to using post-mortem as compared to in vivo tissue. Involuntary eye movement, lens accommodation, fundus pulsation due to blood flow and acquisition time constraints (due to patient discomfort) are all eliminated entirely in this way. Further, the lens and vitreous humor can be removed to create direct visual access to the fundus.

Unfortunately in vitro tissue is changed by physical effects following excision. These include tissue alterations due to lack of blood, the effects of decay, dehydration and temperature change. It is thus, for example, difficult, if not impossible, to maintain proper optical transparency and refractive power of the cornea and lens in a post-mortem eye. Also, retinal detachment due to excision pressure is common and was indeed observed in our samples.

One of the post-mortem eyes was dissected such that a 1 by 1 cm fundus sample in the region of the optic nerve head could be removed. In order to prevent decay, the sample was stored in formaldehyde solution. For the purpose of our investigations using Coherence Radar, a means had to be found to store the sample in the formalin so that it remained visible. This was accomplished by constructing a stainless steel container bounded by a glass window, as shown in figures 4.3 and 4.4. A sample support is provided and can be adjusted to hold the tissue firmly against the glass plate.

¹ The author thanks Dr. Fred Fitzke of the Institute of Ophthalmology, University of London, for the supply of post-mortem tissue samples.


Figure 4.3: Cross section of the fundus tissue container (glass plate, retinal tissue, sample support, formalin, O-ring seal, thread, stainless steel body)

Figure 4.4: Stainless steel sample container


Figure 4.5: Fundus tissue in the sample container (scale graduation = 0.5 mm)

The container is sealed to prevent leakage of liquid and the formation of air bubbles. The retina in our sample was detached in most areas except in the region immediate to the optic nerve head. Several folds of retinal tissue formed as a consequence, and these are visible in the photograph of the fundus sample shown in figure 4.5.

4.1.3 Optical Properties of the Eye

In order to examine the structure of the fundus using optical methods, light must pass through the ocular medium and parts of the fundus (figure 4.1). Thus, we will examine the optical properties of the transparent section of the eye, such as the cornea, crystalline lens, aqueous humor and vitreous humor, as well as the properties of the fundus layers.

Ocular Medium

The following physical effects may influence low-coherence measurements of the fundus:

1. Refraction

2. Diffraction

3. Optical aberrations

4. Dispersion

Fundus examinations using conventional fundus cameras, laser scanning ophthalmoscopes, or low-coherence interferometry (such as OCT) are all affected by the static refractive power of the cornea and lens. In addition, they are affected by the shape of the lens, which can be changed by the action of the adjacent muscles to provide an image at optimum focus. This involuntary action, also called accommodation, presents a potential problem during in vivo examinations since it may introduce unpredictable changes in the optical path and position of the focal plane.


In order to obtain images of the fundus, suitable optics must be introduced to compensate for the static refractive power of the cornea and the lens (the focal length of the eye is ≈ 20mm). However, the most severe aberrations are introduced by the eye itself and, together with diffraction, limit the resolution of fundus images. Generally, a maximum transverse resolution of 50µm cannot be exceeded with conventional imaging optics.

The vitreous and aqueous humor are transparent water based solutions with a refractive index close to that of water. Thus, dispersion in the eye may be approximated by dispersion in water. To our knowledge, no work has yet been reported which describes significant dispersion effects on low-coherence measurements in the eye. However, in order to estimate the impact of dispersion in the vitreous humor, we have investigated the influence of dispersion in water on the coherence length of the source (see section 4.3.2).

4.1.4 Light Scattering in Biological Tissue

The problematic nature of imaging biological tissue using visible or near-infrared light is mainly due to light scattering by the cell structures in the tissue. Because of this, most biological tissue appears opaque on visual inspection, and conventional optical imaging techniques are not suited to revealing structures within tissue. Scattering of light is caused by small particles, with the effect that photons follow a more or less random path in the medium. The result is a loss of image contrast, which is dependent on the number of scattering centres in the medium and the distance through which the light travels in the medium.

Apart from reducing image contrast, scatter also increases the path a photon travels inside the medium. This is of special significance here because Coherence Radar effectively infers the position of a reflective interface from the optical path travelled. Figure 4.6 illustrates the path of a photon in a scattering medium. Since the total path travelled by a photon between points a and b is equivalent to the distance between points b and c, the photon travelling from point a to point b will produce a coherent signal when the coherence plane of the interferometer is adjusted to coincide with point c. According to the optical path measured by the interferometer the reflection has occurred at point c, whereas in fact the light was multiply scattered near the surface of the medium. Thus, a photon path like that shown in figure 4.6 will suffer a depth error roughly equal to the distance b-c. The transverse accuracy will be reduced by an amount equal to the distance a-b.

4.1.5 Illumination Wavelength

The choice of illumination wavelength is important for a number of reasons:

• The ocular medium is opaque to wavelengths ≥ 1200nm

• Fundus reflectance is higher at longer wavelength [97, 98]

• Light penetrates more deeply into the choroid at wavelengths above 650nm [98].

• Absorption of light in the retina is less at longer wavelengths, resulting in less damage at higher power [98].

• Patient discomfort due to high illumination power can be eliminated by using invisible wavelengths.


• The availability of affordable superluminescent diodes at 830nm, due to single mode fibre transmission windows

Figure 4.6: False path interpretation due to photon scattering in a diffusive medium

Given these factors, a wavelength of λ ≈ 830nm seems suitable for in vivo investigations of the fundus. In vitro imaging, on the other hand, is not bound by the transmission properties of the ocular medium, allowing a much longer wavelength to be employed in order to reduce scatter in the tissue and increase fundus transmission. In practice, however, in vitro studies at 830nm promise to deliver a good preliminary indication of how suitable this illumination wavelength is, and we have used this wavelength in our experimental investigations.

4.2 Signal Processing

The phase stepping algorithm presented in the previous chapters was highly susceptible to mechanical vibration and image noise since it required accurate phase shifting of the interference signal. Phase stepping was used to compute the interference visibility (or amplitude), based on three intensity images captured at a number of precise reference mirror positions (section 2.2.1 on page 22). The only computationally efficient method for noise reduction was to average these images before phase stepping (alternatively, averaging after phase stepping requires repeated computation of the algorithm). However, when vibrations are present, this leads to a reduction in signal strength.

In this section, we present an improved phase stepping algorithm which computes interference amplitude based on a large number of images captured at random phase, rendering the technique immune to mechanical vibration and reference mirror inaccuracies. In addition, this effectively averages measurements and thus reduces the effects of image noise. The new algorithm is very similar to that presented in section 2.2.1, but has the advantage of not requiring phase shifts at precisely calibrated steps. Instead, random movement of the reference mirror or even mechanical vibrations of sufficient amplitude can facilitate accurate measurements of interference amplitude.

As in section 2.2.1 on page 22 we may express an interference signal as a function of object displacement, d, given by:


I(d) = Ī + A(d) cos(4πd/λ + φ)    (4.1)

where Ī is the background intensity, A(d) is the amplitude of the interference signal, λ is the central wavelength and φ is a random phase shift.

By a procedure similar to that described in section 2.2.1, it can be shown that the interference amplitude, A(d), can be approximated using n intensity measurements, Ii, observed at random phase, such that

A(d) = (2/n) Σ_{i=1}^{n} |Ii − Ī|    (4.2)

where

Ī = (1/n) Σ_{i=1}^{n} Ii.    (4.3)

As in the previous chapters, a PZT-mounted reference mirror is used to introduce the random phase shifts. However, in our implementation, the voltage applied to the PZT is ramped over the period of image acquisition to yield an approximate total displacement of λ/2. The phase shift between images is then approximately 2π/n, such that this new algorithm approximates to the old one if n = 3 (see section 2.2.1). A similar algorithm has also been implemented by Chiang et al. [99] for imaging of structures in scattering media.
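To make the procedure concrete, the following sketch simulates the random-phase amplitude recovery of equations 4.2 and 4.3 for a single pixel. It is a minimal illustration in Python, assuming an illustrative signal model and noise level; it is not the software used in this thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def recover_amplitude(samples):
    """Approximate the interference amplitude from n intensity samples taken
    at random phase (equations 4.2 and 4.3): A = (2/n) * sum |I_i - mean(I)|."""
    mean_i = samples.mean()                      # equation 4.3
    return 2.0 * np.abs(samples - mean_i).sum() / samples.size

# Assumed signal model for one pixel (equation 4.1): background, true
# amplitude and additive camera noise are illustrative values.
background, true_amplitude, noise_sigma = 100.0, 10.0, 1.0

for n in (3, 10, 100):
    phases = rng.uniform(0.0, 2.0 * np.pi, n)    # random phase steps
    samples = (background
               + true_amplitude * np.cos(phases)
               + rng.normal(0.0, noise_sigma, n))
    print(f"n={n:3d}  estimate={recover_amplitude(samples):6.2f}")
# The estimator is proportional to the true amplitude (its expected value is
# (4/pi)*A, about 12.7 here, for uniformly random phase) and its scatter
# shrinks as n grows, which is why larger n reduces the effect of image noise.
```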

4.3 Experimental

4.3.1 Coherence Radar

The experimental system used for fundus imaging is identical to that described in chapter 2. However, in order to achieve greater illumination power a new superluminescent diode (SLD-371) was used. This offers 2mW power at a wavelength of λ ≈ 830nm with a coherence length of lc ≈ 25µm.

4.3.2 Coherence Profile Broadening through Dispersion

As discussed in section 3.2.3 on page 63, chromatic dispersion in the object medium can cause coherence profile broadening and loss of fringe visibility. Since fundus examinations require light in the object beam to travel through the vitreous humor (≈ 2cm), significant dispersion is introduced. In order to assess the loss of axial resolution due to profile broadening, the effect of dispersion in a 2cm water path was measured. The experiment was performed by placing a water filled fundus container (figure 4.3) in the object arm of the Coherence Radar interferometer, as shown in figure 4.8. The interference signal corresponding to the following reflections was recorded:

1. The front glass-air interface of the tissue container window

2. The second interface of the glass window in a water filled sample container (i.e. glass-water interface)


Figure 4.7: Plot of interference contrast (arbitrary units) against object displacement (µm) for dispersive and non-dispersive paths. Gaussian fits give FWHM = 11.7 µm for the air-glass reflection, FWHM = 11.9 µm for the glass-water reflection (2mm glass) and FWHM = 21.7 µm for the water-metal reflection (2mm glass + 20mm water).


Figure 4.8: Experimental arrangement to avoid strong back-reflections at the air-glass boundary

3. The polished stainless steel tissue support in the water filled sample container (i.e. water-steel interface)

A dispersion free interferogram was acquired by placing the empty tissue container in the object arm so that the glass window was perpendicular to the optic axis of the interferometer (maximising the returned light). The interference signal was then measured by displacing the container in 2µm steps over a range sufficient to capture the entire interferogram. Similarly, the signals returned from the back face of the window and from the curved metal-water interface at the back of the sample container were measured. For the latter, the container was rotated by an angle α in order to eliminate the air-glass reflections returned from the front and back of the container window (figure 4.8).

The resultant three interferograms are shown in figure 4.7. Although dispersion effects due to the glass window are not apparent, a considerable increase in coherence length can be observed due to the path in water. The distance between the metal surface and the inside glass-water boundary is ≈ 20mm and the light experiences dispersion over a path twice this amount during a round trip. The least-square fits of a Gaussian to the data in figure 4.7 indicate an almost 100% increase in the width of the interferogram due to dispersion. Since light must travel a comparable distance in the eye (≈ 30mm), a similar broadening and thus a reduction in axial resolution can be expected for in vivo fundus investigation.


Figure 4.9: Orientation of longitudinal and transversal sections relative to the eye

4.3.3 Fundus Imaging of a Model Eye

As discussed in chapter 3, Coherence Radar is able to acquire transverse as well as longitudinal sections of multilayer structures. Figure 4.9 illustrates the orientation of these sections relative to the optical system. In the following experiments, we have chosen to perform measurements of longitudinal sections since these best demonstrate the depth sectioning capabilities of Coherence Radar.

In order to assess the feasibility of in vivo fundus imaging using Coherence Radar, a model eye and a new optical arrangement were constructed to simulate imaging of a human fundus in vivo. Figure 4.11 shows a Coherence Radar system adapted for the investigation of a model eye, consisting of a lens and a model fundus (a stack of glass plates). A correction lens (fc = 50mm) was used to compensate for refraction in the model eye lens (fe = 30mm). We suggest that this arrangement (as shown in figure 4.10) be used for fundus imaging in vivo.

After placing the model eye in the object arm of the interferometer, we were able to successfully measure the position of reflecting interfaces in the model fundus. This model was constructed from a stack of 20 microscope cover slips with air spaces between them and is identical to the structure measured in section 3.3.2 on page 65. During the measurement, the model eye (lens + microscope slips) was translated over a range of 2.7 mm in 1µm steps.


Figure 4.10: Experimental arrangement for in vivo imaging using Coherence Radar

Figure 4.11: Experimental arrangement to image a model eye using corrective optics


Figure 4.12: Longitudinal section of the model fundus

The resultant longitudinal cross section presented in figure 4.12 clearly shows the positions of each of the glass-air boundaries² in the stack and confirms that it is, in principle, possible to use Coherence Radar with corrective optics to allow fundus imaging.

However, we have found that back-reflections from the corrective lens and model eye lens can easily lead to detector saturation. We propose the use of anti-reflection coated optics in future implementations.

4.3.4 In Vitro Examination of Fundus Layers

Two longitudinal sections of post mortem fundus samples (section 4.1.2) were measured using Coherence Radar. To our knowledge these are the first fundus images obtained using a CCD based low-coherence interferometer [96]. The first cross-section was measured by displacing the sample over a range of 4mm in 2µm steps along the optic axis (z). Ten intensity samples were used for phase stepping (n=10) at each object position (section 4.2). The orientation of the section plane relative to the tissue sample is indicated by figure 4.13 (longitudinal section 1) and the resulting cross-sectional image is shown in figure 4.14. Although the low signal-to-noise conditions make the image rather noisy, it is possible to identify two boundaries along the sample displacement axis, corresponding to the retina and choroid. At the top centre of the image a folding of the retina can be observed (compare to figure 4.13) and the thickness of the retina is measured to be approximately 350µm (after correcting for the refractive index of water).

² As in figure 3.8, ghost images are visible due to multiple reflections between the glass plates.


Figure 4.13: Post mortem fundus tissue showing the approximate position of the longitudinal sections obtained using Coherence Radar (sections 1 and 2; the optic nerve head is marked)


A second longitudinal section was obtained by displacing the fundus sample over a range of 5mm in 2µm steps. The orientation of the section plane relative to the retina is indicated in figure 4.13 (longitudinal section 2). The resulting image, which was computed using 20 intensity samples at each object position (n=20), is shown in figure 4.15. In comparison with figure 4.14, which was computed using n=10, this image shows visibly better contrast and confirms the ability of the new phase stepping algorithm to reduce noise by using a large number of samples. It is possible to identify the optic nerve head as well as reflections from the retina surface and other fundus layers in figure 4.15.

We attribute the presence of trailing shadows, which are visible in both images, to multiple scattering within the tissue. In addition, we want to point out that the strong fringe pattern visible in figure 4.14 is due to beating between the CCD pixel readout frequency and the analogue to digital conversion sampling rate.

4.4 Discussion

4.4.1 Data Acquisition and Processing Speed

In vivo measurements of the fundus should be performed as quickly as possible in order to reduce the effect of involuntary eye movements, fundus pulsation due to blood flow, and patient discomfort.


Figure 4.14: Longitudinal section (1) of post mortem fundus tissue


Figure 4.15: Longitudinal section (2) of post mortem fundus tissue


Figure 4.16: Operations performed by Coherence Radar (flow: acquire an image of x·y pixels and increment the reference mirror position until n images are stored; compute the interference amplitude from the n·x·y values; translate the object and repeat for N cycles)


The maximum acquisition speed of Coherence Radar is primarily limited by the following operations:

• Computation of the interference amplitude

• Digital image acquisition

• Object translation between measurements

• Data storage

Figure 4.16 outlines the main operations performed by the Coherence Radar system during a measurement cycle. The total time required to complete this cycle is determined by the performance of the individual hardware elements (such as computer, CCD camera and frame grabber) and by the number of pixels (xy) per frame, the number of object displacements (N) and the number of intensity samples (n) acquired at each object position and used for phase stepping.
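As a rough bookkeeping aid, the sketch below models the cycle of figure 4.16. All per-step timings are assumed placeholder values, to be replaced by measured figures for a given camera, stage and processor.

```python
def acquisition_time(N, n, frame_rate_hz, t_translate_s, t_compute_s):
    """Estimate the total time for the measurement cycle of figure 4.16:
    at each of N object positions, n frames are acquired for phase stepping,
    the interference amplitude is computed and the object is translated."""
    t_per_position = n / frame_rate_hz + t_compute_s + t_translate_s
    return N * t_per_position

# Placeholder numbers: a 25 Hz video camera, 0.1 s per amplitude computation,
# 0.05 s per stage move, n = 10 samples and N = 1000 depth positions.
total_s = acquisition_time(N=1000, n=10, frame_rate_hz=25.0,
                           t_translate_s=0.05, t_compute_s=0.1)
print(f"estimated cycle time: {total_s:.0f} s")   # 550 s with these values
```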

4.4.2 Speed Optimisation

Let us examine the maximum speed at which fundus images in vivo may be obtained. The limiting factor in our implementation of Coherence Radar is the computational overhead associated with the calculation of the interference amplitudes during phase stepping. In principle, this may be eliminated by storing all data during the acquisition cycle and performing the necessary computation later on. Continuous displacement of the object could also remove any time delay required for discrete object translation. In addition, imaging speed could be improved drastically by replacing the current imaging equipment with a high performance system. However, regardless of the experimental hardware, the minimum CCD exposure time still remains a fundamental limit. This minimum exposure time is inversely proportional to the power incident on each pixel and to the sensitivity of the CCD sensor. In turn, the incident optical power is determined by the illumination power of the eye, the fundus reflectivity, and the imaging system (including the beam-splitter ratio). In our experiments we were limited to a minimum exposure time of 1/60 s at 2mW illumination power. Let us use this information as an approximate indication of the amount of light reflected from the fundus and incident on the CCD sensor. Assuming that the exposure time scales inversely with the illumination power and camera sensitivity, we may estimate the performance of the system under various conditions.

First, let us consider the maximum safe illumination power recommended for fundus examinations. According to the experimental laser dose which results in a 50% probability of creating a lesion, termed ED50, the damage threshold for retinal irradiance (at a wavelength of 400nm-700nm and an exposure duration of up to one second) is 1W/cm². To remain on the conservative side, we will assume a maximum irradiance of only 100mW/cm². In comparison, the maximum irradiance of our sample was 8mW/cm². Thus we may increase the illumination power by a factor of 100/8 = 12.5 without causing damage to the retina.

The maximum power available from the SLD source is currently limited to 2mW. For future implementation we propose the use of a spatially incoherent extended source, which can deliver substantially more power. In addition, an extended source provides quasi confocality [100] and the benefit of increased scatter rejection.


                                        Pulnix TM-520 +        DALSA CL-C3   DALSA CA-D1
                                        Bit Flow Data Raptor   (line scan)   (area scan)
Number of pixels                        768 × 572              1024          256 × 256
Frame rate/line rate                    25 Hz                  1 kHz         110 Hz
Noise equivalent exposure (NEE)         371 pJ/cm²             125 pJ/cm²    20 pJ/cm²
Saturation equivalent exposure (SEE)    2.46 nJ/cm²            125 nJ/cm²    28 nJ/cm²
Dynamic range (SEE:NEE)                 8:1                    1000:1        1400:1
Digital intensity resolution            8-bit                  12-bit        12-bit

Table 4.1: Comparison of the current imaging hardware (see also appendix A) with commercially available high performance components

A large proportion of the light reflected from the retina is lost due to the reflection at the beam splitter plate surface. Unfortunately this effect cannot be eliminated entirely. It is however possible to increase the amount of light returned from the fundus at the cost of source power. Given a source of unlimited power, the beam-splitter transmission/reflection ratio can be altered so that the transmission from the sample (the eye in this case) to the detector is very large. By replacing a 50/50 beam-splitter with a 90/10 beam-splitter, for example, a 90/50 = 1.8 increase in the amount of light transmitted to the detector can be obtained.

In addition, the exposure time may be reduced and the maximum frame/line rate can be increased by implementing new imaging equipment. Since the requirements for transverse and longitudinal imaging are distinct, we have considered these two cases independently. We propose the use of a line scan camera (1-D CCD) for the acquisition of longitudinal sections and an area scan (2-D CCD) camera to achieve transverse sections (even though both can be achieved using a 2-D sensor). The noise equivalent exposure (NEE) of a CCD camera defines the minimum detectable intensity. By comparing the NEE of the current Pulnix camera to that of potential high performance systems, we can derive a ratio between the current exposure time and a theoretical minimum value. Table 4.1 summarises the performance parameters of the current imaging system (Pulnix camera and Bit Flow frame grabber, see also appendix A) and two potential high performance systems (DALSA CL-C3 and CA-D1 digital cameras).

The current minimum exposure time may in principle be reduced by a factor of 12.5 if the illumination power is increased to the maximum safe threshold. A further improvement of 1.8 can be gained by implementing a 90/10 beam splitter ratio. In addition, the exposure time may be decreased through the use of high sensitivity CCD cameras.
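The exposure-time figures quoted in table 4.2 follow directly from scaling the measured 1/60 s exposure by these factors; the short calculation below reproduces them, using the NEE-derived sensitivity gains of table 4.1 (a sketch of the scaling argument only).

```python
# Scale the measured minimum exposure time (1/60 s at 2 mW) by the power and
# sensitivity factors discussed above. The sensitivity gains are the NEE
# ratios relative to the Pulnix TM-520 (table 4.1).
base_exposure_s = 1.0 / 60.0
power_gain = 12.5 * 1.8    # safe-irradiance factor x 90/10 beam-splitter gain

for camera, sensitivity_gain in (("DALSA CL-C3", 2.97), ("DALSA CA-D1", 18.6)):
    t = base_exposure_s / (power_gain * sensitivity_gain)
    print(f"{camera}: minimum exposure time ~ 1/{1.0 / t:.0f} s")
# -> about 1/4010 s and 1/25110 s, consistent with the ~1/4000 s and
#    ~1/25100 s quoted in table 4.2.
```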

Table 4.2 summarises the potential performance of Coherence Radar made possible by the increased intensity, a new beam-splitter ratio and the use of digital DALSA line and area scan cameras (table 4.1). The acquisition time of both transverse and longitudinal images is limited by the frame or line rate of the CCD cameras. The total amount of data acquired per second is larger when obtaining transverse sections. However, these values may change depending on the imaging system. Most importantly, the table shows that Coherence Radar can in principle obtain both transverse and longitudinal sections of the human fundus in vivo with sufficient resolution and dynamic range in an interval no longer than a second, making it suitable for ophthalmic applications and comparable in performance to state of the art OCT systems [94, 95].


                                                     Longitudinal            Transverse
                                                     (DALSA CL-C3)           (DALSA CA-D1)
Increase in sensitivity (vs. Pulnix TM-520)          2.97                    18.6
Increase in reflected power (vs. current system)     22.5                    22.5
Minimum exposure time                                ≈ 1/4000 s              ≈ 1/25100 s
Maximum frame/line rate                              1 kHz                   110 Hz
Object positions (N) per second (at n=10)            100 (line rate limit)   11 (frame rate limit)
Number of sections per second                        1                       11
Data points per second                               100 × 1024              256 × 256 × 11

Table 4.2: Potential acquisition speed for longitudinal and transverse sections when using 10 intensity samples (n=10)

4.4.3 OCT versus CCD Based Interferometry

The strength of the CCD based approach lies in its parallel nature. While in an OCT system the surface is illuminated and imaged point by point in a sequential fashion, Coherence Radar illuminates an entire surface and measures the reflected light at all pixels simultaneously. For a beam scanning arrangement, the time required to measure N points is N times the minimum exposure time. For a CCD based system the exposure time is constant regardless of the number of pixels. Thus, if the required image has Nx by Ny points or pixels, the resultant speed advantage of CCD based systems is theoretically equal to Nx × Ny, provided the total illumination power is also increased by the same amount. In practice however, the acquisition speed is limited by the frame rate of the CCD video camera, as was shown in the last section. Also, CCD detectors suffer from poor dynamic range and low sensitivity when compared to single photo-detectors, and thus require longer exposure times at the same incident intensity. Therefore, while CCD based detection may not offer a speed advantage as large as Nx × Ny in practice, one can expect a performance at least comparable to that of OCT. In addition, OCT systems generally employ a point detector and are thus confocal. This advantage is not available with CCD based systems, but schemes have recently been reported which allow confocal imaging even in these circumstances by using an extended source [100].

The comparison of Coherence Radar and OCT is further complicated by safety considerations required for in vivo fundus investigations. As discussed earlier (section 4.1.5), damage threshold values are mainly determined according to power incident on the retina per unit area. Thus, in principle, the power can be increased by the factor NxNy, since the illuminated surface area is increased proportionally. In practice however, slightly less power per unit area is permitted in this case, since thermal energy is dissipated more slowly over a larger area. The remaining acquisition time saving due to the implementation of a CCD sensor (parallel imaging) is still substantial, as was shown in the previous section.

In conclusion, the advantage of CCD based low-coherence interferometry over OCT lies in its ability to simultaneously image a large number of pixels in the transverse plane (en face imaging). Although transverse sections best exploit this advantage, the maximum frame rate of commercially available imaging devices limits the acquisition speed such that longitudinal sections may be acquired in a time comparable to that of transverse images.


4.5 Conclusion

In this chapter, we have presented what are, to the best of our knowledge, the first images of in vitro fundus tissue obtained using a CCD based low-coherence method. A robust algorithm for improved signal detection was introduced and successfully applied to extract interference signals from images of low dynamic range and high noise. The method was shown to produce a significant increase in image contrast by means of averaging. The influence of scatter and dispersion in the eye was discussed and the reduction in depth resolution due to dispersion was experimentally determined. The ability to adapt Coherence Radar for in vivo measurements of the human eye was discussed and confirmed experimentally by using a model eye.

Although our implementation of Coherence Radar was not suitable for in vivo measurement due to its slow acquisition speed and low image contrast, it was shown that the illumination power, and hence the acquisition speed, could be significantly increased without risking damage to the eye. We demonstrated that Coherence Radar is potentially capable of measuring both transverse and longitudinal sections in vivo in less than one second, a performance better than state-of-the-art OCT systems.

In order to achieve this performance, the following improvements and modifications of the existing system are required:

• A high sensitivity, low noise CCD camera and frame grabber

• A high power, low-coherence source (possibly a discharge lamp)

• A quasi-confocal arrangement using an extended source


Chapter 5

Balanced Detection

5.1 Introduction

In chapter 3 we have shown that the detection of reflective interfaces in a multilayer object using Coherence Radar is limited by the dynamic range of the CCD camera and the analogue to digital (A/D) converter or frame grabber (see appendix A). In this chapter, we describe a new differential detection method for Coherence Radar which significantly reduces the required dynamic range of the A/D converter. A new experimental system is implemented through the use of two line-scan CCD cameras and a Mach-Zehnder interferometer. We demonstrate the capabilities of this system by measuring the profile of a test surface and the location of several air-glass boundaries in a stack of glass plates [101].

5.2 Balanced Detection

It is well known that a Mach-Zehnder interferometer can provide two differential interference signals which are out of phase by π radians [102]. Let us examine the interference produced by a Mach-Zehnder interferometer such as depicted in figure 5.1. When using a low-coherence source, the intensity observed by detectors 1 and 2 is [88, 103]:

I1(d) = 1 + V γ(d) sin(2πd/λ)    (5.1)

and

I2(d) = 1 − V γ(d) sin(2πd/λ)    (5.2)

respectively, where d is the position of a reflecting surface in the object arm, V is the visibility at maximum coherence, γ(d) is the coherence function of the source and λ is the central wavelength. By subtracting the two photodetector outputs, we obtain a differential signal proportional to

I1(d) − I2(d) = 2V γ(d) sin(2πd/λ)    (5.3)

and thus eliminate any common bias. In practice, to eliminate the effect of detector response or gain variations between the two detectors, the signals are amplified accordingly before subtraction to equalise the background intensity.



Figure 5.1: Mach-Zehnder interferometer


This method is called balanced detection and has largely been used in fibre optic sensing [104, 105] and in low-coherence imaging using beam scanning arrangements [35]. In this chapter, we implement Coherence Radar using a Mach-Zehnder interferometer and detect the differential signal with two CCD line-scan sensors. In order to maintain the imaging properties of Coherence Radar, each point in the object plane is imaged identically onto pixels in the image plane of the two CCD sensors. Both CCD cameras should ideally be geometrically identical and have the same uniform response. However, even in the presence of asymmetries and non-uniform response we may reasonably expect the method to yield superior performance when compared to conventional detection.
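A minimal numerical illustration of this subtraction is sketched below: the two complementary signals of equations 5.1 and 5.2 are generated with an assumed Gaussian coherence envelope and a deliberate 10% gain mismatch between the detectors, which is equalised before subtraction. None of the numbers are taken from the experiment.

```python
import numpy as np

wavelength_um = 0.83        # central wavelength (830 nm)
coherence_len_um = 25.0     # assumed coherence length for the envelope
d = np.linspace(-40.0, 40.0, 2001)            # object displacement (microns)

gamma = np.exp(-((d / (coherence_len_um / 2.0)) ** 2))   # assumed gamma(d)
V = 0.8                                                  # assumed visibility

# Equations 5.1 and 5.2: complementary Mach-Zehnder outputs, here with an
# assumed 10% gain mismatch between the two detectors.
i1 = 1.00 * (1.0 + V * gamma * np.sin(2.0 * np.pi * d / wavelength_um))
i2 = 1.10 * (1.0 - V * gamma * np.sin(2.0 * np.pi * d / wavelength_um))

# Equalise the backgrounds (balance the gains) before subtracting; the common
# bias then cancels and only the fringe term of equation 5.3 survives.
balanced = i1 - i2 / 1.10
print(f"residual bias:         {balanced.mean():.4f}")   # ~0
print(f"peak fringe amplitude: {balanced.max():.2f}")    # ~2V = 1.6
```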

5.3 Dynamic Range

The dynamic range, R, of a detector is defined as the ratio of the maximum light intensity at saturation (Is) to the light intensity that produces an output (In) equal to the residual noise in the system (noise floor), so that

R = Is/In.    (5.4)

In Coherence Radar the required dynamic range, R, of the CCD sensor and the subsequent A/D conversion is determined by the object of interest. As in section 3.2.2 on page 60, let us express the minimum dynamic range necessary to detect interference in terms of the required ratio, S, between an interference signal (Imax − Imin) and the noise floor (In), such that

R = S Imax / (Imax − Imin),    (5.5)

where (Imax − Imin)/S = In and Imax = Is. The minimum required dynamic range in the case of interference between reflections from a multilayer object, such as a stack of glass plates, and a reference mirror was derived in section 3.2.2 on page 60 and is given by:

R(j) = (S/2) [ (Ir + It) / (2√(Ir Ic(j))) + 1 ],    (5.6)

where j is the interface number in a stack of n reflecting interfaces, S is the required ratio of the signal to the noise floor, Ir is the reference beam intensity, It is the object beam intensity and Ic(j) is the intensity reflected from interface j.

Assuming that the reference intensity (Ir) is adjusted to equal the intensity reflected from the object (It), equation 5.6 can be simplified to give:

R = (S/2) [ √(It/Ic(j)) + 1 ].    (5.7)

We now show that the required dynamic range of the A/D conversion can be significantly reduced by the use of differential detection. As shown by equation 5.3, Imax − Imin = Imax applies in general for a differential signal, such that the required dynamic range is always


R = S. (5.8)

Thus, using differential detection, the dynamic range can be improved by a factor of

(1/2) [ √(It/Ic) + 1 ].    (5.9)

However, from equation 5.9 it appears that no improvement can be expected when measuring opaque surfaces (i.e. when It = Ic). We will now show that differential detection can be advantageous even in this case. When measuring objects of non-uniform reflectivity, large intensity fluctuations may be present in the image, i.e. the incident intensity varies with pixel position x. Let us assume interference between light reflected from a non-uniform opaque object (It(x)) and a reference beam of uniform intensity (Ir(x) = Ir). The interference at pixel position x is then given by [88, 103]:

I(x, d) = It(x) + Ir + 2√(It(x) Ir) γ(d) sin(4πd/λ),    (5.10)

where d is the position of the object, γ(d) is the coherence function of the low-coherence source, and λ is the central wavelength. Let us define the pixel positions xmax and xmin such that It(xmax) = Itmax and It(xmin) = Itmin, where Itmax and Itmin are the maximum and minimum object beam intensities respectively. Using equation 5.4, with (Imax(xmin) − Imin(xmin))/S = In and Is = Imax(xmax), the dynamic range can now be expressed as:

R = S Imax(xmax) / (Imax(xmin) − Imin(xmin)),    (5.11)

where Imax(x) and Imin(x) are the maximum and minimum intensity incident on pixel x, respectively. Let us assume that Ir = Itmax (since this minimises the dynamic range). Substituting equation 5.10 into 5.11 and assuming γ = 1, the required dynamic range of an unbalanced system is given by:

R = S √(Itmax/Itmin).    (5.12)

Assuming balanced detection, we obtain a differential signal (analogous to equation 5.3) proportional to

I1(x, d) − I2(x, d) = 4√(It(x) Ir) γ(d) sin(4πd/λ).    (5.13)

Using equation 5.11 and again assuming that γ = 1 and Ir = Itmax, we can show that the required dynamic range in the case of differential detection is:

R = (S/2) √(Itmax/Itmin).    (5.14)

This is an improvement by a factor of two over the unbalanced case (equation 5.12).
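Plugging numbers into equations 5.7, 5.8, 5.12 and 5.14 shows the practical consequence; the sketch below evaluates the required dynamic range for a few assumed intensity ratios (the value of S and the ratios are illustrative, not measured).

```python
import math

S = 10.0   # assumed required ratio of signal to noise floor

def r_multilayer_unbalanced(it_over_ic):
    """Equation 5.7: R = (S/2) * (sqrt(It/Ic) + 1)."""
    return S / 2.0 * (math.sqrt(it_over_ic) + 1.0)

def r_surface_unbalanced(imax_over_imin):
    """Equation 5.12: R = S * sqrt(Itmax/Itmin)."""
    return S * math.sqrt(imax_over_imin)

def r_surface_balanced(imax_over_imin):
    """Equation 5.14: half of equation 5.12."""
    return S / 2.0 * math.sqrt(imax_over_imin)

for ratio in (1.0, 100.0, 10000.0):
    print(f"multilayer, It/Ic = {ratio:7.0f}: "
          f"unbalanced R = {r_multilayer_unbalanced(ratio):7.1f}, "
          f"balanced R = {S:.1f} (equation 5.8)")

print(f"non-uniform surface, Itmax/Itmin = 100: "
      f"unbalanced R = {r_surface_unbalanced(100.0):.0f}, "
      f"balanced R = {r_surface_balanced(100.0):.0f}")
```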


5.4 Experimental System

Figure 5.2 illustrates our implementation of Coherence Radar with balanced detection. The system is based on a Mach-Zehnder interferometer but still retains the telecentric telescope and collimated illumination described in previous chapters. The collimated illumination beam is reflected by the beam splitter onto the object of interest (which is mounted on a translation stage) and is transmitted to reference mirror 1. Light returned from the object is imaged by a telecentric telescope in the object arm (lens 1 and 2) onto two CCD line-scan cameras (Thomson, TH 7811A) via an additional beam splitter. Light travelling in the reference arm (via mirrors 1 and 2) is subject to identical magnification due to a telecentric telescope in the reference arm (lens 3 and 4) and is also incident on both cameras via the second beam splitter. Both mirrors 1 and 2 as well as the second telecentric telescope are mounted on translation stages to allow the optical path length to be equalised initially. The electrical signals generated by the two CCD line-scan cameras are balanced and differentially amplified before being digitised by an A/D converter (with 8-bit precision). Due to the increased frame rate of the line-scan cameras, the sample object can be displaced at constant speed, and acquisition time can be reduced significantly.

5.5 Data Processing

The amplitude of the interference signal is recovered using a method which is in principle identical to that described in section 4.2. However, phase shifts are induced exclusively by the continuous displacement of the object along the optic axis (no reference mirror shift). The acquisition software was modified to allow post processing of data collected in real time. As the sample object is displaced over a range ∆d at a constant speed, v, data from the CCDs is collected and stored continuously (see figure 5.3). The intensity is integrated over the exposure time of the CCD and measured at regular intervals corresponding to the line rate, fl, of the CCD camera. Given that the object displacement during this exposure time is small, the intensity values I(x, di) at pixel position x correspond to regular discrete object positions di. For a set of N subsequent object positions dj, ..., dj+N (adjacent di object positions), an average intensity Ī(x, dj) is computed:

Ī(x, dj) = (1/N) Σ_{i=j}^{j+N} Ii(x, di).    (5.15)

The interference amplitude is then approximated as:

A(x, dj) = (2/N) Σ_{i=j}^{j+N} |Ii(x, di) − Ī(x, dj)|.    (5.16)

A(x, dj) characterises the interference amplitude and is independent of the incoherent background Ī(x, dj). The image contrast can be improved and noise reduced by increasing N. However, the speed of the object translation stage, v, and the line rate of the CCD camera, fl, should be chosen so that the interference amplitude does not change appreciably in the interval dj to dj+N, i.e. N ≪ lc fl / v, where lc is the coherence length.
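A sketch of this processing scheme (equations 5.15 and 5.16) is given below for a simulated continuous scan of a single pixel; the stage speed, line rate, coherence length and window length are assumed values chosen to respect N ≪ lc·fl/v.

```python
import numpy as np

v_um_s, line_rate_hz = 125.0, 1000.0    # assumed stage speed and line rate
step_um = v_um_s / line_rate_hz         # 0.125 um between intensity samples
lc_um, wavelength_um = 25.0, 0.83

d = np.arange(0.0, 90.0, step_um)       # object positions d_i
gamma = np.exp(-(((d - 45.0) / (lc_um / 2.0)) ** 2))  # assumed envelope
signal = 100.0 + 40.0 * gamma * np.sin(4.0 * np.pi * d / wavelength_um)

N = 20                                  # window length; N << lc*fl/v = 200
n_windows = d.size // N
amplitude = np.empty(n_windows)
for j in range(n_windows):
    window = signal[j * N:(j + 1) * N]
    mean_j = window.mean()                                   # equation 5.15
    amplitude[j] = 2.0 * np.abs(window - mean_j).sum() / N   # equation 5.16

peak = int(np.argmax(amplitude))
print(f"envelope peak near d = {(peak + 0.5) * N * step_um:.1f} um "
      f"(true peak at 45.0 um)")
```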


Figure 5.2: Experimental arrangement implementing a balanced Coherence Radar technique


Figure 5.3: Data processing (flow: move the object to its initial position; start moving it at constant speed; digitise and store the signals from the CCD cameras until the depth range is covered; stop the translation stage; compute the interference amplitude)


Figure 5.4: Intensity variation along the CCD line-scan sensor

5.6 Experimental Results

Using the experimental arrangement in figure 5.2 we investigated a number of reflective sample objects to assess the performance of balanced Coherence Radar.

Initially, we assessed the efficiency with which local reflectivity variations in the image can be compensated by balanced detection. A test object with a periodic reflectivity variation (figure 5.9) was placed in the object arm of the interferometer while the reference arm was blocked. The intensity variation due to the object reflectivity was measured using just one CCD camera and can be seen in figure 5.4. Then, using both cameras and balanced detection, we measured the variations again. As the results in figure 5.5 demonstrate, some residual intensity variations still remain. However, although perfect bias compensation could not be achieved, the amplitude of the variations is reduced by a factor of approximately two compared to the unbalanced case.

A second experiment was performed which demonstrates the ability of the system to measure interference. The intensity variations produced by displacing a flat mirror over a range of 45 µm were measured and are presented as a false colour image in figure 5.7. This shows the sinusoidal variations due to the object displacement as a function of the pixel position. Variations of fringe visibility (or amplitude of variations) along the object displacement axis are due to the low-coherence property of the source, while the variations along the pixel axis are due to non-uniform illumination.


Figure 5.5: Remaining intensity variation after subtraction of the signals from CCD line-scan cameras 1 and 2

Figure 5.6: Anatomy of step structure


Figure 5.7: Interference produced by a flat mirror


Figure 5.8: Interference amplitude

In this experiment, the mirror was aligned approximately perpendicular to the optic axis of the interferometer and displaced at a speed of 125 µm/sec. Since the line rate of the CCD cameras was 1000Hz, the intensity was sampled every 125 nm, yielding the very good fringe resolution seen in figure 5.7.

The effectiveness of the interference amplitude recovery algorithm (section 5.5) can be seen in figure 5.8, which shows the data after the interference amplitude was computed using N=20. We can observe a good correspondence between the large amplitude variation in figure 5.7 and the computed amplitude in figure 5.8.

To demonstrate the surface profiling capabilities of the system we constructed a periodic step structure consisting of alternating mirror and metal surfaces. Figure 5.6 illustrates how this structure was made using a mirror and a metal grating. The structure was placed in the object arm of the system and the interference was measured while translating the object over a range of 330 µm. The acquisition was performed in only 2.2 seconds. A surface profile of the step structure was then achieved by finding the object positions at which the maximum interference amplitude occurs. Figure 5.9 shows the interference amplitude computed using N=10, as well as the position of the surface obtained using the peak search (indicated by a white line). The periodic step structure is clearly visible and measurements of the step height (≈ 200µm) and width (≈ 500µm) correspond well with the dimensions of the structure (figure 5.6).

Finally, we demonstrate the ability of the balanced technique to measure multilayer objects.


Figure 5.9: Interference amplitude and surface profile (white line) of the periodic step structure


Figure 5.10: Interference amplitude peaks produced by air-glass reflections in a stack of 20 glass plates

This was investigated by measuring a stack of glass plates (as in section 3.3.2 on page 65). The object, consisting of 20 microscope cover slips of 120 µm thickness, was placed in the object arm and translated over a range of 600 µm at a speed of 125 µm/sec. The intensity was sampled with a CCD line rate of 1500Hz, yielding 7200 lines in just 4.8 seconds. Figure 5.10 shows the amplitude of the interference signal as measured for one pixel. We can clearly see the interference peaks associated with the reflections from the first four air-glass interfaces in the stack of glass plates.

5.7 Conclusion

In this chapter, we have demonstrated a new balanced technique which allows detection of weak interference signals without using A/D converters of high dynamic range. The advantage gained was theoretically derived:

1. For the investigation of a multilayer object when a large fraction of the returned light is incoherent.

2. When the object of interest produces an image with large variations of intensity from pixel to pixel.


A new data acquisition and processing scheme was developed which allows continuous object movement and post processing, thus substantially reducing the required acquisition time.

In addition, the following capabilities of the balanced system were verified experimentally:

1. Reduction of static intensity variations across the detector when observing a surface (in the absence of interference).

2. Interference detection using differential signals from two CCD line-scan cameras.

3. Interference amplitude recovery using the new processing scheme.

4. Profiling of discontinuous surfaces.

5. Imaging of multilayer objects.


Chapter 6

Conclusion

6.1 Summary

In this thesis, we have described optical Coherence Radar, a low-coherence interferometry method which allows three dimensional imaging. We have used this method to study the surface topography of opaque objects and the internal structure of translucent samples and have explored a number of potential applications. In the past, methods similar in nature to Coherence Radar have been almost exclusively used to measure the surface topography of rough opaque objects. In this thesis we have demonstrated that, with suitable modifications, it is possible to use Coherence Radar to obtain tomographic images or sections of translucent objects composed of several partially reflecting layers. We have successfully obtained longitudinal images of human fundus tissue, demonstrating the potential of the technique as a clinical tool for fundus examinations. In addition, we have demonstrated the use of balanced detection in conjunction with two CCD line-scan cameras, enabling the construction of high performance systems from low-cost, standard components.

In chapter 2 the measurement of surface topography and, in particular, the study and analysis of hypervelocity impact craters using Coherence Radar was investigated. We have shown that the system can deliver topographic measurements with a depth accuracy of about 2 µm. Coherence Radar is ideally suited for measurements of rough surfaces containing large discontinuities and steep walls where sub-micron accuracy is not required. Large objects can be accommodated, and surface measurements over areas of at least 10 by 10 cm may be obtained. Shadowing in the presence of steep walls is avoided through the use of collimated illumination, making Coherence Radar suitable for the inspection of mechanical parts containing drill holes and milled slots. Potential applications could also include monitoring of manufacturing tolerances, inspection of gears, seals and injection moulds, assessment of deformation and quality assurance of many types of manufactured parts. The study of impact craters has demonstrated the applicability of Coherence Radar in space science. In addition, the Coherence Radar arrangement for such applications is opto-mechanically simple, robust and may be constructed from standard low cost components.

In chapter 3 we have made, what is to our knowledge, the first successful attempts to measure the position of reflecting layers in objects which are partially transmitting using Coherence Radar. By displaying the strength of the interference signal measured by Coherence Radar it was possible to locate the boundaries between materials of different refractive index.



In an initial experiment, we obtained a longitudinal tomographic image of a number of thin glass plates arranged in a stack. A theoretical model of such an object predicts a limit to the total number of reflective interfaces which can be measured, and we assume that these results generally apply even to objects of less regular structure. A first application of multilayer imaging is the evaluation of impact damage sustained by a solar cell retrieved from space. A series of transverse sections obtained using Coherence Radar shows a crater which has penetrated several solar cell layers. Other potential applications may include surface measurements of opaque structures embedded in or covered by a transparent medium, measurement of layer thickness, deformation studies of transparent objects such as plastics, quality assurance and monitoring of impurities and inclusions in manufactured parts.

In chapter 4 the measurement of the human fundus layers was investigated. The measurement of retinal thickness and shape is of particular interest to ophthalmology. However, high resolution measurements of the human fundus have, until now, been obtained exclusively by beam scanning low-coherence methods such as OCT. We propose that Coherence Radar can potentially offer an attractive alternative to OCT and demonstrated this by obtaining two longitudinal sections of post-mortem retinal tissue. The use of a CCD sensor offers speed, cost and stability advantages not enjoyed by OCT and other low-coherence systems. In addition, the experimental arrangement is simple and robust. We suggest that the main potential of Coherence Radar may well lie in the area of clinical ophthalmic imaging, where speed is essential. Coherence Radar can potentially acquire a transverse (or en face) image section of the retina in less than 0.1 seconds using eye safe illumination at 830nm. This is substantially faster than demonstrated by current OCT systems [94]. Thus, Coherence Radar may provide a convenient means of investigating the human fundus without the need for mechanical scanning. Although results were of low contrast and required long acquisition times, we estimate that Coherence Radar can offer a significant time advantage over OCT if an improved CCD camera and a high power SLD are implemented.

Chapter 5 describes a modified Coherence Radar system implementing balanced detection through the use of two CCD line-scan cameras and a Mach-Zehnder type interferometer. This technique significantly reduces the required dynamic range of the analogue-to-digital converter in the presence of a large number of highly reflective layers. Although the alignment of the two identical CCD detectors proved difficult, we were able to use the system to measure the step height of a periodic pattern and to implement a new measurement and data processing technique which significantly accelerated the acquisition of longitudinal sections.

6.2 Conclusion

Coherence Radar is a tool for surface measurements of opaque objects as well as for three dimensional imaging of partially transparent objects and, as such, covers similar applications to low-coherence methods employing beam scanning arrangements. Compared to competing techniques it offers significant advantages of opto-mechanical robustness, speed and simplicity. A number of factors currently limit the performance of the technique, in particular the relatively poor scatter rejection and limited acquisition speed obtained using standard hardware. However, these factors could be addressed in future developments of the method.


6.3 Future Work

We suggest a number of solutions to these problems.

Although the measurement of surface topography was on the whole satisfactory, the acquisition speed and the effectiveness of the thresholding method could be improved substantially. The acquisition speed was primarily limited by the computational overhead associated with phase stepping and thus could be increased by the use of faster hardware such as dedicated signal processing units. In addition, we suggest that a more efficient and accurate algorithm be used for surface finding and that the method of threshold evaluation be improved.

The main problems in multilayer imaging, especially in conjunction with scattering materials such as the fundus, are the low contrast of the images and the slow acquisition speed. As has already been shown, the required dynamic range of the analogue to digital conversion may be reduced by the use of balanced detection. In addition, we expect that the availability of high performance imaging systems and more powerful computers in the near future will eliminate many of the time constraints related to image acquisition and data processing speed.

Currently, the system performance is still limited by the lack of available high power spatially filtered light sources and by the low scatter rejection of the optical arrangement. Both of these problems may potentially be reduced by the use of an extended source and a modified optical arrangement. As has been demonstrated by Sun et al. [100], the use of an extended source can provide quasi confocality due to the low spatial coherence of the source. Extended sources are available at much lower cost than SLDs, do not require expensive power supplies and cooling units, and supply very much more power. Further, the quasi confocality may reduce the appearance of speckle and significantly reduce the effect of scatter in biological tissue.


Appendix A

Digital Imaging System

This appendix describes the components of the digital imaging system employed in our experiments and discusses some of the parameters affecting its performance.

The imaging system used in this thesis consists of an analog CCD camera and a frame grabber. The frame grabber converts the analog video signal into a digital image so that it can be stored and processed. The basic functions performed by a video camera and frame grabber may be summarised as follows:

• analog signal processing in the camera

• transmission of data from camera to frame grabber

• analog to digital conversion

• data storage

A.0.1 CCD Sensor

A CCD sensor is a silicon based semiconductor which is divided into many small capacitors. Photons incident on the silicon layer produce photoelectrons which are collected by the capacitors. Each picture element (pixel) is composed of several capacitors to facilitate the movement of charge across the device. Once the device has been exposed to light, the photoelectrons which have accumulated in the capacitors can be transferred along a row of adjacent pixels into a storage area of similar structure. This process, called the readout, clears the charges and allows a new exposure. In CCDs, noise is introduced mainly as a result of the readout process, but also as a result of thermal electrons. The noise floor is an important parameter in determining the sensitivity and dynamic range of a CCD array. Noise may be reduced by decreasing the readout speed (at the cost of frame rate) and by lowering the temperature of the sensor through cooling. The size of the capacitor which collects the photoelectrons, the full well capacity, limits the amount of exposure before saturation. A large full well capacity and a low noise floor are desirable since the dynamic range is determined by their ratio.

A.1 CCD Camera

CCD detectors offer a high sensitivity and a linear intensity response, but suffer from a poor dynamic range when compared to single photo-detectors.



A CCD video camera combines such a solid state sensor with suitable electronics for continuous readout and signal conditioning. The supplied signal is usually analog and conforms to a standard video format such as CCIR or RS-170, which deliver interlaced images at a rate of 25 to 30 full frames per second. The exposure time can be controlled by the readout process of the CCD and usually does not require a mechanical shutter. When operating according to the CCIR standard, 50 interlaced images or half frames are read out from alternating odd or even rows every second, limiting the maximum exposure time to 1/50 seconds per half frame. In order to adapt the camera to display optimum contrast on a tube monitor, most video cameras offer a selectable non-linear intensity response, termed gamma correction. Also, the electronic gain of the signal can be increased in most cameras, but invariably at the cost of increased noise.

A.1.1 The TM520 Video Camera

The Pulnix TM520 video camera employs a Sony ICX039BLA 1/2 inch CCD image sensor and operates according to the CCIR monochrome video standard. The CCD has 752(H) by 582(V) effective pixels and supports exposure times from 1/60 to 1/10000 second. Gamma correction and gain settings can be configured internally. In order to obtain accurate intensity measurements, the gamma correction was selected to deliver a linear response (gamma = 1). The gain setting was varied according to the application, but a low gain setting is preferable where possible, since it reduces the noise.

A.1.2 The Thomson Linescan Camera

In chapter 5 two linear CCD cameras were used. Each was assembled from a Thomson TH7811A linear CCD sensor and a Thomson TH 7931D drive module. The sensors have 1728 pixels each (pixel size: 13 by 13 µm) and provide a dynamic range of 6000:1. The linear CCD cameras were able to operate at a maximum line rate of approximately 1kHz.

A.2 Frame Grabber

The general functions performed by a frame grabber are analog signal conditioning, analog to digital conversion and data communication with a host computer. Some frame grabbers perform a variety of additional functions such as on-board processing, display and storage. The accuracy of the frame grabber affects the quality of the measurements as it may introduce further noise and distortions and reduce the dynamic range. A frame grabber may be characterised primarily by its intensity resolution and its acquisition speed. There is usually a trade-off between the two. Since the majority of analog video cameras have a dynamic range of less than 1000:1, most frame grabbers digitise a video signal with only 8-bit resolution, i.e. a maximum dynamic range of 256:1.

A.2.1 The Bit Flow Frame Grabbers

Two frame grabbers were used for the work presented in this thesis. The systems described in chapters 2 to 4 consisted of the Pulnix TM520 video camera and a Bit Flow 'Video Raptor' standard frame grabber. In chapter 5 this was replaced with a Bit Flow 'Data Raptor' which, although identical in many respects, has the additional capability to synchronise and condition signals derived from non-standard video cameras, such as our two Thomson line-scan cameras. Both frame grabbers digitise the video signal with 8-bit accuracy. With a maximum clock speed of 40MHz they can perform 40 million analog to digital conversions per second, i.e. digitise up to 40 million pixels per second. This allowed us to acquire 25 full frames per second of standard video in real time. However, since the 'Video Raptor' accepts only standard video signals and its clock speed is fixed, it cannot be adjusted to the number of pixels on the CCD sensor of the Pulnix camera. As a consequence, the signal consisting of 752 pixels per line was oversampled to yield 768 values. Although no information is lost in this way, the physical location of CCD pixels does not correspond to the digitised images and aliasing may occur, resulting in artifacts.


Figure A.1: Noise distribution at maximum gain (Gaussian fit: mean = 14.8, std. = 8.5)

Raptor’ which although identical in many respects has the additional capability tosyncronise and condition signals derived from non-standard video cameras, such as ourtwo Thompson Linescan cameras. Both frame grabbers digitise the video signal with8-bit accuracy. With a maximum clock speed of 40MHz they can perform 40 millionanalog to digital conversions per second i.e. acquire images with a total of 1 millionpixels per second. This allowed us to acquire 25 full frames per second of standard videoin real time. However, since the ’Video Raptor’ accepts only standard video signals andit’s clock speed is fixed it can not be adjusted to the number of pixels on the CCDsensor of the Pulnix camera. As a consequence the signal consisting of 752 pixels perline was oversampled to yield 768 values. Although no information is lost in this way,the physical location of CCD pixels does not correspond to the digitised images andaliasing may occur resulting in artifacts.

A.3 Noise

We measured the noise of the Pulnix TM520 video camera and Bit Flow 'Video Raptor' by repeatedly recording the signal from one pixel in an image. A noise distribution was then derived using 1000 samples. Since the gain setting of the video camera affects the noise, the noise was investigated for both maximum and minimum gain settings. A low gain setting was used in the work presented in chapters 2 and 3, whereas a high gain setting was used for the investigation of the human fundus in chapter 4. The distribution of noise for both low and high gain is shown in figures A.1 and A.2, and the mean and standard deviation were evaluated using a Gaussian fit. We define the noise floor as the mean plus standard deviation (14.8 + 8.5).
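The sketch below runs this noise-floor estimate on synthetic pixel readings standing in for the 1000 recorded samples; the Gaussian parameters are assumptions chosen near the fitted values of figure A.1, not the measured data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for 1000 repeated readings of one pixel at high gain.
samples = rng.normal(loc=14.8, scale=8.5, size=1000)

mean, std = samples.mean(), samples.std(ddof=1)
noise_floor = mean + std      # definition used in the text
print(f"mean = {mean:.1f} DN, std = {std:.1f} DN, "
      f"noise floor = {noise_floor:.1f} DN")   # ~23.3 DN, as in table A.1
```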


Figure A.2: Noise distribution at minimum gain (Gaussian fit: mean = 9, std. = 2)


Figure A.3: Experimental configuration for the measurement of CCD camera sensitivity


A.4 Sensitivity

We have also investigated the sensitivity of the Pulnix video camera operating at a high gain setting. The experimental arrangement used for this is shown in figure A.3. The camera was illuminated with a superluminescent diode (SLD) emitting light at a wavelength of 830nm. A calibrated power meter was used to measure the total power, P0, incident on the CCD sensor area. The detector was then removed from the beam and the CCD was exposed for 1/60 seconds. The resulting image was stored for further analysis. The measurements were repeated for different illumination powers.

Since the power is not distributed evenly across the area of the CCD device, the power incident per pixel cannot be derived simply by dividing the total power by the CCD sensor area. However, we observed that the intensity distribution approximated a Gaussian. We can therefore describe the power incident per pixel, P(x, y), as a two-dimensional Gaussian function of the form:

P(x, y) = \frac{P_0}{2\pi\sigma^2} \exp\left[ -\frac{(x - x_0)^2 + (y - y_0)^2}{2\sigma^2} \right] \qquad (A.1)

where x and y are the pixel coordinates and (x0, y0) is the position of the peak.


[Figure A.4: Sensitivity calibration (exposure time 1/60 second). Intensity (8-bit DN) plotted against power per pixel (nW).]

Type                       Units     Value
Digital noise level        DN        23.3
NEE                        pJ/cm2    371
Digital saturation level   DN        255
SEE                        pJ/cm2    2456
Dynamic range              -         7:1

Table A.1: Digital imaging system performance determined experimentally at high gain setting


The standard deviation, σ, is given by:

\sigma = \frac{r_{1/e}}{\sqrt{2}} \qquad (A.2)

where r_{1/e}, defined by r_{1/e}^2 = x_{1/e}^2 + y_{1/e}^2, is the radius at which P(\pm x_{1/e}, \pm y_{1/e}) = P_{max}/e, and Pmax is the maximum power per pixel. σ can therefore be determined experimentally from the recorded images by finding r_{1/e}. The maximum power per pixel is then given by:

P_{max} = \frac{P_0}{2\pi\sigma^2} \qquad (A.3)

and corresponds to the peak numerical value in the digitised image.
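Equations A.1 to A.3 translate directly into a short routine. The sketch below is a hypothetical helper, not the analysis code used at the time; it assumes `image` is a recorded frame whose peak has already been located at pixel (x0, y0):

```python
import numpy as np

def sigma_from_spot(image, x0, y0):
    """Estimate sigma (eq. A.2) from r_1/e, the radius at which the
    recorded intensity falls to 1/e of its peak value."""
    peak = image[y0, x0]
    ys, xs = np.nonzero(image >= peak / np.e)   # pixels above the 1/e level
    r_1e = np.sqrt((xs - x0) ** 2 + (ys - y0) ** 2).max()
    return r_1e / np.sqrt(2)                    # eq. A.2

def peak_power_per_pixel(P0, sigma):
    """Peak of the Gaussian power distribution (eq. A.3), in units of P0."""
    return P0 / (2 * np.pi * sigma ** 2)
```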

Figure A.4 was derived by plotting five such (Pmax, DN) pairs for different illumination powers, and a linear fit was performed to estimate the sensitivity. The relationship between the power incident per pixel, P, and the digital output number (DN) is then:

DN = A + B \times P \qquad (A.4)

where A = −18 ± 35 DN and B = (2.6 ± 0.5) × 10^6 DN/nW.

Using the relationship between input power and output digital number (DN), together with the noise floor at high gain (figure A.1), the noise equivalent exposure (NEE) and saturation equivalent exposure (SEE) can be determined. The NEE is defined as the exposure (light energy per unit area) required to generate an output signal equal to the output noise level (23.3 DN); it describes the lower limit on detectable light energy. The SEE is the exposure which produces an output equal to the saturation level (255 DN). The dynamic range is the ratio of SEE to NEE. Values of SEE and NEE are quoted in pJ/cm2, determined using the area of one pixel, 4.28 × 10^−5 cm2.
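The arithmetic linking the linear fit to table A.1 can be summarised as follows (all input values are restated from the text; the pJ/cm2 conversion reproduces the tabulated figures when a one-second exposure convention is assumed, which is an inference rather than a statement from the measurement):

```python
A, B = -18.0, 2.6e6             # offset (DN) and sensitivity (DN/nW), eq. A.4
noise_dn, sat_dn = 23.3, 255.0  # noise floor and saturation level (DN)
pixel_area_cm2 = 4.28e-5        # area of one pixel

p_noise = (noise_dn - A) / B    # power per pixel (nW) at the noise-level output
p_sat = (sat_dn - A) / B        # power per pixel (nW) at a saturated output
# Energy per unit area, assuming a one-second exposure convention (inference):
nee = p_noise * 1e-9 / pixel_area_cm2 * 1e12    # ~371 pJ/cm2
see = p_sat * 1e-9 / pixel_area_cm2 * 1e12      # ~2453 pJ/cm2
print("NEE %.0f, SEE %.0f pJ/cm2, dynamic range %.0f:1" % (nee, see, see / nee))
```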


Appendix B

Publications Arising from this Thesis

B.1 Refereed Journal Papers

1. L. Kay, A. Podoleanu, M. Seeger, and C. J. Solomon. A new approach to the measurement and analysis of impact craters. International Journal of Impact Engineering, 19(8):739–753, 1996.

2. Adrian Gh. Podoleanu, Mauritius Seeger, George M. Dobre, David J. Webb, and David A. Jackson. Transversal and longitudinal images from the retina of the living eye using low-coherence reflectometry. Journal of Biomedical Optics, 3(1), 1997.

3. Adrian Gh. Podoleanu, George Dobre, Mauritius Seeger, David J. Webb, and David A. Jackson. Low-coherence interferometry for en-face imaging of the retina. Submitted to Laser and Light in Ophthalmology, 1997.

4. C. J. Solomon, M. Seeger, L. Kay, and J. Curtis. Automated compact parametric representation of impact craters. Submitted to International Journal of Impact Engineering, 1997.

B.2 Conference Papers

1. Mauritius Seeger, Adrian Podoleanu, Chris J. Solomon, and David A. Jackson. 3-D low-coherence imaging for multiple-layer industrial surface analysis. In Conference on Lasers and Electro-Optics, volume 9, page 328, Washington DC 20036-1023, June 1996. Optical Society of America. OSA Technical Digest Series.

2. Mauritius Seeger, Adrian Gh. Podoleanu, and David A. Jackson. Preliminary results of retinal tissue imaging using the Coherence Radar technique. In K. T. V. Grattan, editor, Applied Optics and Optoelectronics, pages 64–68, Techno House, Redcliffe Way, Bristol BS1 6NX, UK, September 1996. Institute of Physics, Institute of Physics Publishing.


3. Mauritius Seeger, Adrian Gh. Podoleanu, and David A. Jackson. CCD based low-coherence interferometry using balanced detection. Submitted to the Conference on Lasers and Electro-Optics, 1998.


Bibliography

[1] J. Jahanmir, B. G. Haggar, and J. B. Hayes. The scanning probe microscope. Scanning Microscopy, 6(3):625–660, 1992.

[2] J. F. Song and Theodore V. Vorburger. Stylus profiling at high resolution and low force. Applied Optics, 30(1):42–50, 1991.

[3] Jean M. Bennett, Virgil Elings, and Kevin Kjoller. Recent developments in profiling optical surfaces. Applied Optics, 32(19):3442–3447, 1993.

[4] Christopher John Solomon. Studies of a Semiconductor-Based Compton Camera for Radionuclide Imaging in Biology and Medicine. PhD thesis, Royal Marsden Hospital, Sutton, Surrey, September 1988.

[5] Paul T. Callaghan. Principles of Nuclear Magnetic Resonance Microscopy. Clarendon, Oxford, 1991.

[6] Joseph W. Sassani and Mary D. Osbakken. Anatomic features of the eye disclosed with nuclear magnetic resonance imaging. Arch Ophthalmol, 102:541–546, 1984.

[7] Joseph A. Izatt, Michael R. Hee, David Huang, James G. Fujimoto, Eric A. Swanson, Charles P. Lin, and Joel S. Schuman. Ophthalmic diagnostics using optical coherence tomography. In Ophthalmic Technologies III, volume 1877, pages 136–144. SPIE, 1993.

[8] Gerald V. Blessing, John A. Slotwinski, Donald G. Eitzen, and Harry M. Ryan. Ultrasonic measurements of surface roughness. Applied Optics, 32(19):3433–3437, 1993.

[9] Donal B. Downey, David A. Nicolle, Morris F. Levin, and Aaron Fenster. Three-dimensional ultrasound imaging of the eye. Eye, 10:75–81, 1996.

[10] Tom H. Williamson and Alon Harris. Color Doppler ultrasound imaging of the eye and orbit. Survey of Ophthalmology, 40(4):255–267, 1996.

[11] M. Okutomi and T. Kanade. A multiple-baseline stereo. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(4):353–363, 1993.

[12] M. G. Gee and N. J. McCormick. The application of confocal scanning microscopy to the examination of ceramic wear surfaces. Journal of Physics D: Applied Physics, 25:A230–A235, 1992.

[13] David Shotton, editor. Electronic Light Microscopy: Techniques in Modern Biomedical Microscopy, pages 231–246. Wiley-Liss, 1993.


[14] R. G. King and P. M. Delaney. Confocal microscopy. Materials Forum, 18:21–29, 1994.

[15] D. S. Dilworth, E. N. Leith, and J. L. Lopez. 3-dimensional confocal imaging of objects embedded within thick diffusing media. Applied Optics, 30(14):1796–1803, 1991.

[16] J. V. Jester, P. M. Andrews, W. M. Petroll, M. A. Lemp, and H. D. Cavanagh. In vivo, real-time confocal imaging. Journal of Electron Microscopy Techniques, 18(1):50–60, 1991.

[17] Robert H. Webb, George W. Hughes, and Francois C. Delori. Confocal scanning laser ophthalmoscope. Applied Optics, 26(8):1492–1499, 1987.

[18] W. H. Woon, F. W. Fitzke, A. C. Bird, and J. Marshall. Confocal imaging of the fundus using a scanning laser ophthalmoscope. British Journal of Ophthalmology, 76:470–474, 1992.

[19] Dov Weinberger, Hadas Stiebel, Dan D. Gaton, Ethan Priel, and Yuval Yassur. Three-dimensional measurements of idiopathic macular holes using a scanning laser tomograph. Ophthalmology, 102(10):1445–1449, 1995.

[20] A. von Ruckmann, F. W. Fitzke, and A. C. Bird. Distribution of fundus autofluorescence with a scanning laser ophthalmoscope. British Journal of Ophthalmology, 79:407–412, 1995.

[21] Masanori Idesawa, Toyohiko Yatagai, and Takashi Soma. Scanning moire method and automatic measurement of 3-D shapes. Applied Optics, 16(8):2152–2162, 1977.

[22] Mitsuo Takeda, Hideki Ina, and Seiji Kobayashi. Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. J. Opt. Soc. Am., 72(1):156–160, 1982.

[23] Mitsuo Takeda and Kazuhiro Mutoh. Fourier transform profilometry for the automatic measurement of 3-D object shapes. Applied Optics, 22(24):3977–3982, 1983.

[24] Katherine Creath. Step height measurement using two-wavelength phase-shifting interferometry. Applied Optics, 26(14):2810–2816, 1987.

[25] J. C. Wyant. Testing aspherics using two-wavelength holography. Applied Optics, 10(9):2113, 1971.

[26] Hashim Atcha. Optoelectronic Speckle Pattern Interferometry. PhD thesis, Cranfield University, December 1994.

[27] Deepak Uttamchandani and Ivan Andonovic, editors. Principles of Modern Optical Systems, Volume 2. Artech, 1992.

[28] E. A. Swanson, D. Huang, M. R. Hee, J. G. Fujimoto, C. P. Lin, and C. A. Puliafito. High-speed optical coherence domain reflectometry. Optics Letters, 17(2):151–153, 1992.


[29] Thomas Dresel, Gerd Hausler, and Holger Venzke. Three-dimensional sensing of rough surfaces by coherence radar. Applied Optics, 31:919, 1992.

[30] Leslie Deck and Peter de Groot. High-speed noncontact profiler based on scanning white-light interferometry. Applied Optics, 33(31):7334, 1994.

[31] Eric A. Swanson, Michael R. Hee, Guillermo J. Tearney, and James G. Fujimoto. Application of optical coherence tomography in non-destructive evaluation of material microstructure. In Conference on Lasers and Electro-Optics, volume 9, pages 326–327, Washington DC 20036-1023, June 1996. Optical Society of America. OSA Technical Digest Series.

[32] E. A. Swanson, J. A. Izatt, M. R. Hee, D. Huang, C. P. Lin, J. S. Schuman, C. A. Puliafito, and J. G. Fujimoto. In vivo retinal imaging by optical coherence tomography. Optics Letters, 18(21):1864–1866, 1993.

[33] Gerd Hausler and Jochen Neumann. Coherence Radar - an accurate 3-D sensor for rough surfaces. In Optics, Illumination and Image Sensing for Machine Vision VII, volume 1822, pages 200–205. SPIE, 1992.

[34] Adrian Gh. Podoleanu, George M. Dobre, David J. Webb, and David A. Jackson. Fiberised set-up for retinal imaging of the living eye using low coherence interferometry. In Biomedical Applications of Photonics, Savoy Place, London WC2R 0BL, UK, April 1997. IEE, The Institution of Electrical Engineers. Reference Number: 1997/124.

[35] Adrian Gh. Podoleanu, George M. Dobre, David J. Webb, and David A. Jackson. Fiberised set-up for eye-length measurement. Optics Communications, 137:397–405, 1997.

[36] X. Clivaz, F. Marquis-Weible, R. P. Salathe, R. P. Novak, and H. H. Gilgen. High-resolution reflectometry in biological tissues. Optics Letters, 17(1):4–6, 1992.

[37] Adrian Gh. Podoleanu, George M. Dobre, David J. Webb, and David A. Jackson. Coherence imaging by use of a Newton rings sampling function. Optics Letters, 21(21):1789–1791, 1996.

[38] Carmen A. Puliafito, Michael R. Hee, Charles P. Lin, Elias Reichel, Joel S. Schuman, Jay S. Duker, Joseph A. Izatt, Eric A. Swanson, and James G. Fujimoto. Imaging of macular diseases with optical coherence tomography. Ophthalmology, 102(2):217–229, 1995.

[39] Joseph A. Izatt, Michael R. Hee, Eric A. Swanson, Charles P. Lin, David Huang, Joel S. Schuman, Carmen A. Puliafito, and James G. Fujimoto. Micrometer-scale resolution imaging of the anterior eye in vivo with optical coherence tomography. Arch Ophthalmol, 112:1584–1589, 1994.

[40] Michael R. Hee, Carmen A. Puliafito, Carlton Wong, Elias Reichel, Jay S. Duker, Joel S. Schuman, Eric A. Swanson, and James G. Fujimoto. Optical coherence tomography of central serous chorioretinopathy. American Journal of Ophthalmology, 120(1):65–74, 1995.


[41] Christoph K. Hitzenberger. Measurement of corneal thickness by low-coherence interferometry. Applied Optics, 31(31):6637–6642, 1992.

[42] Stephen A. Boppart, Gary J. Tearney, Brett Bouma, James G. Fujimoto, and Mark E. Brezinski. Optical coherence tomography of developing embryonic morphology. In Conference on Lasers and Electro-Optics, volume 9, pages 55–56, Washington DC 20036-1023, June 1996. Optical Society of America. OSA Technical Digest Series.

[43] Mark Bashkansky, M. D. Duncan, Manfred Kahn, J. Reintjes, and Phillip R. Battle. Subsurface defect detection in ceramic materials using an optical gated scatter reflectometer. In Conference on Lasers and Electro-Optics, volume 9, pages 327–328, Washington DC 20036-1023, June 1996. Optical Society of America. OSA Technical Digest Series.

[44] D. N. Wang, S. Chen, K. T. V. Grattan, and A. W. Palmer. A low coherence 'white light' interferometric sensor for eye length measurement. Rev. Sci. Instrum., 66:3438, 1995.

[45] A. Gh. Podoleanu, S. R. Taplin, D. J. Webb, and D. A. Jackson. Channelled spectrum display using a CCD array for student laboratory demonstrations. European Journal of Physics, 15:266–271, 1994.

[46] H. Perrin, P. Sandoz, and G. Tribillon. Longitudinally dispersive profilometer. Pure Applied Optics, 4:219, 1995.

[47] J. Schwider and Liang Zhou. Dispersive interferometric profilometer. Optics Letters, 19(13):995, 1994.

[48] Thomas M. Merklein. High resolution measurement of multilayer structures. Applied Optics, 29(4):505, 1990.

[49] W. Linnik. Ein Apparat für mikroskopisch-interferometrische Untersuchung reflektierender Objekte (Mikrointerferometer). Akad. Nauk. SSSR Dokl., 1:18, 1933.

[50] Mark Davidson, Kalman Kaufman, Isaac Mazor, and Felix Cohen. An application of interference microscopy to integrated circuit inspection and metrology. In Proc. of Integrated Circuit Metrology, Inspection, and Process Control, volume 775, page 233. SPIE, 1987.

[51] James C. Wyant and Katherine Creath. Advances in interferometric optical profiling. Int. J. Machine Tools and Manufacture, 32(1–2):5–10, 1992.

[52] Gordon S. Kino and Stanley S. C. Chim. Mirau correlation microscope. Applied Optics, 29(26):3775, 1990.

[53] S. M. Pandit and N. Jordache. Data-dependent-systems and Fourier-transform methods for single-interferogram analysis. Applied Optics, 34(26):5945–5951, 1995.

[54] Z. Wang and P. J. Bryanston-Cross. An algorithm of spatial phase-shifting interferometry. In K. T. V. Grattan, editor, Applied Optics and Optoelectronics, pages 64–68, Techno House, Redcliffe Way, Bristol BS1 6NX, UK, September 1996. Institute of Physics, Institute of Physics Publishing.


[55] J. C. Wyant, B. F. Oreb, and P. Hariharan. Testing aspherics using two-wavelength holography: use of digital electronic techniques. Applied Optics, 23(22):4020–4023, 1984.

[56] Paul J. Caber. Interferometric profiler for rough surfaces. Applied Optics, 32:3438, 1993.

[57] Hong Zhao, Wonyi Chen, and Yuahan Tan. Phase-unwrapping algorithm for the measurement of three-dimensional object shapes. Applied Optics, 33(20):4497–4500, 1994.

[58] N. Balasubramanian. Optical system for surface topography measurement. Technical Report 4340306, United States Patent, July 1982.

[59] M. Davidson, K. Kaufman, and I. Mazor. The coherence probe microscope. Solid State Technology, page 57, 1987.

[60] Stanley S. C. Chim and Gordon S. Kino. Three-dimensional image realization in interference microscopy. Applied Optics, 31:2550–2553, 1992.

[61] P. de Groot and Leslie Deck. Three-dimensional imaging by sub-Nyquist sampling of white-light interferograms. Optics Letters, 18(17):1462, 1993.

[62] Peter de Groot and Leslie Deck. Surface profiling by analysis of white-light interferograms in the spatial frequency domain. Journal of Modern Optics, 42(2):389–401, 1995.

[63] Zygo industry applications. Webpage, 1997. http://www.zygo.com/.

[64] Born and Wolf. Principles of Optics, pages 767–772. Pergamon Press, sixth edition, 1993.

[65] Patrick Sandoz and Gilbert Tribillon. Profilometry by zero-order interference fringe identification. Journal of Modern Optics, 40(9):1691–1700, 1993.

[66] Hajime Yano. The Physics and Chemistry of Hypervelocity Impact Signatures on Spacecraft: Meteoroid and Space Debris. PhD thesis, The University of Kent at Canterbury, Canterbury, Kent, UK, September 1995.

[67] A. S. Levine, editor. First Post-Retrieval Symposium, volume 3134 of LDEF - 69 Months in Space. NASA, 1991.

[68] A. S. Levine, editor. Second Post-Retrieval Symposium, volume 3194 of LDEF - 69 Months in Space. NASA, 1992.

[69] A. S. Levine, editor. Third Post-Retrieval Symposium, volume 3275 of LDEF - 69 Months in Space. NASA, 1993.

[70] D. C. Hill, M. F. Rose, S. R. Best, and M. S. Crumpler. The effect of impact angle on craters formed by hypervelocity particles. In Third Post-Retrieval Symposium, volume 3275 of LDEF - 69 Months in Space. NASA, 1993.

[71] R. J. Noll. Zernike polynomials and atmospheric turbulence. J. Opt. Soc. Am., 66:207–211, 1976.


[72] L. Kay, A. Podoleanu, M. Seeger, and C. J. Solomon. A new approach to the measurement and analysis of impact craters. International Journal of Impact Engineering, 19(8):739–753, 1996.

[73] C. J. Solomon, M. Seeger, L. Kay, and J. Curtis. Automated compact parametric representation of impact craters. Submitted to International Journal of Impact Engineering, 1997.

[74] Laurie Kay. Development of a new method for the measurement, analysis and interpretation of impact craters. Grant application made to the Particle Physics and Astronomy Research Council, September 1997.

[75] S. Chen, A. W. Palmer, K. T. V. Grattan, and B. T. Meggitt. Fringe order identification in optical fibre white-light interferometry using centroid algorithm method. Applied Optics, 28(6):553–555, 1992.

[76] Masao Shimoji. Analysis of a conical optical beam deflector insensitive to motor wobble. Applied Optics, 34(13):2305–2315, 1995.

[77] Yajun Li and Joseph Katz. Laser beam scanning by rotary mirrors. I. Modelling mirror-scanning devices. Applied Optics, 34(28):6403–6415, 1995.

[78] P. J. Brosens. Dynamic mirror distortions in optical scanning. Applied Optics, 11(12):2987–2989, 1972.

[79] R. Hradaynath and A. K. Jaiswal. Distortion in a 2-D scan pattern generated by combining a plane mirror and a regular polygon scanner. Applied Optics, 22(4):615–619, 1983.

[80] M. Bail, Gerd Hausler, J. H. Herrmann, M. W. Linder, and R. Ringler. Optical coherence tomography with the "Spectral Radar" - fast optical analysis in volume scatterers by short coherence interferometry. Volume 2925, pages 298–303. SPIE, 1996.

[81] H. Brunner, J. Strohm, M. Hassel, and R. Steiner. Optical coherence tomography (OCT) of human skin with a slow-scan CCD-camera. Volume 2626, pages 273–282. SPIE, 1995.

[82] Eric A. Swanson. Method and apparatus for acquiring images using a CCD detector array and no transverse scanner. Technical Report 5465147, United States Patent, Nov 1995.

[83] Mauritius Seeger, Adrian Podoleanu, Chris J. Solomon, and David A. Jackson. 3-D low-coherence imaging for multiple-layer industrial surface analysis. In Conference on Lasers and Electro-Optics, volume 9, page 328, Washington DC 20036-1023, June 1996. Optical Society of America. OSA Technical Digest Series.

[84] Born and Wolf. Principles of Optics, page 42. Pergamon Press, sixth edition, 1993.

[85] Takashi Fukano and Ichirou Yamaguchi. Simultaneous measurement of thickness and refractive indices of multiple layers by a low-coherence confocal microscope. Optics Letters, 21(23):1942–1944, 1996.


[86] W. V. Sorin and D. F. Gray. Simultaneous thickness and group index measurements using optical low-coherence reflectometry. IEEE Photonics Technology Letters, 4(1):105–107, 1992.

[87] Nori Shibata, Makoto Tsubokawa, Takashi Nakashima, and Shigeyuki Seikai. Temporal coherence properties of a dispersive propagating beam in a fiber-optic interferometer. J. Opt. Soc. Am. A, 4:494–497, 1987.

[88] Eugene Hecht, editor. Optik. Addison-Wesley, 1989.

[89] A. D. Kersey, M. J. Marrone, A. Dandridge, and A. B. Tveten. Optimization and stabilization of visibility in interferometric fiber-optic sensors using input-polarization control. Journal of Lightwave Technology, 6(10):1599–1609, 1988.

[90] Michael R. Hee, David Huang, Eric A. Swanson, and James G. Fujimoto. Polarization-sensitive low-coherence reflectometer for birefringence characterization and ranging. J. Opt. Soc. Am. B, 9(6):903–908, 1992.

[91] Wolfgang Drexler, Christoph K. Hitzenberger, Harald Sattmann, and Adolf F. Fercher. Measurement of the thickness of fundus layers by partial coherence tomography. Optical Engineering, 34(3):701–709, 1995.

[92] A. F. Fercher, K. Mengedoht, and W. Werner. Eye-length measurements by interferometry with partially coherent light. Optics Letters, 14:186–188, 1988.

[93] Michael R. Hee, Joseph A. Izatt, Eric A. Swanson, David Huang, Joel S. Schuman, Charles P. Lin, Carmen A. Puliafito, and James G. Fujimoto. Optical coherence tomography of the human retina. Arch Ophthalmol, 113:325–332, 1995.

[94] Adrian Gh. Podoleanu, Mauritius Seeger, George M. Dobre, David J. Webb, and David A. Jackson. Transversal and longitudinal images from the retina of the living eye using low-coherence reflectometry. Journal of Biomedical Optics, 3(1), 1997.

[95] Adrian Gh. Podoleanu, George Dobre, Mauritius Seeger, David J. Webb, and David A. Jackson. Low-coherence interferometry for en-face imaging of the retina. Submitted to Laser and Light in Ophthalmology, 1997.

[96] Mauritius Seeger, Adrian Gh. Podoleanu, and David A. Jackson. Preliminary results of retinal tissue imaging using the Coherence Radar technique. In K. T. V. Grattan, editor, Applied Optics and Optoelectronics, pages 64–68, Techno House, Redcliffe Way, Bristol BS1 6NX, UK, September 1996. Institute of Physics, Institute of Physics Publishing.

[97] Francois C. Delori and Kent P. Pflibsen. Spectral reflectance of the human ocular fundus. Applied Optics, 28(6):1061–1077, 1989.

[98] Francois C. Delori and Stephen A. Burns. Fundus reflectance and the measurement of crystalline lens density. J. Opt. Soc. Am. A, 13(2):215–226, 1996.

[99] Hai-Pang Chiang, Wei-Sheng, and Jyhpyng Wang. Imaging through random scattering media by using cw broadband interferometer. Optics Letters, 18(7):546–548, 1993.


[100] P. C. Sun and E. Arons. Nonscanning confocal ranging system. Applied Optics, 34(7):1254–1261, 1995.

[101] Mauritius Seeger, Adrian Gh. Podoleanu, and David A. Jackson. CCD based low-coherence interferometry using balanced detection. Submitted to the Conference on Lasers and Electro-Optics, September 1998.

[102] Brian Culshaw and John Dakin, editors. Optical Fibre Sensors: Systems and Applications, Volume 2. Artech House, 1989.

[103] Born and Wolf. Principles of Optics. Pergamon Press, sixth edition, 1993.

[104] W. V. Sorin, D. M. Barns, and S. A. Newton. Optical low-coherence reflectometry with -148 dB sensitivity at 1.55 µm. In Eighth International Conference on Optical Fibre Sensors, pages 1–4, 1993.

[105] K. Takada, A. Himeno, and K. Yukimatsu. Phase-noise and shot-noise limited operation of low coherence optical time domain reflectometry. Applied Physics Letters, 59(20):2483–2485, 1991.