
Field Guide to
Infrared Systems

Arnold Daniels

SPIE Field Guides, Volume FG09

John E. Greivenkamp, Series Editor

Bellingham, Washington USA


Library of Congress Cataloging-in-Publication Data Daniels, Arnold. Field guide to infrared systems / Arnold Daniels. p. cm. -- (The Field guide series ; no. 1:9) Includes bibliographical references and index. ISBN 0-8194-6361-2 (alk. paper) 1. Infrared technology--Handbooks, manuals, etc. I. Title. II. Series: Field guide series (Bellingham, Wash.) ; no. 1:9. TA1570.D36 2006 621.36'2--dc22 2006015467

Published by SPIE—The International Society for Optical Engineering P.O. Box 10 Bellingham, Washington 98227-0010 USA Phone: +1 360 676 3290 Fax: +1 360 647 1445 Email: [email protected] Web: http://spie.org Copyright © 2007 The Society of Photo-Optical Instrumentation Engineers All rights reserved. No part of this publication may be reproduced or distributed in any form or by any means without written permission of the publisher. The content of this book reflects the work and thought of the author. Every effort has been made to publish reliable and accurate information herein, but the publisher is not responsible for the validity of the information or for any outcomes resulting from reliance thereon. Printed in the United States of America.


Introduction to the Series

Welcome to the SPIE Field Guides, a series of publications written directly for the practicing engineer or scientist. Many textbooks and professional reference books cover optical principles and techniques in depth. The aim of the SPIE Field Guides is to distill this information, providing readers with a handy desk or briefcase reference that provides basic, essential information about optical principles, techniques, or phenomena, including definitions and descriptions, key equations, illustrations, application examples, design considerations, and additional resources. A significant effort will be made to provide a consistent notation and style between volumes in the series.

Each SPIE Field Guide addresses a major field of optical science and technology. The concept of these Field Guides is a format-intensive presentation based on figures and equations supplemented by concise explanations. In most cases, this modular approach places a single topic on a page, and provides full coverage of that topic on that page. Highlights, insights, and rules of thumb are displayed in sidebars to the main text. The appendices at the end of each Field Guide provide additional information such as related material outside the main scope of the volume, key mathematical relationships, and alternative methods. While complete in their coverage, the concise presentation may not be appropriate for those new to the field.

The SPIE Field Guides are intended to be living documents. The modular page-based presentation format allows them to be easily updated and expanded. We are interested in your suggestions for new Field Guide topics as well as what material should be added to an individual volume to make these Field Guides more useful to you. Please contact us at [email protected].

John E. Greivenkamp, Series Editor
Optical Sciences Center
The University of Arizona


The Field Guide Series

Keep information at your fingertips with all of the titles in the Field Guide Series:

Field Guide to Geometrical Optics, John E. Greivenkamp (FG01)

Field Guide to Atmospheric Optics, Larry C. Andrews (FG02)

Field Guide to Adaptive Optics, Robert K. Tyson and Benjamin W. Frazier (FG03)

Field Guide to Visual and Ophthalmic Optics, Jim Schwiegerling (FG04)

Field Guide to Polarization, Edward Collett (FG05)

Field Guide to Optical Lithography, Chris A. Mack (FG06)

Field Guide to Optical Thin Films, Ronald R. Willey (FG07)

Field Guide to Spectroscopy, David W. Ball (FG08)

Field Guide to Infrared Systems, Arnold Daniels (FG09)

Field Guide to Interferometric Optical Testing, Eric P. Goodwin and James C. Wyant (FG10)


Field Guide to Infrared Systems

Field Guide to Infrared Systems is written to clarify and summarize the theoretical principles of infrared technology. It is intended as a reference work for the practicing engineer and/or scientist who requires effective practical information to design, build, and/or test infrared equipment in a wide variety of applications.

This book combines numerous engineering disciplines necessary for the development of an infrared system. It describes the basic elements involving image formation and image quality, radiometry and flux transfer, and explains the figures of merit involving detector performance. It considers the development of search infrared systems, and specifies the main descriptors used to characterize thermal imaging systems. Furthermore, this guide clarifies, identifies, and evaluates the engineering tradeoffs in the design of an infrared system.

I would like to acknowledge and express my gratitude to my professor and mentor Dr. Glenn Boreman for his guidance, experience, and friendship. The knowledge that he passed on to me during my graduate studies at CREOL ultimately contributed to the creation of this book. Thanks are extended to Merry Schnell for her hard work and dedication on this project. I voice a special note of gratitude to my kids Becky and Alex for their forbearance, and to my wife Rosa for her love and support.

Lastly, I would particularly like to thank you, the reader, for selecting this book and taking the time to explore the topics related to this motivating and exciting field. I truly hope that you will find the contents of this book interesting and informative.

This Field Guide is dedicated to the memory of my father and brothers.

Arnold Daniels


Table of Contents

Glossary x

Introduction 1
  Electromagnetic Spectrum 1
  Infrared Concepts 2

Optics 3
  Imaging Concepts 3
  Magnification Factors 4
  Thick Lenses 5
  Stop and Pupils 6
  F-number and Numerical Aperture 7
  Field-of-View 8
  Combination of Lenses 9
  Afocal Systems and Refractive Telescopes 10
  Cold-Stop Efficiency and Field Stop 11
  Image Quality 12
  Image Anomalies in Infrared Systems 14
  Infrared Materials 15
  Material Dispersion 19
  Atmospheric Transmittance 21

Radiometry and Sources 22
  Solid Angle 22
  Radiometry 23
  Radiometric Terms 24
  Flux Transfer 26
  Flux Transfer for Image-Forming Systems 27
  Source Configurations 28
  Blackbody Radiators 30
  Planck's Radiation Law 31
  Stefan-Boltzmann and Wien's Displacement Laws 33
  Rayleigh-Jeans and Wien's Radiation Laws 34
  Exitance Contrast 35
  Emissivity 36
  Kirchhoff's Law 37
  Emissivity of Various Common Materials 38
  Radiometric Measure of Temperature 39
  Collimators 41


Performance Parameters for Optical Detectors 42
  Infrared Detectors 42
  Primary Sources of Detector Noise 43
  Noise Power Spectral Density 44
  White Noise 45
  Noise-Equivalent Bandwidth 46
  Shot Noise 48
  Signal-to-Noise Ratio: Detector and BLIP Limits 49
  Generation-Recombination Noise 50
  Johnson Noise 51
  1/f Noise and Temperature Noise 52
  Detector Responsivity 53
  Spectral Responsivity 55
  Blackbody Responsivity 56
  Noise Equivalent Power 57
  Specific or Normalized Detectivity 58
  Photovoltaic Detectors or Photodiodes 59
  Sources of Noise in PV Detectors 60
  Expressions for D*_PV,BLIP, D**_PV,BLIP, and D*_PV,JOLI 61
  Photoconductive Detectors 62
  Sources of Noise in PC Detectors 63
  Pyroelectric Detectors 64
  Bolometers 66
  Bolometers: Immersion Optics 68
  Thermoelectric Detectors 69

Infrared Systems 70
  Raster Scan Format: Single-Detector 70
  Multiple-Detector Scan Formats: Serial Scene Dissection 72
  Multiple-Detector Scan Formats: Parallel Scene Dissection 73
  Staring Systems 74
  Search Systems and Range Equation 75
  Noise Equivalent Irradiance 78
  Performance Specification: Thermal-Imaging Systems 79
  MTF Definitions 80


  Optics MTF: Calculations 83
  Electronics MTF: Calculations 85
  MTF Measurement Setup and Sampling Effects 86
  MTF Measurement Techniques: PSF and LSF 87
  MTF Measurement Techniques: ESF and CTF 88
  MTF Measurement Techniques: Noiselike Targets 90
  MTF Measurement Techniques: Interferometry 92
  Noise Equivalent Temperature Difference 93
  NETD Measurement Technique 94
  Minimum Resolvable Temperature Difference 95
  MRTD: Calculation 96
  MRTD Measurement Technique 97
  MRTD Measurement: Automatic Test 98
  Johnson Criteria 99
  Infrared Applications 101

Appendix
  Equation Summary 103

Notes 112

Bibliography 113

Index 116


Glossary

A  Area
Ad  Detector area
Aenp  Area of an entrance pupil
Aexp  Area of an exit pupil
Afootprint  Footprint area
Aimg  Area of an image
Alens  Lens area
Aobj  Area of an object
Aopt  Area of an optical component
As  Source area
B  3-dB bandwidth
c  Speed of light in vacuum
Cd  Detector capacitance
CTF  Contrast transfer function
ddiff  Diameter of a diffraction-limited spot
D*  Normalized detectivity of a detector
D*BLIP  D-star under BLIP conditions
D**  Angle-normalized detectivity
Denp  Diameter of an entrance pupil
Dexp  Diameter of an exit pupil
Dimg  Image diameter
Din  Input diameter
Dlens  Lens diameter
Dout  Output diameter
Dobj  Object diameter
Dopt  Optics diameter
e  Energy-based unit subscript
Ebkg  Background irradiance
Eimg  Image irradiance
Esource  Source irradiance
ESF  Edge spread function
E  Energy of a photon
feff  Effective focal length
f  Focal length
b.f.l.  Back focal length
f.f.l.  Front focal length
f(x,y)  Object function
FB  Back focal point
FF  Front focal point


F(ξ,η)  Object spectrum
f0  Center frequency of an electrical filter
FOV  Full-angle field-of-view
FOVhalf-angle  Half-angle field-of-view
F/#  F-number
g(x,y)  Image function
G(ξ,η)  Image spectrum
G  Gain of a photoconductive detector
h(x,y)  Impulse response
H(ξ,η)  Transfer function
h  Planck's constant
H  Heat capacity
HIFOV  Horizontal instantaneous field-of-view
HFOV  Horizontal field-of-view
himg  Image height
hobj  Object height
i  Electrical current
ī  Mean current
iavg  Average electrical current
ibkg  Background rms current
idark  Dark current
ij  rms Johnson noise current
i1/f  rms 1/f-noise current
iG/R  Generation-recombination noise rms current
inoise  Noise current
ioc  Open-circuit current
ipa  Preamplifier noise rms current
irms  rms current
isc  Short-circuit current
ishot  Shot noise rms current
isig  Signal current
J  Current density
k  Boltzmann's constant
K(ξf)  Spatial-frequency dependent MRTD proportionality factor
K  Thermal conductance
L  Radiance
LSF  Line spread function


Lbkg  Background radiance
Lλ  Spectral radiance
M  Exitance
Mmeas  Measured exitance
Mobj  Exitance of an object
Mλ  Spectral exitance
MRTD  Minimum resolvable temperature difference
MTF  Modulation transfer function
MTFd  Detector MTF
M  Magnification
Mang  Angular magnification
n  Refractive index
nd  Number of detectors
ne  Number of photogenerated electrons
nlines  Number of lines
NEI  Noise-equivalent irradiance
NEP  Noise-equivalent power
NEΔf  Noise-equivalent bandwidth
OTF  Optical transfer function
Pavg  Average power
p  Object distance
PSD  Power spectral density
PSF  Point spread function
q  Image distance
R  Resistance
Rd  Detector resistance
Req  Equivalent resistance
Rin  Input resistance
RL  Load resistance
Rout  Output resistance
SNR  Signal-to-noise ratio
SR  Strehl-intensity ratio
R  Responsivity
Ri  Current responsivity
Rv  Voltage responsivity
R(λ)  Spectral responsivity
R(T)  Blackbody responsivity
t  Time


T  Temperature
TB  Brightness temperature
Tbkg  Background temperature
TC  Color temperature
Td  Detector temperature
Tload  Load temperature
Trad  Radiation temperature
Tsource  Source temperature
Ttarget  Target temperature
VIFOV  Vertical instantaneous field-of-view
VFOV  Vertical field-of-view
v̄  Mean voltage
vin  Input voltage
vj  Johnson noise rms voltage
vn  rms noise voltage
voc  Open-circuit voltage
vout  Output voltage
vsc  Short-circuit voltage
vs  Shot-noise rms voltage
vscan  Scan velocity
vsig  Signal voltage
V  Abbe number
W  W proportionality factor
α  Coefficient of absorption
β  Blur angle caused by diffraction
ε  Emissivity
Δf  Electronic frequency bandwidth
Δt  Time interval
ΔT  Temperature difference
Δλ  Wavelength interval
θ  Angle variable
θmax  Maximum angle subtense
η  Quantum efficiency
ηscan  Scan efficiency
λ  Wavelength
λcut  Cutoff wavelength
λmax  Maximum wavelength
λmax-cont  Maximum contrast wavelength
λpeak  Peak wavelength


λo  Fixed wavelength
ν  Optical frequency
σ²  Variance
σ  Standard deviation
σe  Stefan-Boltzmann constant in energy units
σp  Stefan-Boltzmann constant in photon units
ρ  Reflectance
τ  Transmittance
τatm  Atmospheric transmittance
τdwell  Dwell time
τext  External transmittance
τint  Internal transmittance
τframe  Frame time
τline  Line time
τopt  Optical transmittance
φ  Flux
φλ  Spectral flux
φabs  Absorbed flux
φbkg  Background flux
φd  Detector flux
φimg  Flux incident on an image
φinc  Incident flux
φobj  Flux radiated by an object
φref  Reflected flux
φsig  Signal flux
φtrans  Transmitted flux
ξ  Spatial frequency in x-direction
ξcutoff  Spatial cutoff frequency
η  Spatial frequency in y-direction
Ω  Solid angle
Ωd  Detector solid angle
Ωs  Source solid angle
Ωbkg  Background solid angle
Ωexp  Exit-pupil solid angle
Ωenp  Entrance-pupil solid angle
Ωimg  Image solid angle
Ωlens  Lens solid angle
Ωobj  Object solid angle


Introduction

Electromagnetic Spectrum

The electromagnetic spectrum is the distribution of electromagnetic radiation according to energy, frequency, or wavelength. Electromagnetic radiation can be described as a stream of photons, which are particles traveling in a wavelike pattern, moving at the speed of light.

Type of Radiation        Frequency Range [Hz]       Wavelength Range
Gamma rays               >3×10^20                   <1 pm
X rays                   3×10^17 – 3×10^20          1 pm – 1 nm
Ultraviolet              7.5×10^14 – 3×10^17        1 nm – 400 nm
Visible                  4×10^14 – 7.5×10^14        0.4 μm – 0.75 μm
Near-infrared            10^14 – 4×10^14            0.75 μm – 3.0 μm
Midwave infrared         5×10^13 – 10^14            3.0 μm – 6.0 μm
Longwave infrared        2×10^13 – 5×10^13          6.0 μm – 15 μm
Extreme infrared         3×10^11 – 2×10^13          15 μm – 1 mm
Micro and radio waves    <3×10^11                   >1 mm

Frequencies in the visible and infrared spectral bands are measured in the millions of megahertz, and these bands are commonly referred to by wavelength rather than frequency. Wavelength can be measured interferometrically with great accuracy, and it is related to the optical frequency by the universal equation

c = λν,

where λ is the wavelength, ν is the optical frequency, and c is the speed of light in free space (3 × 10^8 m/s).

The difference between the categories of electromagnetic radiation is the amount of energy found in their photons. The energy of a photon is inversely proportional to its wavelength, and is given by

E = hν = hc/λ,

where h is the Planck constant (6.626 × 10^-34 J·s).

Radio waves have photons with very low energies, while gamma rays are the most energetic of all. The electromagnetic spectrum is classified based on the source, detector, and materials technologies employed in each of the spectral regions.
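The relation E = hν = hc/λ can be checked numerically. The following sketch evaluates the photon energy at one visible and two infrared wavelengths; the wavelength picks are illustrative, not from the text:

```python
# Numerical check of E = h*nu = h*c/lambda. Wavelength picks are illustrative.
H = 6.626e-34  # Planck constant [J*s]
C = 3.0e8      # speed of light in free space [m/s]

def photon_energy(wavelength_m):
    """Energy [J] of one photon at the given wavelength [m]."""
    return H * C / wavelength_m

for label, lam in [("visible, 0.5 um", 0.5e-6),
                   ("MWIR, 4 um", 4.0e-6),
                   ("LWIR, 10 um", 10.0e-6)]:
    print(f"{label}: E = {photon_energy(lam):.3e} J")
```

The inverse proportionality is evident in the output: the 10-μm LWIR photon carries 20 times less energy than the 0.5-μm visible photon, which is why infrared detection demands low-noise, often cooled, detectors.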


Infrared Concepts

Infrared-imaging systems are often used to form images of targets under nighttime conditions. The target is seen because of self-radiation rather than the reflected radiation from the sun. Self-radiation is a physical property of all objects that are at temperatures above absolute zero (i.e., 0 K = −273.15°C).

In order to make this radiation visible, the infrared systemdepends on the interaction of several subsystems.

The self-radiation signature is determined by the temperature and the surface characteristics of the target. Gases in the atmosphere limit the frequencies at which this radiation is transmitted. The configuration of the optical system defines the field-of-view (FOV), the flux-collection efficiency, and the image quality. These parameters, along with the detector interface, impact the radiometric accuracy and resolution of the resulting image. The detector is a transducer that converts the optical energy into an electrical signal, and electronics amplify this signal to useful levels.

For typical terrestrial and airborne targets, Planck's equation dictates that, within the range of temperatures of 300 K to 1000 K, emission of radiation occurs primarily in the infrared spectrum. However, the background is self-luminous as well, causing terrestrial targets to compete with background clutter of similar temperature. Infrared images have much lower contrast than corresponding visual images, which have orders-of-magnitude higher reflectance and emittance differences.


Optics

Imaging Concepts

An object is a collection of independent source points, each emitting light rays into all forward directions. Because of Snell's law, rays that diverge from each object point intersect at corresponding image-plane points. The image is built up on a point-by-point basis. The power at any image location is proportional to the strength of the corresponding object point, causing a geometrical distribution of power.

The symmetry line that contains the centers of curvature of all optical surfaces is called the optical axis. Three ray-trace rules are used to find the image position with respect to the object:

1. rays entering parallel to the optical axis exit through the back focal point FB;
2. rays entering the lens through the front focal point FF exit parallel to the optical axis; and
3. rays entering through the center of the lens (chief rays) do not change direction.

To determine the image-plane location and size, a thin-lens and small-angle or paraxial approximation is used, which linearizes the ray-trace equations that determine the ray paths through the optical system (i.e., sin θ ≈ tan θ ≈ θ).

Gaussian lens equation: 1/f = 1/p + 1/q

Newtonian lens equation: xobj ximg = f²

A thin lens has a thickness that is considered negligible in comparison with its focal length.


Magnification Factors

As the object gets farther away, the image distance gets closer to f (p → ∞, q → f); and as the object is placed closer to the front focus of the lens, the image gets farther away (p → f, q → ∞).

The lateral or transverse magnification of an optical system is given by

M = −q/p = himg/hobj.

By using the Gaussian lens equation, it can be verified that the minimum distance between a real object and its corresponding image is 4f (i.e., p = q = 2f, in which case M = −1).
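The Gaussian lens equation and the lateral magnification can be sketched in a few lines, using the text's sign convention (p and q positive for a real object and real image); the numeric values are illustrative:

```python
# Sketch of the Gaussian lens equation 1/f = 1/p + 1/q and the lateral
# magnification M = -q/p (p, q positive for a real object and image).
def image_distance(f, p):
    """Solve 1/f = 1/p + 1/q for the image distance q."""
    return 1.0 / (1.0 / f - 1.0 / p)

def magnification(p, q):
    """Lateral magnification M = -q/p."""
    return -q / p

f = 100.0                      # focal length [mm], illustrative
p = 2.0 * f                    # object at 2f: the minimum-separation case
q = image_distance(f, p)
print(q, magnification(p, q))  # q = 2f and M = -1, so p + q = 4f
```

Placing the object at 2f reproduces the minimum-separation case quoted above: the image also falls at 2f, the total object-to-image distance is 4f, and M = −1.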

When an off-axis source is located at an infinite distance from the lens, an angle θ [rad] exists between the direction of the collimated rays and the optical axis. The rays focus at a distance θf away from the optical axis.

Squaring the lateral magnification yields the area or longitudinal magnification,

M² = Aimg/Aobj = (−q/p)²,

which is used extensively in radiometric calculations.


Thick Lenses

When the thickness of a lens cannot be considered negligible, the lens is treated as a thick lens. FF and FB are the front and back focal points; when these focal points are measured from the lens vertices they define the front focal length (f.f.l.) and the back focal length (b.f.l.) of the optical element. A diverging ray from FF emerges parallel to the optical axis, while a parallel incident ray is brought to FB. In each case, the incident and emerged rays are extended to the point of intersection between the surfaces. Transverse planes through these intersections are termed the primary and secondary principal planes, and can lie either inside or outside the lens. Points where these planes intersect the optical axis are known as the first and second principal points Po and Pi. Extending incoming and outgoing chief rays until they cross the optical axis locates the nodal points No and Ni. These six points (two focal, two principal, and two nodal) are named the cardinal points of the optical system.

The effective focal lengths feff,o and feff,i are measured from the foci to their respective principal points, and are identical if the medium on each side has the same refractive index:

1/feff = (n − 1) [1/R1 − 1/R2 + (n − 1)t/(n R1 R2)].

A rule of thumb for ordinary glass lenses immersed in air is that the separation between the principal points roughly equals one third of the lens thickness (t).
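The thick-lens formula can be sketched directly. The refractive index and radii below are illustrative assumptions (roughly a germanium lens in the LWIR), not values from the text:

```python
# Sketch of the thick-lens equation quoted above:
#   1/f_eff = (n - 1) [ 1/R1 - 1/R2 + (n - 1) t / (n R1 R2) ]
# Sign convention: R > 0 when the center of curvature lies to the right
# of the surface. The index and radii are illustrative assumptions.
def effective_focal_length(n, r1, r2, t):
    """Effective focal length of a thick lens in air."""
    inv_f = (n - 1.0) * (1.0 / r1 - 1.0 / r2 + (n - 1.0) * t / (n * r1 * r2))
    return 1.0 / inv_f

# Equiconvex lens, n = 4.0 (roughly germanium in the LWIR), 10-mm thick:
print(effective_focal_length(4.0, 100.0, -100.0, 10.0))
# Letting t -> 0 recovers the thin-lens (lensmaker's) result:
print(effective_focal_length(4.0, 100.0, -100.0, 0.0))
```

Note how strongly the high index shortens the focal length: the same radii in ordinary glass (n ≈ 1.5) would give a focal length several times longer, which is one reason high-index materials are attractive for infrared optics.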


Stop and Pupils

Aperture stop (AS): the physical opening that limits the angle over which the lens accepts rays from the axial object point.

Entrance pupil (Denp): the image of the AS as seen from the axial point on the object plane through the optical elements preceding the stop. If there are no elements between the object and the AS, the latter operates as the entrance pupil.

Exit pupil (Dexp): the image of the AS as seen from the axial point on the image plane through those optical elements that follow the stop. If there are no elements between the AS and the image, the former serves as the exit pupil.

Axial ray: the ray that starts at the axial object point and ends on the axial image point.

Marginal ray: a special axial ray that starts at the axial object point, goes through the edge of the entrance pupil, and ends on the axial image point. The marginal ray is used to define the F/# and the numerical aperture.

Chief ray: a ray that starts at the edge of the object, passes through the center of the entrance pupil, and defines the height of the image.

Telecentric stop: an aperture stop that is located at a focal point of the optical system. It is used to reduce the magnification error in the size of the projected image for a small departure from best focus.

Telecentric system: a system where the entrance or exit pupil is located at infinity.

For any point in the object, the amount of radiation accepted by and emitted from the optical system is determined by the sizes and locations of the pupils. The location of the AS is determined by the stop, or image of a stop, that subtends the smallest angle as seen from the axial object point. An analogous procedure can be carried out from the image plane.


F-number and Numerical Aperture

The F-number (F/#) is the parameter used to describe the ratio between the feff of an optical system and the diameter of the entrance pupil. It describes the image-space cone for an object at infinity:

F/# ≡ feff/Denp.

Although the F/# also exists in image space as q/Denp for finite-conjugate systems, the numerical aperture (NA) is usually the parameter used in these cases.

Since the refractive index n of air is approximately 1 (n = 1), the numerical aperture describes the axial cone of light in terms of the marginal ray angle α, and is defined as NA ≡ sin α.

The NA and the F/# are related as follows:

NA = sin[tan⁻¹(1/(2 F/#))]  or  F/# = 1/[2 tan(sin⁻¹ NA)].

Assuming the paraxial approximation, sin α ≈ α, yielding

F/# ≈ 1/(2 NA).

Example: If an F/3 system has its aperture made larger or smaller by 50% in diameter, what are the new F/#s?

If D ↑ ⇒ F/# ↓: F/#new = feff/(1.5D) = (2/3) F/# = 2, resulting in faster optics.

If D ↓ ⇒ F/# ↑: F/#new = feff/(0.5D) = 2 F/# = 6, resulting in slower optics.
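The exact NA ↔ F/# relations and their paraxial shortcut can be sketched as follows; the sample F-numbers are illustrative:

```python
import math

# Sketch of the exact NA <-> F/# relations quoted above (system in air),
# plus the paraxial shortcut F/# ~ 1/(2*NA). Sample F-numbers are illustrative.
def na_from_fnum(fnum):
    """NA = sin(arctan(1/(2 F/#)))."""
    return math.sin(math.atan(1.0 / (2.0 * fnum)))

def fnum_from_na(na):
    """F/# = 1/(2 tan(arcsin(NA)))."""
    return 1.0 / (2.0 * math.tan(math.asin(na)))

for f in (1.0, 3.0, 7.0):
    na = na_from_fnum(f)
    print(f"F/{f:g}: NA = {na:.4f}, paraxial 1/(2NA) = {1.0 / (2.0 * na):.4f}")
```

Note how the paraxial value 1/(2NA) drifts from the true F/# for the fast F/1 cone but agrees closely at F/7, which is exactly where the small-angle assumption does and does not hold.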


Field-of-View

The field-of-view (FOV) is the angular coverage of an optical system. It can be defined either in full or half angles. Using the half-angle definition,

FOVhalf-angle = θ1/2 = |tan⁻¹(hobj/p)| = |tan⁻¹(himg/q)|.

The element that limits the size of the object to be imaged is called the field stop, which determines the system's FOV. In an infrared camera, it is the edge of the detector array itself that bounds the image plane and serves as the field stop.

For an object at infinity, the full-angle FOV is determined by the ratio between the detector size and the system's focal length:

FOV = θ = d/f.

The detector has a footprint: the image of the detector projected onto the object plane. It defines the area of the object that contributes flux onto the detector. Given the focal length and the size of the detector, the resolution element at the object plane can be determined.
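A short sketch of the FOV and footprint relations for an object at infinity; the detector size, focal length, and range below are assumed for illustration:

```python
# Sketch of FOV = d/f (object at infinity) and the detector footprint
# obtained by projecting the detector onto the object plane. The pixel
# size, focal length, and range are illustrative assumptions.
def fov_rad(detector_size, focal_length):
    """Full angle [rad] subtended by a detector of size d at focal length f."""
    return detector_size / focal_length

def footprint(detector_size, focal_length, obj_range):
    """Linear size of the detector footprint at distance obj_range."""
    return fov_rad(detector_size, focal_length) * obj_range

d = 30e-6    # 30-um detector element
f = 0.10     # 100-mm focal length
R = 1000.0   # 1-km range
print(f"angle = {fov_rad(d, f):.1e} rad")
print(f"footprint at {R:.0f} m = {footprint(d, f, R):.2f} m")
```

Doubling the focal length halves both the angle and the footprint, which is the focal-length/magnification trade described next.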

A smaller FOV is attained by increasing the focal length, which increases the magnification; a shorter focal length widens the FOV but decreases the magnification.

The F/# and FOV are inversely proportional, and both affect flux transfer and optical aberrations. There is a tradeoff between the amount of light that reaches the detector and the image quality. A system with a small F/# and large FOV has high flux-transfer efficiency, but the image quality worsens. A large F/# and small FOV restricts the system's flux, but the quality of the image is improved.


Combination of Lenses

Consider two lenses separated by a distance d.

The feff of this optical system is given by the expression

1/feff = 1/f1 + 1/f2 − d/(f1 f2),

where f1 and f2 are the focal lengths of the objective and enlarging lenses, respectively. The back focal length is the distance from the last surface of the enlarging lens to the focal plane of the system, and is given by

b.f.l. = f2(d − f1)/[d − (f1 + f2)].

When the two lenses are placed in contact (i.e., d → 0), the combination acts as a single lens, yielding

1/feff = 1/f1 + 1/f2.

A special configuration of the two-lens combination system is the so-called "relay lens pair." In this case, a source is placed at the front focal point of the optical system; the objective lens projects this source to infinity, which is then imaged by the enlarging lens.

M = himg/hobj = −f2/f1

The separation of the lenses affects the location of the principal planes and thereby the effective focal length of the system. Furthermore, as the lenses move apart, the detector lens must be increased in size to avoid vignetting.


Afocal Systems and Refractive Telescopes

Afocal systems do not have focal lengths. Telescopes are afocal systems with their object and image located at infinity. Their primary function is to enlarge the apparent size of a distant object. There are three main types of refractive telescopes, as defined below.

Astronomical (or Keplerian) telescope: comprised of two convergent lenses spaced by the sum of their focal lengths. The objective is usually an achromatic doublet forming a real, inverted, and reverted image at its focal point; the eye lens then reimages the object at infinity, where it may be erected by the use of an auxiliary lens. The aperture stop and the entrance pupil are located at the objective to minimize its size and cost.

Galilean telescope: comprised of a positive objective and a negative eye lens, where the spacing is the difference between the absolute values of the focal lengths since f2 is negative. There is no real internal image, and a reticle or crosshair cannot be introduced into the optical system. The final image is erect. The aperture stop is usually the pupil of the viewer's eye, which is also the exit pupil.

Terrestrial (or erecting) telescope: an astronomicaltelescope with an erecting system inserted between the eyelens and objective so that the final image is erected.

The angular magnification of these afocal systems is given by the ratio between the angle subtended by the image and the angle subtended by the object:

Mangular = θoutput/θinput = |f1/f2|.


Cold-Stop Efficiency and Field Stop

To reduce thermal noise, infrared photon detectors are cooled to cryogenic temperatures. The detector is housed in a vacuum bottle called a Dewar. An aperture stop adjacent to the detector plane prevents stray radiation from reaching the detector. This cold shield is located inside the evacuated Dewar, and limits the angle over which the detector receives radiation.

The cold-stop efficiency is the percentage of the total scene source power reaching the detector. A perfect cold stop is defined as one that limits the reception of background radiation to the cone established by the F/# (i.e., 100% cold-stop efficiency). This is achieved when the cold shield is located at the exit pupil of the infrared optical system.

The FOV of an optical system may be increased without increasing the diameter of the detector lens by placing a field lens at the internal image of the system. This lens redirects back toward the optical axis the ray bundles that would otherwise miss the detector. The insertion of this lens has no effect on the system's magnification. This arrangement is good for flux-collection systems (i.e., search systems), but not for imaging systems, since the object is not imaged onto the detector but rather into the field lens.

If the field lens is moved to the detector plane, it becomes an immersion lens, which increases the numerical aperture by a factor of the index of refraction of the lens material, without modifying the characteristics of the system. This configuration allows the object to be imaged onto the detector array.


Image Quality

The assumption thus far has been that all points in object space are mapped to points in image space. However, more detailed information, such as the size of the image and its energy distribution, is required to properly design an optical system. Due to the effects of diffraction and optical aberrations, point sources in the object are seen in the image as blur spots, producing a blurred image.

Diffraction is a consequence of the wave nature of radiant energy. It is a physical limitation of the optical system over which there is no control. On the other hand, optical aberrations are image defects that arise from deviation from the paraxial approximation; therefore, they can be controlled through proper design.

Even in the absence of optical aberrations, diffraction phenomena still cause a point to be imaged as a blur circle. Such an optical system is said to be diffraction limited, and it represents the best in optical performance.

The diffraction pattern of a point source appears as a bright central disk surrounded by several alternating bright and dark rings. The central disk is called the Airy disk and contains 84% of the total flux.

The linear diameter of the diffraction-limited blur spot is given by

ddiff = 2.44 λ F/#.


The effects of diffraction may also be expressed in angular terms. The full-angle blur is the diameter of the diffraction spot divided by feff, yielding

β = 2.44 λ/D.

It can also be defined as the angular subtense of the minimum-resolution feature in object space viewed from the entrance pupil.

The size of the blur spot depends on the F/# and the spectral band in which the imaging system operates.

Low-F/# systems have the smallest diffraction-limited spot sizes, and thus the best potential performance. However, aberration effects generally become worse as the F/# decreases; therefore, low-F/# systems are harder to correct to diffraction-limited performance. Alternatively, longer-wavelength systems have larger diffraction spots, and are easier to correct to a diffraction-limited level of performance.

For example, an F/1 diffraction-limited system operating at 10 μm forms a spot diameter of 24.4 μm. The same system operating at F/7 would form a spot diameter of 170.8 μm. The same F/1 system operating in the visible spectrum at 0.5 μm forms a spot diameter of 1.22 μm, while the F/7 system produces a diffraction spot of 8.5 μm in diameter.
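The worked numbers above follow directly from ddiff = 2.44 λ F/#, as this short check shows:

```python
# Check of the worked examples above: d_diff = 2.44 * lambda * F/#.
def diffraction_spot_um(wavelength_um, fnum):
    """Diffraction-limited blur-spot diameter [um]."""
    return 2.44 * wavelength_um * fnum

for lam, f in [(10.0, 1), (10.0, 7), (0.5, 1), (0.5, 7)]:
    print(f"lambda = {lam} um, F/{f}: d_diff = "
          f"{diffraction_spot_um(lam, f):.2f} um")
```

The four cases reproduce the 24.4, 170.8, 1.22, and 8.54 μm spot diameters quoted in the paragraph above.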

Optical aberrations depend on the refractive and dispersive effects of the optical materials and on the geometrical arrangement of the optical surfaces. Inherent aberrations in the performance of a system with spherical surfaces include

• spherical aberration
• coma
• astigmatism
• field curvature
• distortion
• axial and lateral chromatic aberration


Image Anomalies in Infrared Systems

There are three common image anomalies associated with IR systems.

Shading: the gradual falloff in the scene radiance toward the edges of the detector, caused by the cos⁴θ dependence of the effective exitance of a uniform source. It is controlled by optical design techniques that keep the angle of the chief ray small in image space.

Scan noise: the amount of self-radiation reaching the detector from the room-temperature internal housing and optical elements as a function of scan position. Scan noise can also be caused by vignetting in an image-side scanner (e.g., a rotating polygon): there is a final displacement d from the center of the facet in the direction normal to the initial position, so as the scan moves in and out, the exit beam wanders left and right, causing vignetting.

The narcissus effect: the result of a cold reflection of the detector array into itself; it appears as a dark spot at the center of the scan. It is controlled by using appropriate antireflective coatings on the optical elements, and by optical design techniques that ensure the cold-reflected image is out of focus at the detector plane. Its magnitude is often expressed as a multiple of the system's noise level.


Infrared Materials

The choice of infrared materials is dictated by the application. The most important material parameters to consider are the transmission range and the material dispersion. Other properties to be considered are the absorption coefficient, reflection loss, rupture modulus, thermal expansion, thermal conductivity, and water-erosion resistance.

The refractive index n is defined as

n = c/υ,

where c is the speed of light in free space (3 × 10¹⁰ cm/sec) and υ is the speed of light in the medium.

Whenever a ray crosses a boundary between two materials of different refractive indexes, some power is transmitted, some is reflected, and some is absorbed (i.e., conservation of energy):

φinc = φtrans + φref + φabs;  φ ≡ power [Watts].

The direction of the transmitted light is given by Snell's law, and the direction of the reflected beam is determined by the law of reflection.

The distribution of power from a plane-parallel plate at normal incidence is determined by the Fresnel equations:

ρ = [(n2 − n1)/(n1 + n2)]²  and  τ = 4n1n2/(n1 + n2)²,

where ρ = φref/φinc and τ = φtrans/φinc.

Absorption is described by the attenuation coefficient α [1/cm], which takes power out of the beam, raising the temperature of the material:

φ(z) = φinc e−αz,

where z is the propagation distance within the material.


The internal transmittance is defined as the transmittance through a distance in the medium, excluding the Fresnel reflection losses at the boundaries:

τinternal = φ(z)/φincident = e−αz.

Conversely, the external transmittance is the transmittance through a distance in the medium, including the Fresnel losses at the boundaries:

τexternal = τ²e−αz = τ²τinternal.

When examining material specifications from a vendor, it is necessary to take into account the distinction between internal and external transmittances.
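The Fresnel and external-transmittance relations above can be sketched in a few lines; the germanium index n ≈ 4 used as the example is an illustrative assumption, and the function names are not from the text:

```python
import math

# Single-surface Fresnel reflectance and transmittance at normal incidence:
# rho = ((n2 - n1)/(n1 + n2))^2,  tau = 4*n1*n2/(n1 + n2)^2.
def fresnel(n1, n2):
    rho = ((n2 - n1) / (n1 + n2)) ** 2
    tau = 4.0 * n1 * n2 / (n1 + n2) ** 2
    return rho, tau

# External transmittance of a plate: two Fresnel surfaces plus bulk
# absorption, tau_external = tau^2 * exp(-alpha * z).
def external_transmittance(n, alpha_per_cm, z_cm, n_air=1.0):
    _, tau = fresnel(n_air, n)
    return tau ** 2 * math.exp(-alpha_per_cm * z_cm)

rho, tau = fresnel(1.0, 4.0)   # air to germanium (n ~ 4): rho ~ 0.36, tau = 0.64
print(rho, tau)
print(external_transmittance(4.0, 0.0, 1.0))  # lossless Ge plate: ~0.41
```

The high index of germanium makes the uncoated Fresnel loss dramatic, which is why IR optics rely so heavily on antireflection coatings.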

Mirrors are characterized by their surface reflectivity and polish, as well as by the properties of the blanks on which these polishes are established. Plots of reflectance versus wavelength for commonly used metallic coating materials are shown below. Aluminum is widely used because it offers an average reflectance of 96% throughout the visible, near-infrared, and near-ultraviolet regions of the spectrum. Silver exhibits higher reflectance (98%) through most of the visible and IR spectrum, but it oxidizes faster, reducing its reflectance and causing light to scatter. Bare gold, on the other hand, combines good tarnish resistance with consistently high reflectance (99%) through the near-, middle-, and far-infrared regions. All of these metals exhibit higher reflectance at long wavelengths.

Data from Wolfe & Zissis, The Infrared Handbook (1990).


Metallic reflective coatings are delicate and require care during cleaning. Overcoating a metallic coating with a single, hard dielectric layer of half-wave optical thickness improves its abrasion and tarnish resistance; depending on the dielectric used, such coatings are referred to as durable, protected, or hard coated. The reflectance of a metallic coating can also be increased over a desired spectral range, or for different angles of incidence, by overcoating it with a quarter-wave stack of multilayer dielectric film; such a coating is said to be enhanced.

The most versatile materials commonly used for systems operating in the MWIR and LWIR spectral regions are sapphire (Al2O3), zinc sulfide (ZnS), zinc selenide (ZnSe), silicon (Si), and germanium (Ge).

Sapphire is an extremely hard material, useful for visible, NIR, and IR applications through 5 μm. It is practical for high-temperature and high-pressure applications.

ZnS comes in two grades. The regular grade transmits in the 1–12 μm band with reasonable hardness and good strength. The other grade, a water-clear material called CLEARTRAN, transmits in the 0.4–12 μm spectral band.


ZnSe has low absorbance and is an excellent material used in many laser and imaging systems. It transmits well from 0.6–18 μm.

Si is a good substrate for high-power lasers due to its high thermal conductivity. It is useful in the 3–5 μm band, and in the 48–100 μm band (astronomical applications).

Ge has good thermal conductivity, excellent surface hardness, and good strength. It is used for IR instruments operating in the 2–14 μm spectral band.

Other popular infrared materials are calcium fluoride (CaF2), magnesium fluoride (MgF2), barium fluoride (BaF2), gallium arsenide (GaAs), thallium bromoiodide (KRS-5), and cesium iodide (CsI).


Material Dispersion

Dispersion is the variation in the index of refraction with wavelength [dn/dλ], and it is an important material property when considering large-spectral-bandwidth optical systems. Dispersion is greater at short wavelengths and decreases at longer wavelengths; however, for most materials it increases again when approaching the long-wavelength infrared absorption band.

For historical reasons, the dispersion is often quoted as a unitless ratio called the reciprocal relative dispersion or Abbe number, defined by

V = (nmean − 1)/Δn = (nmean − 1)/(nfinal − ninitial),

where ninitial and nfinal are the index-of-refraction values at the ends of the spectral band of interest, and nmean is the value at the center of the band. Δn is basically the measured value of the dispersion, and nmean − 1 specifies the refractive power of the material. The smaller the Abbe number, the larger the dispersion. For example, the V number for a germanium lens in the 8- to 12-μm spectral band is

V = [n(10 μm) − 1]/[n(12 μm) − n(8 μm)] = (4.0038 − 1)/(4.0053 − 4.0023) = 1001.27.
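The germanium example can be reproduced numerically (the helper name is illustrative; the index values are those quoted in the text for 8, 10, and 12 μm):

```python
# Abbe number over an arbitrary band: V = (n_mean - 1) / (n_final - n_initial).
def abbe_number(n_initial, n_mean, n_final):
    return (n_mean - 1.0) / (n_final - n_initial)

V_ge = abbe_number(4.0023, 4.0038, 4.0053)   # germanium, 8-12 um band
print(round(V_ge, 2))   # ~1001.27: very low dispersion in the LWIR
```

Such a large V number is why single-element germanium optics work well over the entire 8–12 μm band.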

Another useful definition is the relative partial dispersion, given by

P = (nmean − ninitial)/(nfinal − ninitial),

which is a measure of the rate of change of the slope of the index as a function of wavelength (i.e., d²n/dλ²).


[Dispersion curves of common infrared materials; figure adapted from Wolfe & Zissis, The Infrared Handbook.]


Atmospheric Transmittance

The absorption and emission of radiation in the atmosphere are critical parameters that must be considered when developing infrared systems. Because of the sun's radiation, aerosol and molecular scattering are particularly important background sources in the visible spectrum. In the IR, however, this effect is minimal, since the wavelengths are much longer: according to the Rayleigh principle, the scattered flux density is inversely proportional to the fourth power of the driving wavelength.

The principal infrared-absorbing constituents of the atmosphere are CO2, H2O, and O3. High absorption occurs in different parts of the infrared spectrum due to the molecular vibrations of these molecules. For example, the NIR is greatly affected by water vapor, as are the short- and long-wavelength sides of the large LWIR window. The MWIR has two dips due to carbon dioxide and ozone.

There are three main atmospheric windows in the infrared: the NIR, 3 to 5 μm, and 8 to 14 μm. System technologies have evolved independently, optimizing operation in each of these spectral bands.

The atmosphere is problematic for high-energy laser systems. Small temperature variations cause random changes in wind velocity, or turbulent motion. These changes in temperature give rise to small changes in the index of refraction of air, which act like little lenses that cause intensity variations. These fluctuations distort the laser-beam wavefront, producing unwanted effects such as beam wander, beam spreading, and scintillation.


Solid Angle

The solid angle Ω in 3D space measures a range of pointing directions from a point to a surface. Assuming the paraxial approximation, it is defined as the element of area of a sphere divided by the square of the radius of the sphere. It is dimensionless and measured in square radians, or steradians [ster].

For example, the area of a full sphere is given by 4πr²; therefore, its solid angle is 4π. A hemisphere subtends half as many steradians as a sphere (i.e., Ω = 2π).

When large angles are involved, a more exact definition is required. Using spherical coordinates, the solid-angular subtense can be expressed as a function of the planar angle θ as

dΩ = da/r² = sinθ dθ dϕ.

Integrating over the acceptance cone,

Ω = ∫(0→2π) dϕ ∫(0→θmax) sinθ dθ = 2π(1 − cosθmax) = 4π sin²(θmax/2)

is obtained.

If the disc is tilted at a selected angle γ, its differential solid-angular subtense is decreased by a factor of cosγ.
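The cone result Ω = 2π(1 − cosθmax) can be checked numerically; a minimal sketch (function name illustrative):

```python
import math

# Exact solid angle of a cone of half-angle theta_max [steradians]:
# Omega = 2*pi*(1 - cos(theta_max)) = 4*pi*sin^2(theta_max/2)
def cone_solid_angle(theta_max_rad):
    return 2.0 * math.pi * (1.0 - math.cos(theta_max_rad))

print(cone_solid_angle(math.pi / 2))        # hemisphere: 2*pi
print(cone_solid_angle(math.pi))            # full sphere: 4*pi
print(cone_solid_angle(math.radians(1.0)))  # small cone, ~ pi*theta^2
```

For small θmax the exact expression reduces to the familiar paraxial form Ω ≈ πθ²max.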


Radiometry

Radiometry is the quantitative understanding of flux transfer through an optical system. Given the radiation from a thermal source transmitted through the optics of an infrared system, the fundamental question is how much of the source power is collected by the infrared sensor. Radiometric calculations predict the system's signal-to-noise ratio (SNR).

Understanding the radiometric terms and their units is the key to performing radiometric calculations.

Symbol   Radiometric Term        Units
Qe       Radiant energy          Joule
φe       Radiant power or flux   Watt
Ie       Radiant intensity       Watt/ster
Me       Radiant exitance        Watt/cm²
Ee       Irradiance              Watt/cm²
Le       Radiance                Watt/cm²·ster
Qp       Photon energy           Photon or quantum
φp       Photon flux             Photon/sec
Ip       Photon intensity        Photon/sec·ster
Mp       Photon exitance         Photon/cm²·sec
Ep       Photon irradiance       Photon/cm²·sec
Lp       Photon radiance         Photon/sec·cm²·ster

Subscript e = energy-derived units; subscript p = photon-rate quantities.

Conversion between the two sets of units is done with the formula that determines the amount of energy carried per photon:

E = hc/λ ⇒ φe [Watt] = φp [Photon/sec] · E,

where h is Planck's constant, c is the speed of light, and λ is the wavelength. Photon-derived units are useful when considering an infrared sensor that responds directly to photon events (e.g., photovoltaic), rather than to thermal energy (e.g., microbolometer).

The energy carried per photon is inversely proportional to the wavelength; therefore, a short-wavelength photon carries more energy than a long-wavelength photon. The conversion can also be interpreted as how many photons per second it takes to produce 1 W.
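A quick sketch of this unit conversion (constants rounded; the function name is illustrative):

```python
# Energy per photon E = h*c/lambda, so the photon rate equivalent to 1 W
# of radiant power is lambda/(h*c) photons per second.
h = 6.626e-34   # Planck's constant [J*s]
c = 2.998e8     # speed of light [m/s]

def photons_per_sec_per_watt(wavelength_m):
    return wavelength_m / (h * c)

print(photons_per_sec_per_watt(10e-6))    # ~5e19 photons/sec at 10 um
print(photons_per_sec_per_watt(0.5e-6))   # 20x fewer at 0.5 um
```

The linear dependence on λ means a 10 μm LWIR photon stream must be twenty times denser than a 0.5 μm visible stream to carry the same power.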


Radiometric Terms

Flux: a quantity propagated or spatially distributed according to the laws of geometrical optics.

Both irradiance and exitance have units of spatial power density; however, the terms have different interpretations. The exitance is the power per unit area leaving a surface, thus describing a self-luminous source. It is defined as the ratio of the differential flux to the source area from which it is radiating, as the area is reduced to a point:

M = ∂φ/∂As,

where the total flux radiated into a hemisphere is given by

φ = ∫ M ∂As.

Equivalently, irradiance is a measure of the total incident flux per unit area on a passive receiver surface (e.g., a sensor). It is defined as the ratio of power to the area upon which it is incident, as the area is reduced to a specific position:

E = ∂φ/∂Ad ⇒ φ = ∫ E ∂Ad.

The intensity is the radiant power per unit solid angle, as the solid angle is reduced to a specific direction, and is used to characterize the amount of flux radiated from a point source that is collected by the entrance pupil. The intensity varies as a function of the view angle, and can be written as I = I0 cosθ, where I0 is the intensity in the direction normal to the surface.

Both the flux and the irradiance follow a one-over-r-squared falloff:

φ = I·Ω = I·Aenp/r²;  E = φ/Aenp = I/r².


Radiance is the most general term to describe source flux, because it includes both positional and directional characterization. It is used to characterize extended sources, that is, sources that have appreciable area compared to the square of the viewing distance. The visual equivalent of radiance is the term "brightness."

The radiance is defined, for a particular ray direction, as the radiant power per unit projected source area (perpendicular to the ray) per unit solid angle:

L ≡ ∂²φ/(∂As cosθs ∂Ωd) ⇒ ∂²φ = L ∂As cosθs ∂Ωd,

which is the fundamental equation of radiation transfer. The term ∂²φ is the power radiated into the cone, and it is incremental with respect to both the area of the source and the solid angle of the receiver.

For small but finite source-area and detector-solid-angle quantities,

φ ≅ L As cosθs Ωd.

A Lambertian radiator emits radiance that is independent of angle (i.e., the radiance is isotropic and equally uniform in every direction within the hemisphere). The transfer equation can be applied to a Lambertian emitter to obtain the relationship between radiance and exitance:

M = ∂φ/∂As = ∫ L cosθs ∂Ωd = ∫(0→2π) dϕ ∫(0→π/2) L cosθs sinθs dθs = πL.

Similarly, the intensity can be obtained by integrating the fundamental equation with respect to the source area:

I = ∂φ/∂Ωd = ∫ L cosθs dAs = L As cosθs.


Flux Transfer

The fundamental equation of radiation transfer states that an element of power is given by the radiance times the product of two projected areas, divided by the square of the distance between them. Assuming normal angles of incidence (i.e., θs = θd = 0),

φ = L As Ωd = L As Ad/r² = L Ωs Ad.

Two equivalent expressions in terms of the area-solid-angle product are obtained by grouping either the source or the detector area with r². This relationship, defined as the so-called AΩ product, is completely symmetrical, and can be used to calculate the power in either direction of the net power flow. It is also known as the optical invariant or throughput. In the case where the detector is tilted (θd ≠ 0), the flux decreases by the cosine of the angle:

φ = L As Ad cosθd/r².

Consider the case for θs ≠ 0 but θd = 0; the flux decreases in proportion to cos³θs:

φ = L As cos³θs Ad/r².

The last case, θd = θs ≡ θ ≠ 0, is the most realistic situation, leading to the so-called "cosine-to-the-fourth law":

φ = L As cos⁴θ Ad/r².
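The flux-transfer cases above reduce to one expression with the common tilt angle as a parameter; a minimal sketch assuming equal source and detector tilts (function name and example values are illustrative):

```python
import math

# Flux transfer between two small areas separated by r, for the case
# theta_s = theta_d = theta ("cosine-to-the-fourth law"):
# phi = L * As * Ad * cos(theta)^4 / r^2.
def transferred_flux(L, As_cm2, Ad_cm2, r_cm, theta_rad=0.0):
    return L * As_cm2 * Ad_cm2 * math.cos(theta_rad) ** 4 / r_cm ** 2

on_axis = transferred_flux(1e-3, 1.0, 1.0, 100.0)                    # theta = 0
off_axis = transferred_flux(1e-3, 1.0, 1.0, 100.0, math.radians(30))
print(on_axis, off_axis / on_axis)   # ratio = cos^4(30 deg) = 0.5625
```

Even a modest 30-deg field angle costs nearly half the transferred flux, which is the root cause of the shading anomaly described earlier.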


Flux Transfer for Image-Forming Systems

For purposes of simplification, the cosine projections are dropped (i.e., the paraxial approximation), in which case the flux transfer is simply described in terms of the AΩ product.

Recalling the area or longitudinal magnification equation, the total flux collected by the optical system may be calculated by any one of the following flux-transfer equations:

φ = Ls As Ωlens from source = Ls As Alens/p²,
φ = Ls Ai Ωlens from image = Ls Ai Alens/q²,
φ = Ls Alens Ωs = Ls Alens As/p²,
φ = Ls Alens Ωi = Ls Alens Ai/q².

In this case, the lens aperture acts as the intermediate receiver. In more complex optical systems, the entrance pupil is the intermediate receiver.


Source Configurations

The source may be idealized as either a point source or a uniform extended-area source. A point source is a source that cannot be resolved by the optical system; it is smaller than the projection of the resolution spot at the object plane. An extended source, on the other hand, is one that has appreciable area compared to the detector's footprint.

Radiance is the most appropriate quantity for describing the radiant flux from an extended-area source, while intensity is the quantity that must be used to characterize the radiation originating from a point source.

The power collected by the lens is reformatted to form an image of the source. The image irradiance for a distant extended-area source can be calculated directly using a large Lambertian disc, which can be the actual extended source or an intermediate source such as a lens.

The extended source fills the FOV, and the solid angle is bounded by the marginal rays and limited by the aperture stop. From the Lambertian disc, the following relationships are derived:

rdisc = z tanθ ⇒ drdisc = d(z tanθ) = z sec²θ dθ = (z/cos²θ) dθ,

As = πr²disc ⇒ dAs = 2πrdisc drdisc = 2πz² (tanθ/cos²θ) dθ.


The transferred flux is obtained by integrating the fundamental equation of radiation transfer over the source and detector areas:

φd = L ∫∫ ∂As cosθs ∂Ωd = L ∫∫ ∂As cosθs ∂Ad cosθd/r²,

φd = L ∫∫ 2πz² (tanθ/cos²θ) cos²θ (cos²θ/z²) dθ ∂Ad,

φd = 2πL Ad ∫(0→θmax) sinθ cosθ dθ = πL Ad sin²θmax.

The irradiance on a detector from an extended-area source is then obtained by dividing the transferred flux by the area of the detector:

Eextended source = φ/Ad = πL sin²θmax = πL/[4(F/#)² + 1].

For an extended-area source, the image irradiance depends only on the source radiance and the F/# of the optical system.
-----------------------------------------------------------------------
In the case of a point source, the collected power is defined by the intensity times the solid angle of the optics:

φ = I·Ωopt = I·Aopt/p².

If the optics is diffraction limited, 84% of the transferred flux is concentrated into the image spot; therefore, the average irradiance of a point source at the detector plane is given by

Epoint source = 0.84 φ/Ad = 0.84 I Ωopt/[(π/4)d²diff] = 0.84 I Ωopt/{(π/4)[2.44λ(F/#)]²}.
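Both irradiance results can be sketched in a few lines (cm-based units matching the text; the function names and example values are illustrative):

```python
import math

# Image-plane irradiance from a distant extended-area source:
# E = pi*L / (4*(F/#)^2 + 1)  -- depends only on radiance and F/#.
def extended_source_irradiance(L_w_cm2_sr, f_number):
    return math.pi * L_w_cm2_sr / (4.0 * f_number ** 2 + 1.0)

# Average irradiance from a point source of intensity I [W/sr], with 84% of
# the collected flux landing in an Airy disk of diameter 2.44*lambda*F/#.
def point_source_irradiance(I_w_sr, omega_opt_sr, wavelength_cm, f_number):
    d_diff = 2.44 * wavelength_cm * f_number          # blur diameter [cm]
    spot_area = math.pi / 4.0 * d_diff ** 2           # Airy-disk area [cm^2]
    return 0.84 * I_w_sr * omega_opt_sr / spot_area

print(extended_source_irradiance(1e-3, 2.0))          # F/2, L = 1e-3 W/cm^2/sr
print(point_source_irradiance(1.0, 1e-6, 1e-3, 2.0))  # 10 um wavelength, F/2
```

Note the contrast: the extended-source result is independent of range, whereas the point-source result inherits the 1/r² falloff through Ωopt.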


Blackbody Radiators

Empirically, it is found that solid bodies heated to incandescent temperatures emit radiation primarily in the infrared portion of the spectrum, and these incandescent sources emit their radiation in a continuous spectrum rather than at discrete spectral lines.

To describe radiation whose finite total power is distributed continuously over wavelength, spectral radiometric quantities with units per micron of wavelength interval are used. The spectral quantities are denoted with a subscript λ (e.g., Me,λ is the spectral exitance in W/cm²·μm). In-band nonspectral quantities are obtained by integrating the spectral terms over any spectral interval; for example,

L = ∫(λ1→λ2) Lλ dλ,  M = ∫(λ1→λ2) Mλ dλ,  φ = ∫(λ1→λ2) φλ dλ.

The term blackbody (BB) is used to describe a perfect radiator (i.e., an idealized thermal source). It absorbs all incident radiant energy and, as a consequence, it is the perfect emitter. BBs have the maximum spectral exitance possible for a body at a specified temperature, either over a particular spectral region or integrated over all wavelengths. They are a convenient baseline for radiometric calculations, since any thermal source at a specified temperature is constrained to emit less radiation than a blackbody source at the same temperature.

BB radiation is also called cavity radiation. Virtually any heated cavity with a small aperture produces high-quality BB radiation. These blackbody simulators are used primarily as laboratory calibration standards. The most popular blackbody cavities are cylinders and cones, the latter being the most common. The aperture of the cavity defines the area of the source. Some commercial blackbodies have an aperture wheel that allows the choice of this area.


Planck’s Radiation Law

The radiation characteristics of an ideal blackbody surface are completely specified if its temperature is known. Blackbody radiation is specified by Planck's equation, which defines the spectral exitance as a function of absolute temperature and wavelength:

Me,λ = (2πhc²/λ⁵)·1/[exp(hc/λkT) − 1] = (c1/λ⁵)·1/[exp(c2/λT) − 1]  [W/cm²·μm],

where h is Planck’s constant; 6.62 × 10−34 Joule·sec; k isBoltzmann’s constant; 1.3806 × 10−23 Joule/K; T is the ab-solute temperature in degrees Kelvin [K]; λ is the wave-length in centimeters [cm]; c is the speed of light in vac-uum; 2.998 × 1010 cm/sec; c1 is the first radiation constant;2πhc2 = 3.7415 × 104 W/cm2 · μ4; c2 is the second radiationconstant; hc/k = 1.4382 cm · K.Planck’s equation generates spectral exitance curves thatare quite useful for engineering calculations.

Planck’s curves illustrating the following characteristics:• The shape of the blackbody curves does not change for

any given temperature.• The temperature is inversely proportional to wave-

length; the peak exitance shifts toward shorter wave-lengths as the temperature increases.


• The individual curves never cross one another; the exitance increases rapidly with increasing temperature at all wavelengths.

The Planck radiation formula models system-design and analysis problems. For example, the radiant exitance of a blackbody at temperatures from 400 to 900 K covers the temperatures of the hot metal tailpipes of jet aircraft.
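A minimal numerical sketch of Planck's equation using the radiation constants quoted above (c2 taken as 1.4388 × 10⁴ μm·K; the function name is illustrative):

```python
import math

# Planck spectral exitance: M = (c1/lambda^5) / (exp(c2/(lambda*T)) - 1)
C1 = 3.7415e4   # first radiation constant  [W*um^4/cm^2]
C2 = 1.4388e4   # second radiation constant [um*K]

def spectral_exitance(wavelength_um, T_kelvin):
    # math.expm1(x) computes exp(x) - 1 accurately
    return (C1 / wavelength_um ** 5) / math.expm1(C2 / (wavelength_um * T_kelvin))

print(spectral_exitance(10.0, 300.0))   # ~3.1e-3 W/cm^2/um, 300 K blackbody
print(spectral_exitance(4.0, 700.0))    # MWIR exitance of a hot tailpipe-like source
```

Evaluating this over a wavelength grid reproduces the family of Planck curves described by the bullet points above.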


Stefan-Boltzmann and Wien’s Displacement Laws

The Stefan-Boltzmann law relates the total blackbody exitance at all wavelengths to the source temperature, and it is obtained by integrating out the wavelength dependence of Planck's radiation law:

Me(T) = ∫(0→∞) Me,λ(λ,T) dλ = ∫(0→∞) (2πhc²/λ⁵)/[exp(hc/λkT) − 1] dλ = (2π⁵k⁴/15c²h³)T⁴

Me(T) = σeT⁴,

where σe is the Stefan-Boltzmann constant, with a value of 5.7 × 10⁻¹² W/cm²·K⁴. The Stefan-Boltzmann law holds only for the exitance integrated over all wavelengths, from zero to infinity.

The total exitance at all wavelengths multiplied by the source area gives the total power radiated, which increases as the fourth power of the absolute source temperature in kelvin. For example, at room temperature (300 K), a perfect blackbody with an area of 1 cm² emits a total power of 4.6 × 10⁻² W. Doubling the temperature to 600 K increases the total power 16-fold, to 0.74 W.
-----------------------------------------------------------------------
The derivative of Planck's equation with respect to wavelength yields Wien's displacement law, which gives the wavelength at which the peak of the spectral-exitance function occurs as a function of temperature:

∂Me,λ(λ,T)/∂λ = 0 ⇒ λmaxT = 2898 [μm·K].

Thus, the wavelength at which the maximum spectral radiant exitance occurs is inversely proportional to the absolute temperature. The plot of λmax as a function of temperature is a hyperbola (see the Planck curves in the Planck's Radiation Law section).

For example, a blackbody source at 300 K has its maximum exitance at 9.7 μm; if the temperature of this source is changed to 1000 K, the peak exitance occurs at 2.9 μm. The sun is a blackbody source at approximately 6000 K; applying Wien's law, its maximum wavelength occurs at 0.5 μm, which corresponds to the peak response of the human eye.
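The worked examples above can be verified with the constants as quoted in the text (function names are illustrative):

```python
SIGMA_E = 5.7e-12   # Stefan-Boltzmann constant [W/cm^2/K^4], as quoted above

def total_exitance(T_kelvin):
    return SIGMA_E * T_kelvin ** 4    # M_e = sigma * T^4 [W/cm^2]

def peak_wavelength_um(T_kelvin):
    return 2898.0 / T_kelvin          # Wien's displacement law [um]

print(total_exitance(300.0))      # ~4.6e-2 W/cm^2 from 1 cm^2 at 300 K
print(total_exitance(600.0))      # 16x larger, ~0.74 W
print(peak_wavelength_um(300.0))  # ~9.7 um
print(peak_wavelength_um(6000.0)) # ~0.5 um (the sun)
```

The T⁴ growth of total exitance alongside the 1/T shift of the peak is exactly what the Planck curve family displays.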


Rayleigh-Jeans and Wien’s Radiation Laws

Two well-known approximations to Planck's radiation law are the Rayleigh-Jeans and Wien radiation laws. The former holds at long wavelengths:

hc/λkT ≪ 1 ⇒ Me,λ ≅ 2πckT/λ⁴,

while the latter is valid only at short wavelengths:

hc/λkT ≫ 1 ⇒ Me,λ ≅ (2πhc²/λ⁵) exp(−hc/λkT).

As the temperature increases, the peak wavelength decreases (Wien's displacement law), and the area under the Planck curve increases much faster (Stefan-Boltzmann law).

Thermal Equations in Photon-Derived Units

In photon-derived units, Planck's radiation equation, the Stefan-Boltzmann law, Wien's displacement law, the Rayleigh-Jeans radiation law, and Wien's radiation law are given by:

Planck's radiation equation:

Mp,λ = (2πc/λ⁴)·1/[exp(hc/λkT) − 1]  [Photon/sec·cm²·μm]

Stefan-Boltzmann law:

Mp(T) = σpT³,

where σp has a value of 1.52 × 10¹¹ photons/sec·cm²·K³.

Wien's displacement law:

λmaxT = 3662 [μm·K]

Rayleigh-Jeans radiation law:

hc/λkT ≪ 1 ⇒ Mp,λ ≅ 2πkT/λ³h

Wien's radiation law:

hc/λkT ≫ 1 ⇒ Mp,λ ≅ (2πc/λ⁴) exp(−hc/λkT)
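A quick numeric check of the photon-rate laws; note that σp must carry a positive exponent (≈1.52 × 10¹¹) to reproduce the familiar ~4 × 10¹⁸ photons/sec·cm² room-temperature background (function names are illustrative):

```python
SIGMA_P = 1.52e11   # photon-rate Stefan-Boltzmann constant [photons/sec/cm^2/K^3]

def photon_exitance(T_kelvin):
    # T^3 dependence replaces the T^4 of the energy-based law
    return SIGMA_P * T_kelvin ** 3     # [photons/sec/cm^2]

def photon_peak_wavelength_um(T_kelvin):
    return 3662.0 / T_kelvin           # photon-units Wien displacement [um]

print(photon_exitance(300.0))            # ~4.1e18 photons/sec/cm^2 at 300 K
print(photon_peak_wavelength_um(300.0))  # ~12.2 um, longer than the 9.7 um energy peak
```

The photon-rate peak sits at a longer wavelength than the energy peak because each long-wavelength photon carries less energy.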


Exitance Contrast

In terrestrial infrared systems, the target and background are often at similar temperatures, in which case the target has very low contrast. The proper choice of the spectral passband Δλ becomes essential to maximize the visibility of the target. This passband should straddle the wavelength at which the exitance changes the most as a function of temperature.

This consideration of exitance contrast involves the following second-order partial derivative, which finds the wavelength of peak contrast, i.e., the wavelength at which a system operating within a finite passband is most sensitive to small changes in temperature (the steepest slope of exitance versus temperature):

∂/∂λ [∂Mλ(λ,T)/∂T] = 0.

Carrying out these derivatives yields a constraint on wavelength and temperature similar to Wien's displacement law:

λpeak-contrastT = 2410 [μm·K].

For a given blackbody temperature, the maximum exitance contrast occurs at a shorter wavelength than the wavelength of the peak exitance. For example, at 300 K the peak exitance occurs at 9.7 μm, while the peak exitance contrast occurs at 8 μm.
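The peak-contrast constant can be verified by numerically locating the maximum of ∂Mλ/∂T at 300 K (a brute-force sketch; the grid spacing and finite-difference step are arbitrary choices):

```python
import math

C1, C2 = 3.7415e4, 1.4388e4   # radiation constants [W*um^4/cm^2], [um*K]

def M(lam_um, T):
    # Planck spectral exitance [W/cm^2/um]
    return (C1 / lam_um ** 5) / math.expm1(C2 / (lam_um * T))

def dM_dT(lam_um, T, dT=0.01):
    # central finite difference of exitance with respect to temperature
    return (M(lam_um, T + dT) - M(lam_um, T - dT)) / (2.0 * dT)

T = 300.0
grid = [4.0 + 0.01 * i for i in range(1001)]       # 4-14 um search range
lam_peak = max(grid, key=lambda lam: dM_dT(lam, T))
print(round(lam_peak, 2))   # near 2410/300 ~ 8.0 um
```

The numerically located maximum lands close to the 2410/T prediction, well short of the 9.7 μm exitance peak.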


Emissivity

Emissivity is the ratio of the spectral exitance of a real source at a given temperature to that of a blackbody at the same temperature:

ε(λ,T) = Mλ,source(λ,T)/Mλ,BB(λ,T).

The spectral exitance of any real source at a given temperature is bounded by the spectral exitance of a perfect radiator at the same kinetic temperature; hence, ε is constrained between zero and unity.

The emissivity characterizes how closely the radiation spectrum of a real heated body corresponds to that of a blackbody. It is a spectrally varying quantity, and can be quoted either as a spectral quantity (measured over a finite passband) or as a total quantity (measured over all wavelengths). This emission-efficiency parameter also depends on the surface temperature. However, emissivity data for most materials are typically quoted as a constant, and are seldom given as functions of λ and T unless the material is especially well characterized.

Three types of sources can be differentiated by how their spectral emissivity varies: (1) a blackbody or perfect radiator, whose emissivity is equal to unity for all wavelengths; (2) a graybody, whose emissivity is a constant fraction (ε < 1) of what the corresponding blackbody would radiate at the same temperature; it is independent of wavelength and has the same spectral shape as a blackbody; and (3) a selective radiator, for which ε is an explicit function of λ.


Kirchhoff’s Law

If a solid body of a certain mass is located within a colder isothermal cavity, then according to the second law of thermodynamics there will be a net flow of heat from the object to the walls of the hollow space. Once the body reaches thermal equilibrium with its surroundings, the first law of thermodynamics, or conservation of energy, requires that

φincident = φabsorbed + φtransmitted + φreflected,

where φincident is the incident flux on the solid body. Dividing both sides of the equation by φincident yields

1 = α + τ + ρ,

where α is the absorbance, τ is the transmittance, and ρ is the reflectance. For an opaque body (i.e., τ = 0), the incident radiation is either absorbed or reflected, yielding

α = 1 − ρ,

which indicates that surfaces with low reflectance are high emitters.

If the body absorbs only a portion of the radiation that is incident on it, then it emits less radiation in order to remain in thermal equilibrium; that is,

E = εM,

which leads to Kirchhoff's law, which states that the absorbance of a surface is identical to the emissivity of that surface. Kirchhoff's law also holds for spectral quantities; it is a function of temperature and can vary with the direction of measurement. This law is sometimes verbalized as "good absorbers are good emitters."

Integrated absorbance = α(λ,T) ≡ ε(λ,T) = Integrated emittance

For polished metals, the emissivity is low; however, it increases with temperature, and may increase substantially with the formation of an oxide layer on the object surface. A thin film of oil, as well as surface roughness, can increase the emissivity by an order of magnitude compared to a polished metal surface. The emissivity of nonmetallic surfaces is typically greater than 0.8 at room temperature, and it decreases as the temperature increases.


Emissivity of Various Common Materials

Metals and Other Oxides                         Emissivity
Aluminum: polished sheet                        0.05
  Sheet as received                             0.09
  Anodized sheet, chromic-acid process          0.55
  Vacuum deposited                              0.04
Brass: highly polished                          0.03
  Rubbed with 80-grit emery                     0.20
  Oxidized                                      0.61
Copper: highly polished                         0.02
  Heavily oxidized                              0.78
Gold: highly polished                           0.02
Iron: cast, polished                            0.21
  Cast, oxidized                                0.64
  Sheet, heavily rusted                         0.69
Nickel: electroplated, polished                 0.05
  Electroplated, not polished                   0.11
  Oxidized                                      0.37
Silver: polished                                0.03
Stainless steel: type 18-8, buffed              0.16
  Type 18-8, oxidized                           0.85
Steel: polished                                 0.07
  Oxidized                                      0.79
Tin: commercial tin-plated sheet iron           0.07

Nonmetallic Materials                           Emissivity
Brick: red common                               0.93
Carbon: candle soot                             0.95
  Graphite, filed surface                       0.98
Concrete                                        0.92
Glass: polished plate                           0.94
Lacquer: white                                  0.92
  Matte black                                   0.97
Oil, lubricant (thin film on nickel base):
  Nickel base alone                             0.05
  Oil film 1, 2, 5 × 10⁻³ in.                   0.27, 0.46, 0.72
  Thick coating                                 0.82
Paint, oil: average of 16 colors                0.94
Paper: white bond                               0.93
Plaster: rough coat                             0.91
Sand                                            0.90
Human skin                                      0.98
Soil: dry                                       0.92
  Saturated with water                          0.95
Water: distilled                                0.96
  Ice, smooth                                   0.96
  Frost crystals                                0.98
  Snow                                          0.90
Wood: planed oak                                0.90

Data from Wolfe & Zissis, The Infrared Handbook (1990).


Radiometric Measure of Temperature

There are many applications in the infrared where the actual kinetic temperature of a distant object must be known. However, infrared systems can only measure the apparent spectral exitance emitted by targets and/or backgrounds, which is a function of both temperature and emissivity. The temperature of the infrared source can be measured if the emissivity of the viewed source within the appropriate spectral region is known. Discrepancies in emissivity values produce built-in errors in the calculation of this kinetic temperature.

There are three main types of temperature measurementsas discussed below.

Radiation temperature (T_rad): a calculation based on the Stefan-Boltzmann law because the exitance is estimated over the whole spectrum:

M_meas = σT_rad⁴,

where M_meas is the measured exitance. If the source is a graybody with a known emissivity, then T_true can be calculated from T_rad:

M_meas = σT_rad⁴ = εσT_true⁴  ⇒  T_true = T_rad / ⁴√ε.

Due to its strong dependence on emissivity, T_rad cannot be corrected to find T_true if ε is unknown. Similarly, T_rad is affected by the attenuation in the optical system, especially in harsh environments where the optical elements might be dirty.
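The graybody correction T_true = T_rad/⁴√ε can be checked numerically. A minimal sketch (the function names are illustrative, not from the text):

```python
# Recover the true kinetic temperature from the radiation temperature of a
# graybody: M_meas = sigma*T_rad**4 = eps*sigma*T_true**4.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant [W/(m^2 K^4)]

def radiation_temperature(M_meas):
    """Radiation temperature inferred from total measured exitance [W/m^2]."""
    return (M_meas / SIGMA) ** 0.25

def true_temperature(T_rad, emissivity):
    """True temperature of a graybody, given its radiation temperature."""
    return T_rad / emissivity ** 0.25

# A graybody at 500 K with emissivity 0.6 reads low in radiation temperature:
M = 0.6 * SIGMA * 500.0 ** 4
T_rad = radiation_temperature(M)       # about 440 K (reads low)
T_true = true_temperature(T_rad, 0.6)  # recovers 500 K
```

Note that T_rad always underestimates T_true for ε < 1, which is why an unknown emissivity leaves the error uncorrectable.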

Brightness temperature (T_b): a measurement based on Planck's radiation law because the exitance is estimated at a single wavelength λ0, or in a narrow spectral band Δλ around a fixed wavelength λ0. For a blackbody source T_b = T_true; therefore, the Planck equation can be solved for T_b:

T_b = c2 / ( λ0 ln{ 1 + c1/[λ0⁵ M_λ(λ0, T_b)] } ),

where c1 = 3.7415 × 10⁴ W·μm⁴/cm² and c2 = 1.4388 × 10⁴ μm·K.

Page 55: Field Guide to Infrared Systems

40 Infrared System Design

Radiometric Measure of Temperature (cont’d)

If the source is a graybody with a known emissivity, then T_true can be calculated from T_b as follows:

(c1/λ0⁵) · 1/[exp(c2/λ0T_b) − 1] = (c1/λ0⁵) · ε/[exp(c2/λ0T_true) − 1],

yielding

T_true = c2 / ( λ0 ln{ 1 + ε[exp(c2/λ0T_b) − 1] } ).
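The brightness-temperature correction can be sketched as follows (a hypothetical helper using `math.expm1` for the exp(x) − 1 terms; the graybody values in the example are arbitrary):

```python
import math

C2 = 1.4388e4  # second radiation constant [um*K]

def true_from_brightness(T_b, emissivity, lam_um):
    """Graybody true temperature from the brightness temperature at lam_um."""
    return C2 / (lam_um * math.log(1.0 + emissivity * math.expm1(C2 / (lam_um * T_b))))

def brightness_from_true(T_true, emissivity, lam_um):
    """Brightness temperature that a graybody at T_true produces at lam_um."""
    return C2 / (lam_um * math.log(1.0 + math.expm1(C2 / (lam_um * T_true)) / emissivity))

# A graybody at 500 K with emissivity 0.8, observed at 4 um, reads low;
# the correction recovers the true temperature:
T_b = brightness_from_true(500.0, 0.8, 4.0)      # below 500 K
T_rec = true_from_brightness(T_b, 0.8, 4.0)      # back to 500 K
```

For ε = 1 the two functions reduce to the identity, as expected for a blackbody.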

The exitance levels measured for T_b are lower than the levels for T_rad because of the narrowband filtering involved. The best sensitivity for this measurement is obtained by choosing λ0 near the wavelength of peak exitance contrast, where M_λ changes most with temperature. The brightness temperature is a convenient measurement, but it is not robust to incomplete knowledge of the emissivity or to attenuation in the optical system.

Color temperature (T_c): the temperature of the blackbody that best matches the spectral composition of the target source. This spectral composition is defined as the ratio of the measured spectral exitance at two different wavelengths, given by

M_λ(λ1)/M_λ(λ2) = ( λ2⁵ [exp(c2/λ2T_c) − 1] ) / ( λ1⁵ [exp(c2/λ1T_c) − 1] ).

Under these circumstances, the emissivity cancels out because of the ratio, and T_c = T_true for both blackbodies and graybodies.

This method is strongly affected when the target source is a selective radiator. In that case, the measurement of the spectral exitance must be performed at many wavelengths.
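Inverting the two-band ratio for T_c has no closed form, but for λ1 < λ2 the ratio rises monotonically with temperature, so a simple bisection suffices. A sketch (illustrative function names; the 4-μm/5-μm band pair is an arbitrary example):

```python
import math

C2 = 1.4388e4  # second radiation constant [um*K]

def exitance_ratio(lam1, lam2, T):
    """Planck spectral-exitance ratio M(lam1)/M(lam2); wavelengths in um."""
    return (lam2 / lam1) ** 5 * math.expm1(C2 / (lam2 * T)) / math.expm1(C2 / (lam1 * T))

def color_temperature(ratio, lam1, lam2, lo=100.0, hi=3000.0):
    """Bisect for the blackbody temperature whose two-band exitance ratio
    matches the measured ratio. Assumes lam1 < lam2, so the ratio
    increases monotonically with temperature."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if exitance_ratio(lam1, lam2, mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip: the ratio produced by a 500-K blackbody maps back to 500 K.
Tc = color_temperature(exitance_ratio(4.0, 5.0, 500.0), 4.0, 5.0)
```

Because only a ratio enters, a graybody of any ε returns the same T_c, which is the point of the method.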


Collimators

A collimator is an optical assembly that places a target at infinity and produces a controllable irradiance that is independent of distance. It is widely used for testing the sensitivity and resolution of infrared systems.

Assuming a Lambertian source radiating at all wavelengths, the total flux emitted can be written using the Stefan-Boltzmann law:

φ = L_s A_s Ω_coll = (σT⁴/π) · (A_s A_coll / f1²).

The source exitance and/or the irradiance falling on the detector surface can be obtained by dividing the radiant flux by the area of the collimator, yielding

M = (σT⁴/π) · (A_s/f1²) = (σT⁴/π) · (A_d/f2²) = E.

An extended source placed at the focal plane of a collimator can only be seen over a well-defined region. The maximum distance at which the infrared imaging system can be placed from the collimator is

d_max = (f_coll / t) · D_coll,

where t is the target size. If the imaging system is placed at a distance greater than d_max, the target's outer edges are clipped and only the central portion of the target is seen. The distance between the collimating and imaging system optical components is

d_lenses = (f_coll / t) · (D_coll − D_IS).
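A quick numerical sketch of these two relations (function names and the example dimensions are illustrative):

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant [W/(m^2 K^4)]

def collimator_irradiance(T, A_source, f_coll):
    """Irradiance delivered by a Lambertian blackbody target at the
    collimator focal plane: E = (sigma*T^4/pi) * A_s / f_coll^2."""
    return SIGMA * T ** 4 / math.pi * A_source / f_coll ** 2

def max_test_distance(f_coll, D_coll, target_size):
    """Farthest position of the unit under test that still sees the full
    extended target: d_max = (f_coll / t) * D_coll."""
    return f_coll / target_size * D_coll

# 500-K target of 1 cm^2 behind a 1-m focal length, 20-cm aperture collimator:
E = collimator_irradiance(500.0, 1e-4, 1.0)   # ~0.113 W/m^2, distance-independent
d = max_test_distance(1.0, 0.2, 0.01)         # 20 m for a 1-cm target
```

The irradiance value is independent of where the system under test sits, provided it stays within d_max.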


Infrared Detectors

Infrared detectors are transducers that sample the incident radiation and produce an electrical signal proportional to the total flux incident on the detector surface. There are two main classes of infrared detectors: thermal and photon detectors. Both types respond to absorbed photons, but use different response mechanisms, which lead to variations in speed, spectral responsivity, and sensitivity. Thermal detectors depend on changes in the electrical or mechanical properties of the sensing materials (e.g., resistance, capacitance, voltage, mechanical displacement) that result from temperature changes caused by the heating effect of the incident radiation. The change in these electrical properties with input flux level is measured by an external electrical circuit. The thermal effects do not depend on the photonic nature of the incident infrared radiation, so thermal detectors have no inherent long-wavelength cutoff; their sensitivity limit is set by the thermal flux and/or the spectral properties of the protective window in front of them. The response of a thermal detector is slow because of the time required for the device to heat up after the energy has been absorbed. Examples of thermal detectors are bolometers, pyroelectric detectors, thermopiles, Golay cells, and superconductors.

The two basic types of semiconductor photon detectors are photoconductors and photovoltaics (photodiodes). The photonic effects in these devices result from the direct conversion of incident photons into conducting electrons within the material. An absorbed photon excites an electron from a nonconducting state into a conducting state almost instantaneously, causing a change in the electrical properties of the semiconductor material that can be measured by an external circuit. Photon detectors are very fast; however, their response speed is generally limited by the RC product of the readout circuit.

Detector performance is described in terms of responsivity, noise-equivalent power, or detectivity. These figures of merit enable quantitative prediction and evaluation of system performance, as well as comparison of the relative performance of different detector types.


Performance Parameters for Optical Detectors 43

Primary Sources of Detector Noise

Noise is a random fluctuation in the electrical output of a detector, and must be minimized to increase the sensitivity of an infrared system. Sources of optical-detector noise can be classified as either external or internal. The focus here is on the internally generated detector noises, which include shot, generation-recombination, one-over-frequency (1/f), and temperature-fluctuation noise; these are functions of the detector area, bandwidth, and temperature.

It is possible to determine the limits of detector performance set by the statistical nature of the radiation to which it responds. Such limits set the lower level of sensitivity, and can be ascertained from the fluctuations in the signal or background radiation falling on the detector.

Random noise is expressed in terms of an electrical variable such as a voltage, current, or power. If the voltage is designated as a random-noise waveform v_n(t) and a certain probability-density function is assigned to it, its statistics are found from the following descriptors:

Mean: v̄_n = (1/T) ∫₀ᵀ v_n(t) dt  [volts],

Variance or mean-square: σ²_vn = ⟨[v_n(t) − v̄_n]²⟩ = (1/T) ∫₀ᵀ [v_n(t) − v̄_n]² dt  [volts²],

Standard deviation: v_rms = σ_vn = √( (1/T) ∫₀ᵀ [v_n(t) − v̄_n]² dt )  [volts],

where T is the time interval. The standard deviation represents the rms noise of the random variable.

Linear addition of independent intrinsic noise sources is carried out in power (variance), not in noise voltage (standard deviation); the rms values of independent random quantities add in quadrature:

σ_vrms,total = √( σ²_vrms,1 + σ²_vrms,2 + ⋯ + σ²_vrms,n ).

Assuming three sources of noise are present (Johnson, shot, and 1/f noise):

v²_rms,total = v²_j + v²_s + v²_1/f.
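Quadrature addition of independent rms noise sources can be sketched in a few lines (function name and example values are illustrative):

```python
import math

def total_rms_noise(*rms_components):
    """Combine independent noise sources: variances add, so the total rms
    value is the square root of the sum of the squared rms components."""
    return math.sqrt(sum(v * v for v in rms_components))

# Johnson, shot, and 1/f contributions of 3, 4, and 12 uV rms combine to
# 13 uV rms (a 3-4-12-13 quadruple), not the 19 uV of a linear sum:
total = total_rms_noise(3e-6, 4e-6, 12e-6)
```

The total is always dominated by the largest component, which is why reducing a secondary noise source often buys little.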


Noise Power Spectral Density

Noise can also be described in the frequency domain. The power spectral density (PSD), or mean-square fluctuation per unit frequency range, provides a measure of the frequency distribution of the mean-square value of the data (i.e., the distribution of power).

For random processes, frequency can be introduced through the autocorrelation function. The time-average autocorrelation function of a voltage waveform may be defined as

c_n(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} v_n(t) v_n(t + τ) dt,

where the autocorrelation is a measure of how fast the waveform changes in time. The PSD of a wide-sense stationary random process is defined as the Fourier transform of the autocorrelation function (Wiener-Khinchine theorem):

PSD = N(f) = F{c_n(τ)} = ∫_{−∞}^{∞} c_n(τ) e^{−j2πfτ} dτ.

The inverse relation is

c_n(τ) = F⁻¹{N(f)} = ∫_{−∞}^{∞} N(f) e^{j2πfτ} df.

Applying the central-ordinate theorem yields

c_n(0) = ∫_{−∞}^{∞} N(f) df = ⟨v²_n(t)⟩.

The average power of the random voltage waveform is obtained by integrating the PSD over its entire range of definition.

Uncorrelated noise such as white noise has an autocorrelation function that is a delta function. The PSD of such a random process is constant over the entire frequency range; in practice, the PSD is constant over a wide but finite range (i.e., band-limited).


White Noise

Detector noise curves have different frequency contents. A typical PSD plot for a sensor system is shown.

• 1/f noise can be partly excluded by ac coupling (i.e., cutting off the system's dc response with a high-pass filter whose cutoff lies between 1 Hz and 1 kHz).

• Shot noise and generation-recombination (G/R) noise have a roll-off frequency in the midband range (≈20 kHz–1 MHz), proportional to the inverse of the carrier lifetime.

• Johnson noise and amplifier noise are usually flat to high frequencies, past 1/(2τ_carrier).

• In a photon sensor, the charge carriers transmit both signal and noise; therefore, the upper cutoff frequency of the electronics bandwidth should not be higher than 1/(2τ_carrier). A wider bandwidth includes more noise, but not more signal.

Most system-noise calculations are done in the white-noise region, where the PSD is flat over a sufficiently broad band relative to the signal band of interest. Shot and G/R noise are white up to f ≈ 1/(2τ_carrier); beyond that, they roll off. For white noise, the noise power is directly proportional to the detector bandwidth; consequently, the rms noise voltage is directly proportional to the square root of the detector bandwidth.

Detectors have a temporal impulse response of width τ. This response time, or integration time, is related to the frequency bandwidth by Δf = 1/(2τ). If the input noise is white, the noise-equivalent bandwidth of the filter determines how much noise power passes through the system. As the integration time shortens, the noise-equivalent bandwidth widens and the system becomes noisier; a measurement circuit with a longer response time yields a system with less noise.


Noise-Equivalent Bandwidth

The noise-equivalent bandwidth (NEΔf, or simply Δf) of an ideal electronic amplifier has a constant power-gain distribution between its lower and upper frequencies, and zero elsewhere; it can be represented as a rectangular function in the frequency domain. Real electronic frequency responses do not have such ideal rectangular characteristics, so it is necessary to find an equivalent bandwidth that would pass the same amount of noise power.

The noise-equivalent bandwidth is defined as

NEΔf ≡ [1/G²(f0)] ∫₀^∞ |G(f)|² df,

where G(f) is the gain as a function of frequency and G(f0) is its maximum value. The most common definition of bandwidth is the frequency interval within which the power gain exceeds one-half of its maximum value (i.e., the 3-dB bandwidth, usually denoted by the symbol B).

The above definition of noise-equivalent bandwidth assumes white noise; that is, the power spectrum of the noise is flat. However, if the noise power spectrum exhibits strong frequency dependence, the noise-equivalent bandwidth should be calculated from

NEΔf = [1/(G²(f0) v0²)] ∫₀^∞ v²_n(f) G²(f) df,

where v²_n(f) is the mean-square noise voltage per unit bandwidth, and v0² is the mean-square noise voltage per unit bandwidth measured at a frequency high enough that the PSD of the noise is flat.


For noise-equivalent bandwidth calculations, two impulse-response forms are commonly considered: square and exponential. The square impulse response is most commonly used to relate response time and noise-equivalent bandwidth. The exponential impulse response arises from the charge-carrier lifetime, or from the RC time constants of electrical circuits.

A square impulse response with pulse width τ can be expressed as a rectangular function:

v(t) = v0 rect[(t − t0)/τ].

Applying the Fourier transform, the normalized voltage transfer function is obtained:

V(f)/V0 = e^{−j2πft0} sinc(πfτ).

Substituting into the NEΔf equation and solving the integral:

NEΔf = ∫₀^∞ |e^{−j2πft0} sinc(πfτ)|² df = 1/(2τ).

The noise-equivalent bandwidth of an exponential impulse response is obtained as follows. The exponential impulse response can be specified as

v(t) = v0 exp(−t/τ) step(t).

Fourier transforming:

V(f)/V0 = 1/(1 + j2πfτ);

taking the absolute value squared:

|V(f)/V0|² = 1/[1 + (2πfτ)²].

Integrating yields the noise-equivalent bandwidth:

NEΔf = ∫₀^∞ df/[1 + (2πfτ)²] = 1/(4τ).
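Both closed-form results, 1/(2τ) for the square impulse response and 1/(4τ) for the exponential one, can be verified by brute-force numerical integration of the normalized power gains. A sketch with illustrative function names:

```python
import math

def ne_bandwidth(power_gain, f_max=1000.0, n=100_000):
    """Midpoint-rule integral of a normalized power gain over 0..f_max,
    approximating NE_df = integral_0^inf |G(f)/G(f0)|^2 df."""
    df = f_max / n
    return sum(power_gain((k + 0.5) * df) for k in range(n)) * df

tau = 1.0  # response/integration time [s]

def sinc2(f):
    """Power gain of a square impulse response of width tau."""
    x = math.pi * f * tau
    return (math.sin(x) / x) ** 2

def lorentz(f):
    """Power gain of an exponential impulse response with time constant tau."""
    return 1.0 / (1.0 + (2.0 * math.pi * f * tau) ** 2)

ne_square = ne_bandwidth(sinc2)    # -> approximately 1/(2*tau) = 0.5 Hz
ne_exp = ne_bandwidth(lorentz)     # -> approximately 1/(4*tau) = 0.25 Hz
```

The truncation at f_max leaves only a small tail error, since both gains fall off as 1/f².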


Shot Noise

The shot-noise mechanism is associated with the nonequilibrium conditions in a potential-energy barrier of a photovoltaic detector through which a dc current flows. It is a result of the discrete nature of the current carriers, and therefore of the current-carrying process. The dc current is viewed as the sum of very many short and small current pulses, each contributed by the passage of a single electron or hole through the junction depletion layer. This type of noise is practically white, considering the spectral density of a single narrow pulse.

The generation of carriers is random according to the photon arrival times. However, once a carrier is generated, its recombination is no longer random; it is actually determined by transit-time considerations that obey Poisson statistics, in which the variance equals the mean.

To determine the expression for the mean-square current fluctuation at the output of the measuring circuit, the current is measured during a time interval τ:

i = n_e q / τ,

where n_e is the number of photogenerated electrons within τ. The average current can be related to the average number of electrons created by

ī = n̄_e q / τ  ⇒  n̄_e = ī τ / q.

The mean-square fluctuation averaged over many independent measuring times τ is

i²_n,shot = ⟨(i − ī)²⟩ = (q²/τ²) ⟨(n_e − n̄_e)²⟩ = (q²/τ²) n̄_e,

which yields the expression for the shot noise:

i²_n,shot = ī q/τ = 2qī Δf  ⇒  i_n,shot = √(2qīΔf).
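The final expression is a one-liner in practice (the example current and bandwidth are arbitrary):

```python
import math

Q = 1.602e-19  # electron charge [C]

def shot_noise_current(i_dc, delta_f):
    """rms shot-noise current: i_n = sqrt(2*q*i*delta_f)."""
    return math.sqrt(2.0 * Q * i_dc * delta_f)

# 1 uA of dc current measured in a 1-kHz bandwidth:
i_n = shot_noise_current(1e-6, 1e3)  # ~18 pA rms
```

Note the square-root dependence on both current and bandwidth, consistent with Δf = 1/(2τ).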


Signal-to-Noise Ratio: Detector and BLIP Limits

Is the signal-to-noise ratio (SNR) predominantly generated by the signal or by the background?

Considering the current generated primarily by signal photons, without any extraneous radiation present, the signal current from the detector is

i_sig = φ_p,sig η q,

where η is the quantum efficiency [electrons per photon]. Similarly, the current generated by the background flux is

i_bkg = φ_p,bkg η q.

The SNR is the ratio of the signal current to the shot noise:

SNR = i_sig / i_n,shot = φ_p,sig η q / √(2qīΔf) = φ_p,sig η q / √( 2q(φ_p,sig ηq + φ_p,bkg ηq) Δf ).

Assuming that the dominant noise contribution is the shot noise generated by the signal-power envelope (i.e., φ_sig ≫ φ_bkg):

SNR ≅ φ_p,sig η / √(2φ_p,sig η Δf) = √(φ_p,sig η τ),

which states that the SNR increases as the square root of the signal flux, improving the sensitivity of the system.

With a weak signal source detected against a large background, the most common situation in infrared applications, the dominant noise contribution is associated with the shot noise of the background (i.e., φ_bkg ≫ φ_sig). The photodetector is then said to be background limited, yielding

SNR_BLIP ≅ φ_p,sig η / √(2φ_p,bkg η Δf) = φ_p,sig √( ητ / φ_p,bkg ).

SNR_BLIP is inversely proportional to the square root of the background flux, so reducing the background photon flux increases the SNR of a background-limited infrared photodetector (BLIP).
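The general shot-noise SNR and its BLIP limit can be compared numerically; for a background flux many orders above the signal flux, the two agree closely. A sketch (function names and flux values are illustrative):

```python
import math

def snr_shot_limited(phi_sig, phi_bkg, eta, delta_f):
    """SNR when shot noise from both the signal and background photon
    fluxes [photons/s] dominates."""
    return phi_sig * eta / math.sqrt(2.0 * eta * (phi_sig + phi_bkg) * delta_f)

def snr_blip(phi_sig, phi_bkg, eta, delta_f):
    """Background-limited (BLIP) approximation, phi_bkg >> phi_sig."""
    return phi_sig * math.sqrt(eta / (2.0 * phi_bkg * delta_f))

# Weak signal (1e10 ph/s) against a strong background (1e16 ph/s):
snr_full = snr_shot_limited(1e10, 1e16, 0.6, 1e3)
snr_approx = snr_blip(1e10, 1e16, 0.6, 1e3)
```

Halving the background flux improves the BLIP SNR by √2, which motivates cold shields and cold filters.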


Generation-Recombination Noise

Generation-recombination (G/R) noise is caused by fluctuations in the rates of thermal generation and recombination of free carriers in semiconductor devices without potential barriers (e.g., photoconductors), giving rise to a fluctuation in the average carrier concentration. The electrical resistance of the semiconductor material therefore changes, which can be observed as a fluctuating voltage across the sample when a bias current flows through it.

The transverse photoconductivity geometry and circuit are shown.

Once a carrier is generated, it travels under the influence of an applied electric field. The carrier lasts until recombination occurs at a random time consistent with the mean carrier lifetime τ_carrier. The statistical fluctuation in the concentration of carriers produces white noise.

The current-noise expression for generation-recombination noise is given by

i_n,G/R = 2qG √( ηE_p A_d Δf + g_th Δf ),

where G is the photoconductive gain and g_th is the thermal generation rate of carriers. Since photoconductors are cooled cryogenically, the second term in the above equation can be neglected. Assuming G equals unity,

i_n,G/R = √( 2q²ηE_p A_d Δf + 2q²ηE_p A_d Δf ) = 2q √( ηE_p A_d Δf ),

i_n,G/R = √2 · i_n,shot.

Note that the rms G/R noise is √2 larger than the shot noise, since both the generation and recombination mechanisms are random processes.


Johnson Noise

If the background flux is reduced enough, the noise floor is determined by Johnson noise, also known as Nyquist or thermal noise. The fluctuation is caused by the thermal motion of charge carriers in resistive materials, including semiconductors, and occurs in the absence of electrical bias as a fluctuating rms voltage or current.

Johnson noise is modeled as an ideal noise-free resistor of the same resistance value, combined either in series with an rms noise-voltage source or in parallel with an rms noise-current source. The noise-voltage and noise-current spectra are

√(4kTR)  [volt/√Hz]  and  √(4kT/R)  [amp/√Hz].

Multiplying the two shows that the power spectral density of Johnson noise depends only on temperature, not on resistance:

PSD = 4kT  [watt/Hz].

Often the detector is cooled to cryogenic temperatures while the load resistance is at room temperature. In combination, the two parallel resistors are added as usual, but their rms noises add in quadrature, yielding

i_Johnson = √( i_d² + i_L² ) = √( 4kΔf [ T_d/R_d + T_L/R_L ] ).

The SNR in the Johnson-noise limit is given by

SNR_Johnson ≅ φ_p,sig η q / √( 4kTΔf/R ).

Johnson noise is independent of the photogeneration process, and thus the SNR is directly proportional to the quantum efficiency.
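The cooled-detector/warm-load combination can be evaluated directly (function name and component values are illustrative):

```python
import math

K_B = 1.381e-23  # Boltzmann constant [J/K]

def johnson_noise_current(delta_f, pairs):
    """rms Johnson-noise current of parallel resistors at different
    temperatures: i = sqrt(4*k*delta_f*sum(T_i/R_i))."""
    return math.sqrt(4.0 * K_B * delta_f * sum(T / R for T, R in pairs))

# 77-K detector resistance (1 Mohm) in parallel with a 300-K load (10 Mohm),
# measured in a 1-kHz bandwidth:
i_j = johnson_noise_current(1e3, [(77.0, 1e6), (300.0, 1e7)])  # a few pA rms
```

Even though the load is warmer, its larger resistance keeps its T/R contribution comparable to the cold detector's, illustrating why high load resistances are favored.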


1/f Noise and Temperature Noise

1/f noise is found in semiconductors and becomes worse at low frequencies. It is characterized by a spectrum in which the noise power depends approximately inversely on frequency. The general expression for the 1/f noise current is

i_n,1/f = √( K ī^α Δf / f^β ),

where K is a proportionality factor, ī is the average dc current, f is the frequency, Δf is the measuring bandwidth, α is a constant of ∼2, and β ranges from ∼0.5 to 1.5.

Although the cause of 1/f noise is not fully understood, it appears to be associated with the presence of potential barriers at the nonohmic contacts and at the surface of the semiconductor.

1/f noise is typically the dominant noise up to a few hundred hertz, and is often significant up to several kilohertz. It is always present in microbolometers and photoconductors, because a dc bias current always flows within the detector material. However, 1/f noise can be eliminated in photovoltaic detectors operating in open-circuit voltage mode, when no dc bias current is allowed to flow through the diode.

Temperature noise is caused by fluctuations in the temperature of the detector due to fluctuations in the rate at which heat is transferred between the sensitive element and its surroundings (i.e., radiative exchange and/or conductance to the heat sink). The spectrum of the mean-square fluctuation in temperature is given by

ΔT² = 4kKT² / [ K² + (2πf)² C² ],

where k is Boltzmann's constant, K is the thermal conductance, C is the heat capacity, and T is the temperature. Temperature noise is mostly observed in thermal detectors, and being temperature-noise limited is their ultimate performance level. At frequencies well below K/(2πC), the power spectrum of temperature noise is flat.


Detector Responsivity

Responsivity gives the response magnitude of the detector. It provides information on gain, linearity, dynamic range, and saturation level. Responsivity is a measure of the transfer function between the input signal photon power or flux and the detector's electrical output:

R = output signal / input flux,

where the output can be in volts or amperes, and the input in watts or photons/sec. R_i is the current responsivity and R_v is the voltage responsivity. A common technique in detection is to modulate the radiation to be detected and to measure the modulated component of the electrical output of the detector. This technique provides some discrimination against electrical noise, since the signal is contained only within the Fourier component of the electrical signal at the modulation frequency, whereas electrical noise is often broadband. Furthermore, it avoids the baseline drifts that affect ac-coupled electronic amplifiers. The output voltage varies from peak to valley.

An important characteristic of a detector is how fast it responds to a pulse of optical radiation. The voltage response to radiation modulated at frequency f is defined as

R_v(f) = v_sig(f) / φ_sig(f),

where φ_sig(f) is the rms value of the signal flux contained within the harmonic component at frequency f, and v_sig(f) is the rms output voltage within this same harmonic component.


In general, the responsivity R(f) of a detector decreases as the modulation frequency f increases. By changing the angular speed of the chopper, the responsivity can be obtained as a function of frequency. A typical responsivity-versus-frequency curve is plotted.

The cutoff frequency f_cutoff is defined as the modulation frequency at which |R_v(f_cutoff)|² falls to one-half its maximum value, and is related to the response time by

f_cutoff = 1/(2πτ).

The response time of a detector is characterized by its responsive time constant: the time it takes for the detector output to reach 63% (1 − 1/e) of its final value after a sudden change in the irradiance. For most sensitive devices, the response to a change in irradiance follows a simple exponential law. As an example, if a delta-function pulse of radiation δ(t) is incident on the detector, an output voltage signal (i.e., the impulse response) of the form

v(t) = v0 e^{−t/τ},  t ≥ 0,

is produced, where τ is the time constant of the detector. Transforming this time-dependent equation into the frequency domain using the Fourier transform yields

V(f) = v0 τ / (1 + j2πfτ),

which can be extended to the responsivity as

R_v(f) = R0 / (1 + j2πfτ),

where R0 = v0τ is the dc value of the responsivity. The modulus is

|R_v(f)| = R0 / √(1 + (2πfτ)²).
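The single-pole roll-off model is easy to evaluate numerically; at f_cutoff the modulus falls to 1/√2 of its dc value, i.e., the power response halves. A sketch (function names and the 1-ms time constant are illustrative):

```python
import math

def responsivity_magnitude(f, r0, tau):
    """|Rv(f)| = R0 / sqrt(1 + (2*pi*f*tau)^2) for a single-pole detector."""
    return r0 / math.sqrt(1.0 + (2.0 * math.pi * f * tau) ** 2)

def cutoff_frequency(tau):
    """Frequency where |Rv|^2 drops to half its dc value: f_c = 1/(2*pi*tau)."""
    return 1.0 / (2.0 * math.pi * tau)

tau = 1e-3                 # 1-ms time constant
fc = cutoff_frequency(tau)  # ~159 Hz
```

Choosing a chopping frequency well below f_cutoff keeps the measured responsivity close to its dc value R0.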


Spectral Responsivity

Responsivity depends on the wavelength of the incident radiation beam; thus the spectral response of a detector can be specified in terms of its responsivity as a function of wavelength. The spectral responsivity R(λ, f) is the output signal response to monochromatic radiation incident on the detector, modulated at frequency f. It determines the amplifier gain required to bring the detector output up to acceptable levels. To measure the spectral responsivity of a detector, a tunable narrowband source is required.

In energy-derived units, the spectral responsivity of a thermal detector is independent of wavelength (i.e., 1 W of radiation produces the same temperature rise at any spectral line). Therefore, its spectral response is limited by the spectral properties of the window material placed in front of the detector.

In a photon detector, the ideal spectral responsivity is linearly proportional to the wavelength:

R_v,e(λ) = R_v,e(λ_cutoff) · λ/λ_cutoff.

Photons with λ > λ_cutoff are not absorbed by the detector material, and are therefore not detected.

The long-wavelength cutoff is the longest wavelength detected by a sensor made of a material with a certain energy gap, and is given by

λ_cutoff = hc/E_gap.

An expression for the energy gap in electron-volt units is

E_gap[eV] = (hc/λ_cutoff) · (1 eV / 1.6 × 10⁻¹⁹ J) = 1.24/λ_cutoff[μm].

The energy gap of silicon is 1.12 eV; therefore, photons with λ > 1.1 μm are not detected.
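The 1.24/E_gap rule of thumb is a one-line conversion (function names are illustrative):

```python
def cutoff_wavelength_um(e_gap_ev):
    """Long-wavelength cutoff of a photon detector: lambda_c [um] = 1.24 / Egap [eV]."""
    return 1.24 / e_gap_ev

def energy_gap_ev(lam_cutoff_um):
    """Inverse relation: Egap [eV] = 1.24 / lambda_c [um]."""
    return 1.24 / lam_cutoff_um

lam_si = cutoff_wavelength_um(1.12)  # silicon, Egap = 1.12 eV -> ~1.1 um
```

The same relation shows why LWIR detection out to 12 μm requires a narrow-gap material of roughly 0.1 eV.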


Blackbody Responsivity

Blackbody responsivity R(T, f) is interpreted as the output produced in response to a watt of input optical radiation from a blackbody at temperature T, modulated at electrical frequency f. Since a blackbody source is widely available and rather inexpensive compared to a tunable narrowband source, it is more convenient to measure R(T) and calculate the corresponding R(λ).

For a blackbody that produces a spectral flux φ_λ(λ), the detector output voltage is calculated from the integral

v_out,det = ∫₀^λcutoff φ_λ(λ) R_v(λ) dλ  [volts],

which determines the contribution to the detector output in those regions where the spectral flux and the voltage spectral responsivity overlap.

When measuring blackbody responsivity, the radiant power on the detector contains all wavelengths of radiation, independent of the spectral response curve of the detector. Using the Stefan-Boltzmann law and the basic radiometric principles studied previously, the blackbody responsivity is

R(T) = v_out,det/φ_e = [ ∫₀^λcutoff φ_λ(λ) R_v(λ) dλ ] / [ (σT⁴/π) A_source Ω_det ] = [ ∫₀^λcutoff M_e,λ(λ) R_v(λ) dλ ] / σT⁴.

Substituting the ideal response of a photon detector:

R(T) = [R_v(λ_cutoff)/λ_cutoff] · [ ∫₀^λcutoff M_e,λ(λ) λ dλ ] / σT⁴.

The ratio of R_v(λ_cutoff) to R(T) defines the W-factor:

W(λ_cutoff, T) = R_v(λ_cutoff)/R(T) = σT⁴ / [ (1/λ_cutoff) ∫₀^λcutoff M_e,λ(λ) λ dλ ] = σT⁴ / [ (hc/λ_cutoff) ∫₀^λcutoff M_p,λ(λ) dλ ].

Two standard blackbody temperatures are used to evaluate detectors: (1) 500 K for mid-wave and long-wave infrared measurements, and (2) 2850 K for visible and NIR measurements.


Noise Equivalent Power

Although the responsivity is a useful measurement for predicting the signal level for a given irradiance, it gives no indication of the minimum radiant flux that can be detected. In other words, it does not consider the amount of noise at the output of the detector, which ultimately determines the SNR.

The ability to detect small amounts of radiant energy is inhibited by the presence of noise in the detection process. Since noise produces a random fluctuation in the output of a radiation detector, it can mask the output produced by a weak optical signal. Noise thus sets limits on the minimum input spectral flux that can be detected under given conditions.

One convenient description of this minimum detectable signal is the noise equivalent power (NEP), defined as the radiant flux necessary to give an output signal equal to the detector noise. In other words, it is the radiant power φ_e incident on the detector that yields an SNR = 1, and it is expressed as the rms noise level divided by the responsivity of the detector:

NEP = v_n/R_v = v_n/(v_sig/φ_sig) = φ_sig/SNR  [watt],

where v_n denotes the rms noise voltage produced by the radiation detection system. A smaller NEP implies better sensitivity.

The disadvantage of using the NEP to describe detector performance is that it does not allow a direct comparison of the sensitivity of different detector mechanisms or materials, because of its dependence on both the square root of the detector area and the square root of the electronic bandwidth. A descriptor that circumvents this problem is D* (pronounced "dee star"), which normalizes the inverse of the NEP to a 1-cm² detector area and 1 Hz of noise bandwidth.
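Given a measured rms noise voltage and voltage responsivity, the NEP follows directly (the example numbers are arbitrary):

```python
def nep(v_noise_rms, responsivity_v):
    """Noise equivalent power: the flux giving SNR = 1, NEP = v_n / Rv [W]."""
    return v_noise_rms / responsivity_v

# A detector with Rv = 1e4 V/W and 2 uV rms noise:
p_min = nep(2e-6, 1e4)  # 2e-10 W = 200 pW minimum detectable power
```

Equivalently, any incident flux divided by the resulting SNR returns the same NEP, since SNR scales linearly with flux in this regime.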


Specific or Normalized Detectivity

The NEP is a situation-specific descriptor useful for design purposes, but it does not allow direct comparison of the sensitivity of different detector mechanisms or materials. The specific or normalized detectivity (D*) is often used to specify detector performance. It normalizes out the noise-equivalent bandwidth and the area of the detector; however, in order to predict the SNR, the sensor area and bandwidth must be chosen for the particular application.

D* is independent of detector area and electronic bandwidth, because the NEP is directly proportional to the square root of these parameters. D* is directly proportional to the SNR as well as to the responsivity:

D* = √(A_d Δf)/NEP = [√(A_d Δf)/φ_d] · SNR = [√(A_d Δf)/v_n] · R_v  [cm·√Hz/watt].

Plots of spectral D* for photon detectors have the same linear dependency on λ:

D*(λ) = D*_peak(λ_cutoff) · λ/λ_cutoff.

The same W-factor applies between the peak D*(λ) and D*(T).

The cutoff value of D*(λ) is defined as the peak spectral detectivity D*_peak(λ_cutoff), and corresponds to the largest potential SNR. Optical radiation incident on the detector at a wavelength shorter than λ_cutoff has D*(λ) reduced from D*_peak(λ_cutoff) in proportion to the ratio λ/λ_cutoff.

Unlike the NEP, this descriptor increases with the sensitivity of the detector. Depending on whether the NEP is spectral or blackbody, D* can also be either spectral or blackbody. D*(λ, f) is the detector's SNR when 1 W of monochromatic radiant flux (modulated at f) is incident on a 1-cm² detector area, within a noise-equivalent bandwidth of 1 Hz. The blackbody D*(T, f) is the signal-to-noise output when 1 W of blackbody radiant power (modulated at f) is incident on a 1-cm² detector, within a noise-equivalent bandwidth of 1 Hz.


Photovoltaic Detectors or Photodiodes

In photovoltaic (PV) detectors, more commonly called photodiodes, the optical radiation is absorbed at a PN junction, producing an output current or voltage.

The photodiode equation is given by

i = i_diode − i_ph = i0[exp(qv/kT) − 1] − ηqφ_p,

where i_ph is the photogenerated current, i0 is the dark current, q is the charge of an electron, v is the voltage, k is Boltzmann's constant, and T is the temperature.

The photodiode has different electro-optical characteristics in each quadrant of its i-v curve. The most common operating points are open circuit, reverse bias, and short circuit.

Open circuit: no current flows; setting i = 0 in the photodiode equation gives the open-circuit voltage v_oc = (kT/q) ln(1 + ηqφ_p/i0).

Reverse bias: the depletion region widens and the junction capacitance C decreases; the smaller RC product increases the detector's response speed.

Short circuit: the voltage across the photodiode is zero, and i_ph is forced to flow into an electrical short circuit:

i = i_ph = ηqφ_p = ηqE_p A_d.


Sources of Noise in PV Detectors

The sources of noise in a PV detector are:

1. shot noise due to the dark current;
2. shot noise due to the signal and background photon flux;
3. Johnson noise due to the detector resistance;
4. Johnson noise due to the load resistors;
5. 1/f noise associated with the current flow;
6. preamplifier current noise; and
7. preamplifier voltage noise.

The noise expression for the photodiode is then

in² = 4qi0Δf + 2q²ηφp,sigΔf + 2q²ηφp,bkgΔf + 4kΔf(Td/Rd + Tf/Rf) + β0iΔf/f + ipa² + vpa²/(Rf ‖ Rd)².

The preamplifier noise can be made negligible by using low-noise transistors or by cooling the preamplifier to cryogenic temperatures. The 1/f noise is almost zero if the device is operating in either the open-circuit or reverse-bias mode. It is assumed that the dark current is negligible compared to both the background and signal currents (i.e., i0 ≪ isig + ibkg), and that BLIP conditions are in effect. Under these conditions, the rms shot-noise current is approximately

in ≅ √(2q²ηφp,bkgΔf) = √(2q²ηφe,bkg(λ/hc)Δf).

Recalling that the peak signal current generated by a photovoltaic detector is

isignal = φp,sigηq = φe,sigηq(λ/hc),

the SNR for a photovoltaic detector is then

SNRPV = φe,sigηq(λ/hc) / √(2q²φe,bkgη(λ/hc)Δf).

Setting SNRPV = 1, the spectral NEPPV,BLIP is obtained:

NEPPV,BLIP(λ) = √[(hc/λ)·(2φe,bkgΔf/η)].


Expressions for D∗PV,BLIP, D∗∗PV,BLIP, and D∗PV,JOLI

The spectral D∗PV,BLIP for a PV detector is obtained from the definition of D∗ in terms of NEP:

D∗PV,BLIP(λ) = √(AdΔf)/NEPBLIP = (λ/hc)·√(η/(2Ebkg)).

If the background is not monochromatic, Ebkg needs to be integrated over the sensor response from 0 to λcutoff:

D∗PV,BLIP(λ) = (λcutoff/hc)·√[η / (2∫₀^λcutoff Ebkg(λ)dλ)] = (F/#·λcutoff/hc)·√[2η / (π∫₀^λcutoff Lbkg(λ)dλ)],

where Ebkg = πLbkg sin²θ ≅ πLbkg[1/(2F/#)]².

D∗BLIP increases when increasing the F/#, which illuminates the detector with a smaller cone of background radiation. D∗∗ normalizes out the dependence on sinθ, allowing a comparison of detectors normalized to a hemispherical background:

D∗∗PV,BLIP(λ) = sinθ · D∗PV,BLIP(λ).
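The F/# dependence and the D∗∗ normalization can be verified numerically. (A Python sketch; the radiance, wavelength, and η values are illustrative assumptions, not data from the text.)

```python
import math

H_PLANCK = 6.626e-34  # [J*s]
C_LIGHT = 2.998e8     # [m/s]

def bkg_irradiance(l_bkg, f_number):
    """E_bkg ~= pi * L_bkg * [1/(2*F#)]^2 -- background seen through the cone."""
    return math.pi * l_bkg * (0.5 / f_number) ** 2

def d_star_pv_blip(wavelength, e_bkg, eta=0.6):
    """D* = (lambda/hc) * sqrt(eta / (2*E_bkg))."""
    return wavelength / (H_PLANCK * C_LIGHT) * math.sqrt(eta / (2.0 * e_bkg))

def d_double_star(d_star, f_number):
    """D** = sin(theta) * D*, with sin(theta) ~= 1/(2*F#)."""
    return d_star * 0.5 / f_number
```

Doubling the F/# quarters the background irradiance and therefore doubles D∗BLIP, while D∗∗ comes out independent of the F/#, as the normalization intends.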

Johnson-limited noise performance (JOLI) occurs in deep-space applications, where the background flux is so low that the shot noise becomes negligible compared to the Johnson noise:

2q²η(φp,sig + φp,bkg)Δf ≪ 4kΔf(Td/Rd + Tf/Rf),

where Td and Rd are the temperature and resistance of the detector, respectively. The SNR of the PV detector under this condition becomes:

SNRPV,JOLI = qηφe,sig(λ/hc) / √[4kΔf(Td/Rd + Tf/Rf)] ≅ qηφe,sig(λ/hc) / √(4kΔf·Td/Rd)   for Rf ≫ Rd.

Setting SNRPV,JOLI = 1, converting the noise-equivalent photon flux to NEPPV,JOLI, and substituting the NEP expression into the definition of D∗ yields:

D∗PV,JOLI = (λqη/(2hc))·√(RdAd/(kTd)).


Photoconductive Detectors

Shot noise occurs in diodes and other devices with a potential-energy barrier, for which generation is a random process while recombination is deterministic. In devices without junctions or other potential barriers, such as photoconductive (PC) detectors, both generation and recombination are random processes.

Photoconductors respond to light by changing the resistance or conductance of the detector's material:

Rd ∝ 1/φp ⇒ dRd ∝ dφp/φp²,

where φp is the photon flux.

Photoconductive detectors have no junction; therefore, there is no intrinsic field. They cannot operate under open-circuit conditions and do not generate a voltage independently (i.e., a bias current must be used). In order to detect the change in resistance, a biasing circuit with an applied field must be utilized.

The output voltage is given by

vout = [Rd/(Rd + RL)]·vbias,

where a small change in the detector resistance produces a change in signal voltage that is directly proportional to the change in the photon flux incident on the photoconductive detector:

dvout = [RL·dRd/(Rd + RL)²]·vbias ∝ [RL·vbias/(Rd + RL)²]·(dφp/φp²).


Sources of Noise in PC Detectors

The sources of noise in photoconductors are:

1. 1/f noise associated with the current flow;
2. generation-recombination (G-R) noise;
3. Johnson noise due to the detector resistance;
4. Johnson noise due to the load resistors;
5. preamplifier current noise; and
6. preamplifier voltage noise.

The noise expression for the photoconductive detector is then

in² = 4q²ηEpAdG²Δf + 4q²gthG²Δf + 4kΔf(Td/Rd + Tf/Rf) + β0iΔf/f + ipa² + vpa²/(Rf ‖ Rd)²,

where gth is the thermal generation rate of carriers and G is the photoconductive gain. G is proportional to the number of times an electron can transit between the detector electrodes in its lifetime (i.e., excess signal electrons flowing through the PC). It depends on the detector size, material, and doping, and can vary over 1 < G < 10⁵. If G < 1, the electron does not reach the electrode before recombining. PC detectors are usually cooled cryogenically, in which case the thermal term in the generation-recombination noise is negligible. The BLIP- and Johnson-limited figures of merit for a photoconductor are defined in the table.

BLIP:
  NEPPC,BLIP = [2hc/(λG)]·√(EbkgAdΔf/η)
  D∗PC,BLIP = [λG/(2hc)]·√(η/Ebkg)

JOLI:
  NEPPC,JOLI ≡ ij/Ri,PC = √(4kΔfT/Req) / [(λqη/hc)·G]
  D∗PC,JOLI = [λqηG/(2hc)]·√(ReqAd/(kT))

where T ≡ Td ≈ TL and Req = Rd ‖ RL.

For a given photon flux and a photoconductive gain of unity, the generation-recombination noise is larger than the shot noise by a factor of √2.


Pyroelectric Detectors

A pyroelectric detector is comprised of a slice of ferroelectric material with metal electrodes on opposite faces, perpendicular to the polar axis. This material possesses an inherent electrical polarization whose magnitude is a strong function of temperature. The rate of change of electric polarization with respect to temperature, dP/dT, is defined as the pyroelectric coefficient p at the operating temperature.

A change in irradiance causes a temperature variation, which expands or contracts the crystal lattice, changing the polarization of the material. This change in polarization (i.e., a realignment of the electric dipole concentration) appears as a charge on the capacitor formed by the pyroelectric material and its two electrodes, and this charge in turn produces a voltage. Thus, there is an observable voltage in the external circuit as long as the detector experiences a change in irradiance (i.e., it can only be used in an ac mode). In order to detect these small charges, low-noise high-impedance amplifiers are necessary. The temperature difference ΔT between the pyroelectric element and the heat sink is related to the incident radiation by the heat-balance differential equation:

H·dΔT/dt + K·ΔT = εφe,

where H is the heat capacity, K is the thermal conductance, ε is the emissivity of the surface, and φe is the flux in energy-derived units. Since the radiation is modulated at an angular frequency ω, it can be expressed as an exponential function φe = φe,o·e^(jωt), in which case the heat-balance equation provides the following solution:

|ΔT| = εφe,o / [K·√(1 + ω²τth²)] ⇒ τth ≡ H/K = RthCth,


where τth is the thermal time constant, Rth is the thermal resistance, and Cth is the thermal capacitance. The current flowing through the pyroelectric detector is given by:

i = Ad·p·dΔT/dt.

Thus the current responsivity is defined as:

Ri = |i/φe,o| = Ad·p·ε·ω / [K·√(1 + ω²τth²)] = Ad·Rth·p·ε·ω / √(1 + ω²τth²).

The current multiplied by the parallel electrical impedance yields the detector voltage:

v = i·Rd / (1 + jωRdCd),

and the voltage responsivity is simply:

Rv = |v/φe| = Ad·Rd·Rth·p·ε·ω / [√(1 + ω²τth²)·√(1 + ω²(RdCd)²)].

At high frequencies the voltage responsivity is inversely proportional to frequency, while at low frequencies it is modified by the electrical and thermal time constants. The dominant noise in pyroelectric detectors is most commonly Johnson noise, in which case:

vjohnson = √(4kTRdΔf).

Both the NEP and D∗ may then be calculated:

NEP = vjohnson/Rv = √(4kTΔf)·√(1 + ω²τth²)·√(1 + ω²(RdCd)²) / (Ad√Rd·Rth·p·ε·ω),

D∗ = √(Ad³Rd)·Rth·p·ε·ω / [√(4kT)·√(1 + ω²τth²)·√(1 + ω²(RdCd)²)].


Bolometers

Thermal detectors that change their electrical resistance as a function of temperature are called bolometers or thermistors. Semiconductor bolometer elements are thin chips made by sintering a powdered mixture of oxides of manganese, nickel, and/or cobalt, which have temperature coefficients of resistance on the order of 4.2% per degree Celsius. These chips are mounted on a dielectric substrate that is, in turn, mounted on a metallic heat sink to provide a high speed of response and to dissipate the bias-current power. After assembly, the sensitive area is blackened to improve its emissivity for IR radiation.

The resistance of a semiconductor varies exponentially with temperature:

Rd = Ro·e^[C(1/T − 1/To)],

where Ro is the ambient resistance at a nominal temperature To, and C is a material characteristic (C = 3400 K for a mixture of manganese, nickel, and cobalt). The resistance change that results from the optically induced temperature change is obtained by differentiation, yielding a temperature coefficient of:

α = (1/Rd)·(dRd/dT) = −C/T².

When infrared radiation is absorbed into the bolometer,its temperature rises slightly causing a small decrease inresistance. In order to produce an electrical current fromthis change in resistance, a bias voltage must be appliedacross the bolometer. This is accomplished by interfacingtwo identical bolometer chips into a bridge circuit.


The chip exposed to radiation is called the active chip, while the other is shielded from input radiation and is called the compensation chip. Setting up the expression that equates the heat inflow and heat outflow:

H·dΔT/dt + K·ΔT = εφe.

Assuming that the radiant power incident on the active device is periodic, φe = φe,o·e^(jωt), the heat-balance differential equation provides the following solution:

|ΔT| = εφe,o / [K·√(1 + ω²τth²)] ⇒ τth ≡ H/K.

The radiation-induced change in resistance is then

dRd/Rd = αΔT = αεφe,o / [K·√(1 + ω²τth²)].

The bias current flowing through the active bolometer produces a change in the output voltage vA given by

dvA = ibias·dRd = (vbias/2)·(dRd/Rd).

Therefore, the voltage responsivity becomes

Rv = αεvbias / [2K·√(1 + ω²τth²)].

Note that if K decreases, Rv increases; however, τth also increases, yielding a lower cutoff frequency. At small bias voltages the bolometer obeys Ohm's law; however, as the bias voltage is increased, self-heating of the chip due to the bias current causes a decrease in resistance, which further increases the bias current. Eventually there is a point where the detector burns out unless the current is limited in some manner. Since the primary noise in a bolometer is Johnson noise, the NEP and D∗ are stated as

NEP = 4K·√(1 + ω²τth²)·√(kTRdΔf) / (αεvbias),   D∗ = αεvbias·√Ad / [4K·√(1 + ω²τth²)·√(kTRd)].


Bolometers: Immersion Optics

The detectivity of a bolometer is inversely proportional to the square root of its area; therefore, it is desirable to use an immersion lens with a high refractive index nlens to minimize the size of the sensing area. By increasing nlens the detector area is decreased, while the size of the entrance pupil and the ray angle in object space remain constant. However, the limit to such compression is set by the optical invariant and Abbe's sine condition:

n²lens·Adet·sin²θ′ = n²o·Aenp·sin²θ.

This hemispherical immersion lens is used with the bolometer located at the center of curvature. This is an aplanatic condition; thus no spherical aberration or coma is produced by the lens. A hyperhemispheric lens can alternatively be used, which becomes aplanatic when the distance between the detector and the center of curvature is noR/nlens. In a germanium hemispherical immersion lens, where nlens = 4, the detector area is reduced by a factor of 16, which theoretically increases the detectivity by a factor of 4. This full gain cannot be achieved in practice; however, if the immersion lens is antireflection coated, D∗ improves by a factor of ∼3.5. An adhesive layer glues the chip and the immersion lens together. The material for this layer must have good infrared transmission, high-quality electrical and thermal insulation properties, a high dielectric strength to prevent breakdown under the bias voltage, and a high refractive index to optically match the immersion lens to the bolometer. Arsenic-doped amorphous selenium or Mylar may be used. The index of refraction of selenium is 2.5, while that of the bolometer materials is ∼2.9. When a germanium immersion lens is used, total internal reflection (TIR) occurs at the selenium interface when the angle of incidence exceeds 38 deg. Suitable optical design techniques must be used to avoid this situation.


Thermoelectric Detectors

A thermoelectric detector or thermocouple is comprised of two junctions between two dissimilar conductors having a large difference in their thermoelectric power (i.e., a large Seebeck coefficient S). The hot junction is an efficient absorber exposed to incident radiation, while the cold junction is purposely shielded.

To obtain a large output, the electrical conductivity σ must be large while the thermal conductivity K and the Joule heat loss are minimized. This is achieved by maximizing the coefficient σS²/K, found in some heavily doped semiconductors.

The voltage output between the two dissimilar materials is increased by connecting numerous thermocouples in series; such a device is called a radiation thermopile. The responsivity of a thermopile is given by

Rv = εSN / [K·√(1 + ω²τth²)],

where N is the number of thermocouples in electrical series.

The thermopile device may then be interfaced to an operational-amplifier circuit to increase the voltage to usable levels. Thin-film techniques enable chip thermopiles to be fabricated as complex arrays with good reliability.


Raster Scan Format: Single-Detector

Scanning mechanisms are often necessary in infrared systems to cover a 2D FOV with a reasonable number of detector elements, or when a substantial FOV is required. The many applications that need scanning usually depend on opto-mechanical elements to direct and focus the infrared radiation. There are two basic types of scanners: the preobjective or parallel-beam scanner, and the post-objective or converging-beam scanner.

In parallel scanning, the scanning element is out in front of the final image-forming element and must be located at the entrance pupil of the optical system. The converging scanner has the moving element between the final optical element and the image, and works on axis.

There are three basic scan formats: raster, parallel, and the staring focal plane array.

In the raster scan mechanism, a single detector is scanned in two orthogonal directions in a 2D raster across the FOV. The moving footprint sensed by the detector is called the instantaneous field-of-view (IFOV), and the specific time required for the footprint to pass across the detector is called the dwell time (τdwell).

One-hundred-percent scan efficiency (ηscan) is assumed here. Scan inefficiencies include overlap between scan lines, overscanning of the IFOV beyond the region sensed, and the finite retrace time to move the detector to the next line.


The number of horizontal lines that make up a 2D scene is given by

nlines = VFOV/VIFOV.

The time taken to scan one particular line is

τline = τframe/nlines = τframe/(VFOV/VIFOV).

The dwell time is the line time divided by the number of horizontal pixels contained in that line:

τdwell = τline/(HFOV/HIFOV).

The scan velocity and the dwell time can be written as

vscan = HFOV/τline ⇒ τdwell = HIFOV/vscan.

The dwell time can also be interpreted as the frame time divided by the total number of pixels within the 2D FOV:

τdwell = τframe/[(VFOV/VIFOV)·(HFOV/HIFOV)] = τframe/npixels,

where the frame time can be found by

τframe = nlines·τline = (VFOV/VIFOV)·(HFOV/vscan).

The electronic bandwidth can be written in terms of the dwell time as

Δf = 1/(2τdwell).

A scanning system that covers the entire FOV with a single detector considerably lowers the duration that the sensing element remains on a particular IFOV, resulting in a higher noise-equivalent bandwidth. A longer dwell time is obtained using a multiple-detector system. In this case, the noise is reduced by the square root of the number of sensing elements, thus improving the SNR of the infrared system.
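The timing relations above can be collected into a small calculator. (A Python sketch; the 480-line by 640-pixel frame at 30 Hz used in the example is an illustrative assumption, not a value from the text.)

```python
def raster_scan_parameters(n_v, n_h, frame_time):
    """Single-detector raster scan, with n_v = VFOV/VIFOV lines
    and n_h = HFOV/HIFOV pixels per line."""
    tau_line = frame_time / n_v           # time to scan one line [s]
    tau_dwell = tau_line / n_h            # = tau_frame / n_pixels [s]
    bandwidth = 1.0 / (2.0 * tau_dwell)   # electronic bandwidth [Hz]
    return tau_line, tau_dwell, bandwidth

# Example: 480 x 640 pixels at a 30-Hz frame rate
tau_line, tau_dwell, bandwidth = raster_scan_parameters(480, 640, 1.0 / 30.0)
```

For this example the dwell time comes out near 0.11 μs and the bandwidth near 4.6 MHz, illustrating how a single-detector scanner drives the noise-equivalent bandwidth up.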


Multiple-Detector Scan Formats: Serial Scene Dissection

Serial scanning uses multiple sensors along the scan direction, in such a way that each point in the image is scanned by all the detectors.

The number of detectors used in a practical system varies between two and ten.

The main mechanism used to implement a serial scan is called time delay and integration (TDI). TDI requires a synchronized delay line (typically a charge-coupled device) to move the signal charge along with the optical-scan motion. A particular IFOV is viewed nd times, nd being the number of detectors in series dissecting the overall FOV of the scanned system. The output charge from each detector is added as the serial scan moves on. As a result, the amplitude of the added charge signal is incremented nd times, while the uncorrelated noise, since it adds in quadrature, grows only by a factor of √nd. Thereby, the overall SNR of the system is improved by the square root of the number of sensor elements.

Advantage: the nonuniformity of the detectors is improved.

Disadvantages: high mirror speeds are necessary to cover the 2D FOV, and the TDI circuit increases the weight and power of the electronics subsystem.

Assumption: the √nd increment in the SNR assumes that all the detectors are identical in noise and responsivity level. The practical result is around 10% short of the ideal.


Parallel Scene Dissection

Parallel scanning uses multiple sensors in cross-scan di-rections. For a given fixed frame time, a slower scan is usedsince multiple vertical lines are covered at once.If nd < VFOV/VIFOV, a 2D raster is required, with the scanformat such that any given detector will drop nd × VIFOV.

If there are sufficient detectorsto cover a full line only hori-zontal scan motion is required.Advantage: Lower mirrorspeeds are required.Disadvantage: D∗ variationsproduce image nonuniformi-ties.

In second-generation forward-looking infrared (FLIR) imagers, TDI/parallel scanning is used to perform 2:1 interlacing. A full line is stored and summed with the next line. Here, TDI is applied along the scan direction, and all the detectors are preamplified, processed, and displayed.

For a system with a fixed frame time, an nd-sensor system has a line time of

τline = τframe/(nlines/nd) = τframe·nd/(VFOV/VIFOV),

where a longer dwell time is achieved by a factor of nd, yielding

τdwell = τline/(HFOV/HIFOV) = τframe·nd/[(VFOV/VIFOV)·(HFOV/HIFOV)] = τframe·nd/npixels.

The bandwidth decreases inversely with nd, and the noise is proportional to the square root of the bandwidth, yielding

Δf = 1/(2τdwell) = npixels/(2τframe·nd) ⇒ vn ∝ √Δf = √[npixels/(2τframe·nd)].

Therefore, the overall SNR increases by SNR ∝ √nd .
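The nd scaling can be made concrete with a few helper functions. (A Python sketch; the frame time and pixel count in the assertions are hypothetical example values.)

```python
def dwell_time(frame_time, n_pixels, n_detectors=1):
    """tau_dwell = tau_frame * n_d / n_pixels  [s]."""
    return frame_time * n_detectors / n_pixels

def noise_bandwidth(frame_time, n_pixels, n_detectors=1):
    """delta_f = n_pixels / (2 * tau_frame * n_d)  [Hz]."""
    return n_pixels / (2.0 * frame_time * n_detectors)

def snr_gain(n_detectors):
    """SNR improvement over a single detector: sqrt(n_d)."""
    return n_detectors ** 0.5
```

Adding detectors in the cross-scan direction lengthens the dwell time by nd, shrinks the bandwidth by 1/nd, and improves the SNR by √nd, matching the relations above.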


Staring Systems

Staring systems cover the 2D FOV in its entirely, so thenumber of detector elements is equal to resolution elementsin the image. As a result, the dwell time is equal to theframe time of the system increasing the SNR significantly:

τdwell = τframe.

Each detector reduces thebandwidth because of the in-crement in the dwell time, sothe SNR increases by a factorof

√nd.

Nonuniformities and deadpixels are implicit in a staringarray.The SNR square root depen-

dence can be used to compare the potential performance ofthe various system configurations. For example, a 320×256staring array produces a SNR that is higher by a factor of25.3 in comparison to a parallel scanned system with a lin-ear array of 128 detectors.
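The quoted comparison follows directly from the √nd dependence and can be reproduced in one line of Python (the 320 × 256 and 128-detector configurations are the ones named in the text):

```python
def staring_vs_scanning_snr(n_staring, n_scanned):
    """SNR ratio from the sqrt(n_d) dependence: sqrt(n_staring / n_scanned)."""
    return (n_staring / n_scanned) ** 0.5

# 320 x 256 staring FPA vs a 128-element parallel-scanned linear array
gain = staring_vs_scanning_snr(320 * 256, 128)  # ~25.3
```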

Staring Systems | Scanning Systems
Good SNR | Low SNR
No moving parts | Moving parts
Uniformity problems | Good uniformity
More complex electronically | More complex mechanically
Under-sampling problems | Prone to line-of-sight jitter
More prone to aliasing | Need more flux for a given SNR
Lower bandwidth for a given τframe | Higher bandwidth for a given τframe
Lower D∗ for the array | Good D∗ for individual detectors
Expensive | Cheaper


Search Systems and Range Equation

Search systems are also called detection, warning, or go/no-go systems. Their intent is to detect and locate a target that has a prescribed minimum intensity within a prescribed search volume. They operate on an unresolved point-source basis (the target does not fill the IFOV of the system); therefore, the spectral radiant intensity [W/ster·μm] is the variable of interest. The principal concern is to assess the minimum SNR required for specified values of the probability of correct detection while minimizing the false-alarm rate. The result is a statistical decision about the existence or state of a target phenomenon within the search volume. Linear fidelity is unimportant for these systems because they do not produce an image. The objective is to establish the maximum range at which an infrared search system can detect or track a point-source target. The range equation states the distance at which a given point source can be detected and establishes the design tradeoffs available.

The amount of flux reaching the detector as a function of the radiant intensity is

φd = I·Ωopt·τopt·τatm = I·(Aopt/r²)·τopt·τatm,

where τopt is the optical transmittance and τatm is the atmospheric transmittance between the point source and the search system. The signal voltage from the detector is given by

vs = Rv·φd = Rv·I·(Aopt/r²)·τopt·τatm.

The SNR is found by dividing each side of the equation by the rms value of the noise from the detector, yielding

SNR = vs/vn = (Rv/vn)·I·(Aopt/r²)·τopt·τatm.


Using the definition of NEP and recasting in terms of D∗:

SNR = (1/NEP)·I·(Aopt/r²)·τopt·τatm = [D∗/√(AdΔf)]·I·(Aopt/r²)·τopt·τatm,

where Δf is the noise-equivalent bandwidth. Recasting in terms of the F/# and the IFOV, and solving for the range:

r = √[ I·D∗/(SNR·√Δf) · πD²opt/(4f·√Ωd) · τopt·τatm ] = √[ I·D∗/(SNR·√Δf) · πDopt/(4F/#·√Ωd) · τopt·τatm ],

where Dopt is the diameter of the entrance pupil and f is the effective focal length of the optics. When the range equation is used to find the maximum detection or tracking range, the SNR is the minimum required for the system to work appropriately.
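The range equation in this F/# form is easy to exercise numerically. (A Python sketch; all parameter values in the assertions are hypothetical, chosen only to show the scaling behavior.)

```python
import math

def detection_range(intensity, d_star, snr_min, bandwidth,
                    d_opt, f_number, ifov_sr, tau_opt, tau_atm):
    """r = sqrt[ I*D*/(SNR*sqrt(df)) * pi*D_opt/(4*F#*sqrt(IFOV)) * tau_opt*tau_atm ]."""
    return math.sqrt(intensity * d_star / (snr_min * math.sqrt(bandwidth))
                     * math.pi * d_opt / (4.0 * f_number * math.sqrt(ifov_sr))
                     * tau_opt * tau_atm)
```

The square-root structure is visible at once: quadrupling the target intensity doubles the range, while quadrupling the required SNR halves it.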

To analyze how the various factors affect the detection range, the range equation is regrouped into terms for the optics; the target and atmospheric transmittance; the detector; and the signal processing, respectively, yielding

r = √[πDoptτopt/(4F/#)] · √(Iτatm) · √(D∗) · √[1/(SNR·√(ΩdΔf))].

In the first term, the diameter, the speed of the optics, and the transmittance characterize the optics. The first two terms are written separately to facilitate their independent solutions in the tradeoff process. The range is directly proportional to the square root of Dopt: the bigger the optics, the more flux is collected. However, scaling up the entrance pupil changes the F/# of the system, and requires a corresponding increase in both the focal length and the linear size of the detector to maintain the original FOV.

The second term contains the radiant intensity of the target and the transmittance along the line of sight. The amount of attenuation caused by the atmosphere, and the shot-noise contribution from the background, can be optimized by choosing the best spectral band for the specific optical system. For example, if the emitting flux from the target is high, the spectral band that yields the best contrast is selected; if the flux is low, the spectral band that produces the optimum SNR is selected.


The third factor pertains to the characteristics of the detector. The range is proportional to the square root of the normalized detectivity. Therefore, an increment in the detection range can be achieved by enhancing the sensitivity of the system using serial TDI approaches, or by effectively shielding the detector from background radiation. Also notice that, since the radiation is collected from a point source, increasing the area of the detector reduces the SNR of the system.

The final factor describes the range in terms of the signal-processing parameters. It shows that decreasing either the FOV or the noise-equivalent bandwidth slowly increases the range because of the inverse fourth-root dependence. The product ΩdΔf represents the angular scan rate in steradians per second; increasing the integration time of the system averages away random noise, resulting in longer detection ranges. The SNR in this type of system is interpreted as the minimum SNR required to reach a detection decision with an acceptable degree of certainty. For example, if the search system requires a higher SNR to improve the probability of correct detection, the system will have a shorter range.

The range equation for BLIP search systems is obtained by substituting D∗BLIP for D∗, which in photon-derived units translates to

rBLIP = √[ πDoptτopt/4 · Iτatm · (λ/hc)·√(2η/(πLbkg)) · 1/(SNR·√(ΩdΔf)) ].

Note that the F/# term has dropped out of the equation, so a BLIP search system is influenced by the diameter of its optics but not by its speed.

There are several design concepts that can be used to decrease the background noise: design the detector cold stop so that it produces 100% cold-stop efficiency; use a spectral filter to limit the spectral passband to the region most favorable to the corresponding target flux; avoid the wavelengths at which the atmosphere absorbs strongly; and minimize the emissivity of all the optical and opto-mechanical components, cooling, if necessary, the elements seen within the detector's angular subtense.


Noise Equivalent Irradiance

The noise equivalent irradiance, better known as NEI, is one of the main descriptors of infrared warning devices. It is the flux density at the entrance pupil of the optical system that produces an output signal equal to the system's noise (i.e., SNR = 1). It is used to characterize the response of an infrared system to a point-source target. The irradiance from a point-source target is given by

E = φ/Aopt = I·Ωopt/Aopt = I/r².

Substituting the range expression and setting the SNR equal to 1:

NEI = E|SNR=1 = √(AdΔf)/(Aopt·D∗·τopt·τatm) = NEP/(Aopt·τopt·τatm).

Recasting in terms of the F/# and the IFOV:

NEI = 4F/#·√(ΩdΔf)/(πDopt·D∗·τopt·τatm).

Under BLIP conditions the NEI is independent of the F/#, yielding:

NEIBLIP = [4hc/(λDoptτoptτatm)]·√[Lp,bkgΩdΔf/(2πη)].

The NEI is especially useful when plotted as a function of wavelength. Such a plot defines the irradiance at each wavelength necessary to give a signal output equal to the system's rms noise. It can be interpreted either as an average value over the spectral measuring interval, or as the peak value. Although the NEI has a broader usage in characterizing the performance of an entire system, it may also be used to evaluate the performance of a detector alone. In this case, it is defined as the radiant flux density necessary to produce an output signal equal to the detector noise, and compares the ability of different-sized devices to detect a given irradiance:

NEI = E|SNR=1 = NEP/Ad.


Performance Specification: Thermal-Imaging Systems

A thermal imaging system (TIS) collects, spectrally filters, and focuses the infrared radiation onto a multielement detector array. The detectors convert the optical signals into analog signals, which are then amplified, digitized, and processed for display on a monitor. Its main function is to produce a picture that maps temperature differences across an extended-source target; therefore, radiance is the variable of interest.

Two parameters are measured to completely specify a TIS and produce good thermal imagery: thermal sensitivity and spatial resolution. Spatial resolution relates to how small an object can be resolved by the thermal system, while thermal sensitivity concerns the minimum temperature difference discernible above the noise level.

Modulation transfer function (MTF): characterizes both the spatial resolution and the image quality of an imaging system in terms of its spatial-frequency response. The MTF is a major parameter used for system specification and design analysis.

Noise-equivalent temperature difference (NETD): measures the thermal sensitivity of a TIS. While the NETD is a useful descriptor that characterizes the detectable target-to-background temperature difference, it ignores the spatial resolution and image quality of the system.

Minimum resolvable temperature difference (MRTD): a subjective measurement that depends on the infrared imaging system's spatial resolution and thermal sensitivity. At low spatial frequencies the thermal sensitivity is more important, while at high spatial frequencies the spatial resolution is the dominant factor. The MRTD combines both the thermal sensitivity and the spatial resolution into a single measurement. The MRTD is not an absolute value but a perceivable temperature differential relative to a given background.

Johnson criteria: another descriptor that accounts for both the thermal sensitivity and spatial resolution. This technique provides a practical way of describing real targets in terms of simpler square-wave patterns.


MTF Definitions

Spatial frequency is defined as the reciprocal of the crest-to-crest distance (i.e., the spatial period) of a sinusoidal wavefront used as a basis function in the Fourier analysis of an object or image. It is typically specified in [cycles/mm] in the image plane, and as an angular spatial frequency in [cycles/milliradian] in object space. For an object located at infinity, these two representations are related through the focal length f of the image-forming optical system:

ξang,obj [cycles/mrad] = ξimg [cycles/mm] × f [mm]/10³.

The image quality of an optical or electro-optical system is characterized either by the system's impulse response or by its Fourier transform, the transfer function. The impulse response h(x,y) is the 2D image formed in response to an impulse or delta-function object. Because of the limitations imposed by diffraction and aberrations, the image quality produced depends on the wavelength distribution of the source, the F/# at which the system operates, the field angle at which the point is located, and the choice of focus position.

A continuous object f (x,y) is decomposed using the shiftingproperty of delta functions, into a set of point sources, eachwith a strength proportional to the brightness of their orig-inal object at that location. The final image g(x,y) obtainedis the superposition of the individual weighted impulse re-sponses. This is equivalent to the convolution of the objectwith the impulse response:

g(x,y) = f(x,y) ∗∗ h(x,y),

where the double asterisk denotes a 2D convolution.
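The convolution imaging equation can be sketched numerically. The following Python/NumPy fragment (a toy Gaussian impulse response and random object are assumed; FFT-based circular convolution stands in for the ideal linear convolution) blurs an object and checks two expected properties:

```python
import numpy as np

# Sketch of g = f ** h using FFT-based (circular) convolution; a real
# analysis would zero-pad to avoid wrap-around at the edges.
rng = np.random.default_rng(0)
f = rng.random((64, 64))                    # object brightness distribution

# Normalized Gaussian impulse response (toy h(x, y))
y, x = np.mgrid[-32:32, -32:32]
h = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
h /= h.sum()

g = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(np.fft.ifftshift(h))))

# Blurring preserves total flux (h is normalized) and reduces contrast.
print(np.allclose(g.sum(), f.sum()))
print(bool(g.std() < f.std()))
```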

The validity of this requires shift invariance and linearity, a condition called isoplanatism. These assumptions are often violated in practice; however, to preserve the convenience of a transfer-function analysis, the variable that causes nonisoplanatism is allowed to assume a set of discrete values, each with its own impulse response and transfer function. Although h(x,y) is a complete specification of image quality, additional insight is gained by use of the transfer function. A transfer-function analysis considers the imaging of sinusoidal objects rather than point objects. It is more convenient than the impulse-response analysis because the combined effect of two or more subsystems can be calculated by a point-by-point multiplication of the transfer functions, rather than by convolving the individual impulse responses.

Using the convolution theorem of Fourier transforms, the product of the corresponding spectra is given by

G(ξ,η) = F(ξ,η) × H(ξ,η),

where F(ξ,η) is the object spectrum, G(ξ,η) is the image spectrum, and H(ξ,η) is the transfer function, which is the Fourier transform of the impulse response. ξ and η are spatial frequencies in the x and y directions, respectively.

The transfer function H(ξ,η) is normalized to have a unit value at zero spatial frequency. This normalization is appropriate for optical systems because the transfer function of an incoherent optical system is proportional to the 2D autocorrelation of the exit pupil, and the autocorrelation necessarily has its maximum at the origin. In its normalized form, the transfer function H(ξ,η) is referred to as the optical transfer function (OTF), which plays a key role in the theoretical evaluation and optimization of an optical system as a complex function that has both a magnitude and a phase portion:

OTF(ξ,η) = H(ξ,η) = |H(ξ,η)| e^(jθ(ξ,η)).

The absolute value or magnitude of the OTF is the MTF,while the phase portion of the OTF is referred to as thephase transfer function (PTF). The system’s MTF andPTF alter the image as it passes through the system.
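The decomposition of the OTF into MTF and PTF can be illustrated numerically; this sketch (toy Gaussian impulse response assumed) computes the transfer function as the Fourier transform of h(x,y), normalized to unity at zero spatial frequency:

```python
import numpy as np

# Compute OTF, MTF, and PTF from a sampled impulse response h(x, y).
y, x = np.mgrid[-32:32, -32:32]
h = np.exp(-(x**2 + y**2) / (2 * 2.0**2))   # toy impulse response

H = np.fft.fft2(np.fft.ifftshift(h))        # center h at the origin first
OTF = H / H[0, 0]                           # unit value at zero frequency
MTF = np.abs(OTF)                           # magnitude portion
PTF = np.angle(OTF)                         # phase portion

print(np.isclose(MTF[0, 0], 1.0))           # normalized at the origin
print(np.all(MTF <= 1.0 + 1e-12))           # modulation never exceeds 1
```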

For linear-phase-shift-invariant systems, the PTF is of no special interest since it indicates only a spatial shift with respect to an arbitrarily selected origin. An image in which the MTF is drastically altered is still recognizable, whereas large nonlinearities in the PTF can destroy recognizability. PTF nonlinearity increases at high spatial frequencies; since the MTF is small at high spatial frequencies, the effect of the phase nonlinearity is diminished.

The MTF is then the magnitude response of the imaging system to sinusoids of different spatial frequencies. This response can also be defined as the attenuation factor in modulation depth:

M = (Amax − Amin)/(Amax + Amin),

where Amax and Amin refer to the maximum and minimumvalues of the waveform that describe the object or image inW/cm2 versus position. The modulation depth is actually ameasure of visibility or contrast. The effect of the finite-sizeimpulse response (i.e., not a delta function) of the opticalsystem is to decrease the modulation depth of the imagerelative to that in the object distribution. This attenuationin modulation depth is a function of position in the imageplane. The MTF is the ratio of image-to-object modulationdepth as a function of spatial frequency:

MTF(ξ,η) = Mimg(ξ,η)/Mobj(ξ,η).
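These definitions can be checked with a small numerical sketch (Python; the sinusoidal object and the 0.4 image attenuation are invented for illustration):

```python
import numpy as np

def modulation(wave):
    """Modulation depth M = (Amax - Amin) / (Amax + Amin)."""
    return (wave.max() - wave.min()) / (wave.max() + wave.min())

xi = 5.0                                  # toy spatial frequency
xpos = np.linspace(0.0, 1.0, 1000)
obj = 1.0 + 1.0 * np.cos(2 * np.pi * xi * xpos)    # full-modulation object
img = 1.0 + 0.4 * np.cos(2 * np.pi * xi * xpos)    # attenuated image

# MTF at this frequency: ratio of image to object modulation depth
mtf_at_xi = modulation(img) / modulation(obj)
print(round(mtf_at_xi, 3))
```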

Optics MTF: Calculations

The overall transfer function of an electro-optical system is calculated by multiplying the individual transfer functions of its subsystems. The majority of thermal imaging systems operate with broad spectral band-passes and detect noncoherent radiation. Therefore, classical diffraction theory is adequate for analyzing the optics of incoherent electro-optical systems. The OTF of diffraction-limited optics depends on the radiation wavelength and the shape of the entrance pupil. Specifically, the OTF is the autocorrelation of the entrance pupil function with entrance pupil coordinates x and y replaced by spatial frequency coordinates ξ and η, respectively. The change of variable for the coordinate x is

ξ = x/(λdi),

where x is the autocorrelation shift in the pupil, λ is the working wavelength, and di is the distance from the exit pupil to the image plane. The image-space cutoff frequency for a pupil of full width D is

ξcutoff = 1/(λ · F/#),

which is the frequency at which the autocorrelation reaches zero. The same analytical procedure is performed for the y coordinate.

A system that is free of wave-distortion aberrations, but accepts the image faults due to diffraction, is called diffraction-limited. The OTF of such a near-perfect system is purely real and nonnegative (i.e., it equals its MTF), and represents the best performance that the system can achieve for a given F/# and λ.

Consider the MTFs that correspond to diffraction-limited systems with square (width l) and circular (diameter D) exit pupils. When the exit pupil of the system is circular, the MTF is circularly symmetric, with ξ profile

MTF(ξ) = (2/π){cos⁻¹(ξ/ξcutoff) − (ξ/ξcutoff)[1 − (ξ/ξcutoff)²]^1/2} for ξ ≤ ξcutoff,
MTF(ξ) = 0 otherwise.
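The circular-pupil profile is easily evaluated numerically; this sketch (LWIR wavelength and F/2 optics are illustrative choices, not from the text) confirms the expected endpoints:

```python
import numpy as np

def mtf_circular(xi, wavelength_mm, f_number):
    """Diffraction-limited MTF of a circular pupil (xi in cycles/mm)."""
    xi_cutoff = 1.0 / (wavelength_mm * f_number)
    u = np.clip(np.asarray(xi, dtype=float) / xi_cutoff, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(u) - u * np.sqrt(1.0 - u**2))

wl = 10e-3                                   # 10-um wavelength, in mm
# F/2 optics -> cutoff at 1/(0.01 mm * 2) = 50 cycles/mm
print(round(float(mtf_circular(0.0, wl, 2.0)), 3))   # unity at the origin
print(float(mtf_circular(50.0, wl, 2.0)))            # zero at cutoff
```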

The square aperture has a linear MTF along the spatial frequency ξ, given by

MTF(ξ) = 1 − ξ/ξcutoff for ξ ≤ ξcutoff, and 0 otherwise.

The MTF curve for a system with appreciable geometric aberrations is upwardly bounded by the diffraction-limited MTF curve. Aberrations broaden the impulse response h(x,y), resulting in a narrower, lower MTF with a smaller integrated area. The area under the MTF curve relates to the Strehl-intensity ratio (SR), which measures image-quality degradation and is the irradiance at the center of the impulse response divided by that at the center of a diffraction-limited impulse response. Small aberrations reduce the intensity at the principal maximum of the diffraction pattern (the diffraction focus), and the removed light is distributed to the outer parts of the pattern. Using the central-ordinate theorem for Fourier transforms, SR is written as the ratio of the area under the actual MTF curve to that under the diffraction-limited MTF curve, yielding

SR = ∫∫ MTFactual(ξ,η) dξdη / ∫∫ MTFdiff-limited(ξ,η) dξdη.
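A 1D numerical sketch of this area ratio (the Gaussian aberration roll-off and cutoff value are invented for illustration; a full calculation would integrate over both ξ and η):

```python
import numpy as np

# Strehl ratio approximated as the ratio of areas under the actual and
# diffraction-limited MTF curves (1D profile used here for brevity).
xi_cutoff = 50.0                               # cycles/mm (toy value)
xi = np.linspace(0.0, xi_cutoff, 501)
u = xi / xi_cutoff
mtf_dl = (2/np.pi) * (np.arccos(u) - u*np.sqrt(1 - u**2))  # diffraction limit
mtf_actual = mtf_dl * np.exp(-(xi / 30.0)**2)  # aberration roll-off (toy)

# Equal-spaced samples, so sums stand in for the area integrals
sr = mtf_actual.sum() / mtf_dl.sum()
print(bool(0.0 < sr < 1.0))                    # aberrated system: SR below 1
```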

The Strehl ratio falls between 0 and 1; however, its useful range is ∼0.8 to 1 for highly corrected optical systems.

The geometrical-aberration OTF is calculated from ray-trace data by Fourier transforming the spot-density distribution without regard for diffraction effects. The OTF so obtained is accurate if the aberration effects dominate the impulse-response size. The OTF of a uniform blur spot is written as

OTF(ξ) = 2J₁(πξdblur)/(πξdblur),

where J₁(·) is the first-order Bessel function and dblur is the diameter of the blur spot. The overall optics-portion MTF of an infrared system is determined by multiplying the ray-trace-data MTF with the diffraction-limited MTF of the proper F/# and wavelength.

Electronics MTF: Calculations

Two integral parts of modern infrared imaging systems are the electronic subsystems, which handle the signal- and image-processing functions, and the sensor(s) of the imaging system. Characterization of the electronic circuitry and components is well established in terms of temporal frequency in hertz. In order to cascade the electronic and optical subsystems, the temporal frequencies must be converted to spatial frequencies. This is achieved by dividing the temporal frequencies by the scan velocity of the imaging device. In contrast to the optical transfer function, the electronic MTF is not necessarily maximized at the origin, and can either amplify or attenuate the system MTF curve at certain spatial frequencies. The detector MTF is expressed as

MTFd(ξ,η) = sinc(dhξ)sinc(dvη),

where dh and dv are the photosensitive detector sizes in the horizontal and vertical directions, respectively. Although the detector MTF is valid for all spatial frequencies, it is typically plotted only up to its cutoff frequencies (ξ = 1/dh and η = 1/dv). The spatial Nyquist frequency (ξNy) of the detector array must be taken into consideration to prevent aliasing effects. It is the combination of the optical and electronic responses that produces the overall system MTF, even though the detector MTF usually becomes the limiting factor of the electro-optical system since, in general, ξNy < ξcutoff.
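The separable sinc form of the detector MTF can be sketched directly (NumPy’s sinc is the normalized sin(πx)/(πx); the 25-µm pixel size is an illustrative value):

```python
import numpy as np

def detector_mtf(xi, eta, d_h, d_v):
    """MTF_d = sinc(d_h*xi) * sinc(d_v*eta); np.sinc(x) = sin(pi x)/(pi x)."""
    return np.abs(np.sinc(d_h * xi) * np.sinc(d_v * eta))

d = 25e-3                                   # 25-um square pixels, in mm
print(detector_mtf(0.0, 0.0, d, d))         # unity at the origin
print(round(detector_mtf(1/d, 0.0, d, d), 6))   # first null at xi = 1/d_h
```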

MTF Measurement Setup and Sampling Effects

All optical and electro-optical components comprising the infrared-imaging system should be placed on a vibration-isolated optical table. The aperture of the collimator should be large enough to overfill the aperture of the system under test. The optical axis of the infrared camera has to be parallel to and centered on the optical axis of the collimator, to ensure that its entrance pupil is perpendicular to the collimator optical axis. The display gain and brightness should be optimized prior to the start of the MTF measurements to assure that the display setting is not limiting the performance of the detector array.

Sampling effects alter the MTF and affect the fidelity of the image. The discrete location of the detectors in the staring array creates the sampling lattice. Phasing effects between the sampling lattice and the location of the target introduce problems at nearly all spatial frequencies. Digitization alters signal amplitude and distorts the pulse shape. Sampling causes sensor systems like focal plane arrays (FPAs) to have a particular kind of shift variance (i.e., spatial phase effects), in which case the measured MTF of the system depends on the position of the target relative to the sampling grid.

MTF Measurement Techniques: PSF and LSF

Different measurement techniques can be used to assessthe MTF of an infrared-imaging system. These includethe measurement of different types of responses such aspoint-spread function, line-spread function, edge-spreadfunction, sine-target response, square-target response, andnoiselike target response. All targets, except the ones thatare random, should be placed in a micro-positioning mountcontaining three degrees of freedom (x,y,θ) to account forphasing effects.

The image of a point source δ(x,y) formed by an optical system has an energy distribution called the point-spread function (PSF). The 2D Fourier transform of the PSF yields the complete 2D OTF(ξ,η) of the system in a single measurement. The absolute value of the OTF gives the MTF of the system. The impulse-response technique is practically implemented by placing a small pinhole at the focal point of the collimator. If the flux passing through the pinhole produces an SNR that is below a usable value, a slit target can be placed at the focal plane of the collimating optics; the output is called the line-spread function (LSF). The cross-section of the LSF is obtained by integrating the PSF parallel to the direction of the line source, because the line image is simply the summation of an infinite number of points along its length. The LSF only yields information about a single profile of the 2D OTF. Therefore, the absolute value of the Fourier transform of the LSF yields a 1D MTF(ξ) of the system.

To obtain other profiles of the MTF, the line target can be reoriented as desired. The slit angular subtense must be smaller than the IFOV; a value of 0.1 IFOV is typical. The phasing effects are tested by scanning the line target relative to the sampling sensor grid until maximum and minimum signals are obtained at the sensor. The measurements are performed and recorded at different target positions, and averaging the output over all locations yields an average MTF. However, this average MTF is measured using a finite-slit aperture, in which case this undesirable component is removed by dividing out the Fourier transform of the finite slit, yielding a more accurate MTF.
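The slit correction can be sketched numerically; in this fragment (synthetic Gaussian LSF and an invented slit width stand in for measured data) the finite-slit sinc is divided out of the raw estimate:

```python
import numpy as np

# Estimate the MTF from a measured line-spread function, then divide out
# the finite slit width (a sinc in the frequency domain).
dx = 0.001                                  # sample spacing [mm]
xpos = np.arange(-256, 256) * dx
lsf = np.exp(-xpos**2 / (2 * 0.01**2))      # toy measured LSF

H = np.abs(np.fft.rfft(np.fft.ifftshift(lsf)))
mtf_raw = H / H[0]                          # normalize at zero frequency

slit_width = 0.002                          # slit width [mm] (illustrative)
xi = np.fft.rfftfreq(lsf.size, d=dx)        # cycles/mm
slit_mtf = np.abs(np.sinc(slit_width * xi))
valid = slit_mtf > 0.1                      # avoid dividing by ~0
mtf_corr = mtf_raw[valid] / slit_mtf[valid]

print(np.isclose(mtf_corr[0], 1.0))
print(np.all(mtf_corr >= mtf_raw[valid] - 1e-12))   # correction raises MTF
```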

MTF Measurement Techniques: ESF and CTF

The MTF is also obtained from the edge-spread function (ESF), the response of the system under test to an illuminated knife-edge target. There are two advantages in using this target over the line target: it is simpler to build than a narrow slit, and there is no MTF correction. The edge response is differentiated to obtain the line-spread function, which is then Fourier transformed. However, the derivative operation accentuates the system noise present in the data, which can corrupt the resulting MTF. The edge must be straight with no raggedness. To increase the SNR for both the line- and edge-spread techniques, the 1D Fourier transform is averaged over all the rows of the image. In addition, reducing the system gain reduces noise, and the target signal can be increased if possible.
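The ESF route (differentiate, then transform) can be sketched as follows; the error-function-shaped edge is synthetic and noise is omitted for clarity:

```python
import numpy as np

# Edge-spread-function route: differentiate the ESF to get the LSF,
# then Fourier transform and normalize.
dx = 0.001
xpos = np.arange(-256, 256) * dx
sigma = 0.01
lsf_true = np.exp(-xpos**2 / (2 * sigma**2))    # toy underlying LSF
esf = np.cumsum(lsf_true)                       # edge = integral of LSF
esf /= esf[-1]                                  # normalized 0 -> 1 edge

lsf = np.gradient(esf, dx)                      # differentiate the edge
H = np.abs(np.fft.rfft(np.fft.ifftshift(lsf)))
mtf = H / H[0]

print(np.isclose(mtf[0], 1.0))
print(bool(mtf[10] < 1.0))                      # contrast falls with frequency
```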

The MTF is also obtained by measuring the system’s response to a series of sine-wave targets, where the image modulation depth is measured as a function of spatial frequency. Sinusoidal targets can be fabricated on photographic films or transparencies for the visible spectrum; however, they are not easy to fabricate for the testing of infrared systems due to material limitations. A less expensive, more convenient target is the bar target, a pattern of alternating bright and dark bars of equal width. The square-wave response is called the contrast transfer function (CTF), and is a function of the fundamental spatial frequency ξf of the specific bar target under test. The CTF is measured from the peak-to-valley variation of the image irradiance, and is defined as

CTF(ξf) = Msquare-response(ξf)/Minput-square-wave(ξf).

The CTF is higher than the MTF at all spatial frequen-cies because of the contribution of the odd harmonics of theinfinite-square wave test pattern to the modulation depthin the image. The CTF is expressed as an infinite seriesof MTFs. A square wave can be expressed as a Fourier-cosine series. The output amplitude of the square wave atfrequency ξf is an infinite sum of the input cosine ampli-tudes modified by the system’s MTF:

CTF(ξf) = (4/π)[MTF(ξf) − (1/3)MTF(3ξf) + (1/5)MTF(5ξf) − (1/7)MTF(7ξf) + · · ·];

conversely, the MTF can be expressed as an infinite sum of CTFs as

MTF(ξf) = (π/4)[CTF(ξf) + (1/3)CTF(3ξf) − (1/5)CTF(5ξf) + (1/7)CTF(7ξf) + · · ·].
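The first of these series (the square-wave expansion) can be sketched in a few lines; the Gaussian MTF is a toy stand-in, and the truncation at 50 terms is an arbitrary choice:

```python
import numpy as np

def ctf_from_mtf(mtf_func, xi_f, n_terms=50):
    """CTF as an alternating series over the odd harmonics of the MTF."""
    total = 0.0
    for j in range(n_terms):
        k = 2 * j + 1                       # odd harmonics 1, 3, 5, ...
        total += (-1.0)**j * mtf_func(k * xi_f) / k
    return (4.0 / np.pi) * total

# Toy Gaussian MTF: the CTF exceeds the MTF at a given frequency,
# consistent with the odd-harmonic contribution to modulation depth.
mtf = lambda xi: np.exp(-(xi / 20.0)**2)
print(bool(ctf_from_mtf(mtf, 5.0) > mtf(5.0)))
```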

Optical systems are characterized with three- and four-bar targets rather than infinite square-wave patterns. Therefore, the measured CTF might be slightly higher than the CTF curve for an infinite square wave. For bar targets with a fundamental spatial frequency above one-third of the cutoff frequency, where the higher-harmonic MTF terms approach zero, the MTF is equal to π/4 times the measured CTF. Electronic nonlinearity, digitization effects, and sampled-scene phase effects can make these MTF and CTF measurements difficult.

The MTF of the optics alone is measured, without including the detector MTF, by placing a microscope objective in front of the detector FPA. The microscope objective is used as a relay lens to reimage the system’s response formed by the optics under test onto the FPA with the appropriate magnification. Here, the detector is no longer the limiting component of the imaging system, since its MTF response becomes appreciably higher than the optical MTF curve. The microscope objective must be of high quality to reduce degradation of the measured response function, and have a high enough NA to capture the entire image-forming cone angle.

Imaging systems containing a detector FPA are nonisopla-natic, and their responses depend on the location of the de-terministic targets relative to the sampling grid, introduc-ing problems at nearly all spatial frequencies. The use ofrandom-target techniques for measuring the MTF of a dig-ital imaging system tends to average out the phase effects.

MTF Measurement Techniques: Noiselike Targets

Using noiselike test targets of known spatial-frequency content allows measurement of a shift-invariant MTF, because the target information is positioned randomly with respect to the sampling sites of the digital imaging system.

The MTF of the system can be calculated because the input power spectral density PSDinput(ξ) of the random pattern is known, and an accurate estimate of the output power spectral density PSDoutput(ξ) is made from the FPA response. The MTF is then calculated from the following relationship:

MTF(ξ) = √[PSDoutput(ξ)/PSDinput(ξ)].

This approach is commonly used to characterize time do-main electrical networks, and its application to the MTFtesting of digital imaging systems provides an average ofthe shift variation, which eases alignment tolerances andfacilitates MTF measurements at spatial frequencies be-yond Nyquist.
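A 1D sketch of the PSD-ratio estimate (synthetic white-noise target and a moving-average blur standing in for a real imaging chain):

```python
import numpy as np

# Noise-target method: with a known (white) input PSD, the MTF follows
# from the square root of the output/input PSD ratio.
rng = np.random.default_rng(1)
target = rng.standard_normal(4096)          # white noise: flat input PSD

kernel = np.ones(5) / 5.0                   # toy blur (sinc-like MTF)
output = np.convolve(target, kernel, mode="same")

psd_in = np.abs(np.fft.rfft(target))**2
psd_out = np.abs(np.fft.rfft(output))**2
mtf_est = np.sqrt(psd_out / psd_in)

# The estimated MTF should fall off toward high spatial frequencies.
lo = mtf_est[1:200].mean()
hi = mtf_est[-200:].mean()
print(bool(lo > hi))
```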

Two different techniques are used for the generation of ran-dom targets: laser speckle and transparency-based noisetargets. The former is used to characterize the MTF ofFPAs alone, while the latter one is used to characterize theMTF of a complete imaging system (i.e., imaging optics to-gether with the FPA).

A laser speckle pattern of known PSD is generated by theillustrated optical train.

The integrating sphere produces a uniform irradiance with a spatially random phase. The aperture following the integrating sphere (typically a double slit) determines the PSDinput(ξ) of the speckle pattern at the FPA, which is proportional to the aperture transmission function.

The spatial frequency of the resulting narrowband specklepattern can be tuned by changing the aperture-to-focal-plane distance z. The MTF is calculated from the relativestrength of the sideband center of the PSDoutput(ξ).

To characterize a complete imaging system, a 2D uncorrelated random pattern with a uniform, band-limited white-noise distribution is created using a random-number-generator algorithm. This random gray-level pattern is printed onto a transparency and placed in front of a uniform radiant extended source, producing a 2D radiance pattern with the desired input power spectrum PSDinput.

The output spectral density is estimated by imaging thetarget through the optical system onto the FPA. The out-put data is then captured by a frame grabber and processedto yield the output power spectrum PSDoutput(ξ) as the ab-solute value squared of the Fourier transform of the outputimage data. The MTF is then calculated using the equationfrom the previous page.

In the infrared region, the transparency must be re-placed by a random thermoscene made of a chrome depo-sition on an infrared material substrate. Microlithographicprocesses enable production of square apertures of varioussizes on a 2D matrix to achieve the desirable random pat-tern. To avoid diffraction-induced nonlinearities of trans-mittance, the minimum aperture size must be five timesthe wavelength.

MTF Measurement Techniques: Interferometry

Common-path interferometers may be employed for measuring the transfer functions of optical systems. An interferogram of the wavefront exiting the system is reduced to find the phase map. The distribution of amplitude and phase across the exit pupil contains the necessary information for calculating the OTF by pupil autocorrelation.

The performance of a lens at specific conjugates can bemeasured by placing the optical element in one of the armsof an interferometer. The process begins by computing asingle wrapped phase map from the resultant wavefront in-formation or optical path difference (OPD) exiting thepupil of the system under test.

The wrapped phase map is represented in multiples of 2π, with phase values ranging from −π to π. Removal of the 2π modulus is accomplished by using an unwrapping algorithm, producing an unwrapped phase map, also known as the surface map. The PSF is obtained by multiplying the Fourier transform of the surface-map data by its complex conjugate (i.e., an element-by-element multiplication of the complex amplitude function). The inverse Fourier transform of the PSF yields the complex OTF, whose modulus corresponds to the MTF of the optical system.

In summary, the MTF is a powerful tool used to characterize an imaging system’s ability to reproduce signals as a function of spatial frequency. It is a fundamental parameter that determines where the limitations of performance in optical and electro-optical systems occur, and which crucial components must be enhanced to yield better overall image quality. It guides system design and predicts system performance.

Noise Equivalent Temperature Difference

Noise equivalent temperature difference (NETD) isthe target-to-background temperature difference that pro-duces a peak signal-to-rms-noise ratio of unity. Its analyti-cal formula is given by:

NETD = (4/π) (F/#)² √Δf / (D* √Ad ∂L/∂T),

where Δf is the electronic bandwidth, D* and Ad are respectively the normalized detectivity and the effective area of the detector, and ∂L/∂T, the partial derivative of the radiance with respect to temperature, is the radiance contrast. This equation applies strictly to detector-limited situations.

A smaller NETD indicates better thermal sensitivity. For the best NETD, D* is peaked near the wavelength of maximum radiance contrast of the source. A smaller F/# collects more flux, yielding a better NETD. A smaller electronic bandwidth yields a larger dwell time, producing a smaller noise voltage and lowering the NETD. A larger detector area gives a larger IFOV, collecting more flux and resulting in a better NETD. The drawback of NETD as a system-level performance descriptor is that while the thermal sensitivity improves for larger detectors, the image resolution deteriorates. Thus, while the NETD is a sufficient operational test, it cannot be applied as a design criterion.
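The detector-limited formula can be sketched as a small calculator; every numerical value below is illustrative, not taken from the text:

```python
import numpy as np

# Detector-limited NETD sketch:
#   NETD = (4/pi) * (F/#)^2 * sqrt(df) / (D* * sqrt(Ad) * dL/dT)
def netd(f_number, bandwidth_hz, d_star, area_cm2, dLdT):
    """NETD [K]; D* in cm*sqrt(Hz)/W, dL/dT in W/(cm^2*sr*K)."""
    return (4.0 / np.pi) * f_number**2 * np.sqrt(bandwidth_hz) / (
        d_star * np.sqrt(area_cm2) * dLdT)

# Toy values: F/2, 10-kHz bandwidth, D* = 1e10, 50-um pixel, LWIR contrast
value = netd(f_number=2.0, bandwidth_hz=1e4, d_star=1e10,
             area_cm2=(50e-4)**2, dLdT=1e-4)
print(round(value * 1000, 1), "mK")     # thermal sensitivity in millikelvin
```

Note how the F/# enters squared: stopping down from F/2 to F/4 degrades this toy NETD by a factor of four.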

When the system operates under BLIP conditions, theequation for NETD becomes

NETD_BLIP = (2√2/√π) (hc/λ) (F/#) √Δf √Lbkg / (√Ad √η ∂L/∂T),

where λ is the wavelength, h is the Planck constant, c is the velocity of light in vacuum, Lbkg is the background radiance, and η is the quantum efficiency of the detector. Notice that the NETD is inversely proportional to the square root of the quantum efficiency and directly proportional to the square root of the in-band background radiance. Under BLIP conditions, it has a linear dependence on F/# rather than the square dependence of the detector-limited case.

NETD Measurement Technique

The NETD measurement is usually carried out using a square target. The size of the square must be several times the detector angular subtense (i.e., several IFOVs) to ensure that the spatial response of the system does not affect the measurement. This target is usually placed in front of an extended-area blackbody source, with the temperature difference between the square target and the background set to several times the expected NETD to ensure a response that is clearly above the system noise. The peak-signal and rms-noise data are obtained by capturing, averaging, and taking the standard deviation of several images. The NETD is then calculated from the experimental data as follows:

NETD = ΔT/SNR,

where ΔT = Ttarget − Tbkg, and SNR is the signal-to-noise ratio of the thermal system.

Care must be taken to ensure that the system is operating linearly and that no extraneous noise sources are included. Because of the dependence of noise on bandwidth, the NETD must be measured with the system running at its full operational scan rate, so that the proper dwell time and bandwidth are obtained.
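The capture-average-deviate procedure can be sketched with synthetic frames (the frame count, signal level, and noise level are invented for illustration):

```python
import numpy as np

# NETD = dT / SNR from captured imagery: peak signal from averaged
# frames, noise from the temporal standard deviation.
rng = np.random.default_rng(2)
n_frames, signal, noise_sigma = 100, 50.0, 5.0
frames = signal + noise_sigma * rng.standard_normal((n_frames, 32, 32))

mean_signal = frames.mean()                 # averaged target response
rms_noise = frames.std(axis=0).mean()       # temporal noise estimate
snr = mean_signal / rms_noise

dT = 2.0                                    # target-background dT [K]
netd = dT / snr
print(round(netd, 3), "K")                  # roughly dT / (signal/noise)
```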

Minimum Resolvable Temperature Difference

The minimum resolvable temperature difference (MRTD) simultaneously characterizes both the spatial resolution and the thermal sensitivity. It is a subjective measurement in which the SNR-limited thermal sensitivity is determined as a function of spatial frequency.

Conceptually, the MRTD is the image SNR required for an observer to resolve four-bar targets at several fundamental spatial frequencies ξf, such that the bars are just discernible by a trained observer with unlimited viewing time. The noise-limited rationale is essential in this case, because an infrared imaging system displays its utmost sensitivity when the highest noise is visible to the observer (i.e., gain is increased to compensate for adverse atmospheric and/or scene conditions).

These tests depend on decisions made by the observer. The results vary with training, motivation, and visual capacity, as well as with the environmental setting. Because of the considerable inter- and intra-observer variability, several observers are required. The underlying distribution of observer responses must be known so that the individual responses can be appropriately averaged together.

MRTD is a better system-performance descriptor than the MTF alone because the MTF measures the attenuation in modulation depth without regard for the noise level. MRTD is also a more complete measurement than the NETD because it accounts for both spatial resolution and noise level. Therefore, the MRTD is a useful overall analytical and design tool that is indicative of system performance.

MRTD: Calculation

MRTD measures the ability to resolve detail in imagery, and is directly proportional to the NETD and inversely proportional to the MTF:

MRTD ∝ NETD · ξf √(HIFOV · VIFOV) / [MTF(ξf) √(τeye · τframe)],

where ξf is the spatial frequency of the target being observed, τeye is the integration time of the human eye, τframe is the frame time, MTF(ξf) is the transfer function of the system at that particular target frequency, and HIFOV and VIFOV are the horizontal and vertical IFOVs of the system, respectively.

The derivation of an exact analytical expression for MRTDis complex because of the number of variables in the calcu-lation; therefore, computer-aided performance models suchas the NVTherm model are used. Substituting the NETDequation into the MRTD equation yields

MRTD ∝ [ξf √(HIFOV · VIFOV) / (MTF(ξf) √(τeye · τframe))] × [(F/#)² √Δf / (D* √Ad ∂L/∂T)].

MRTD depends on the same variables as NETD (i.e., F/#, Δf, D*, and radiance contrast). However, the thermal performance of the system cannot be increased simply by increasing the area of the detector or the IFOV, because the MTF decreases at higher frequencies. Therefore, the ΔT required for a four-bar target to be discernible increases as the size of the bars decreases. The MRTD increases when the MTF decreases, and it increases faster due to the extra factor ξf in the numerator. The effect of the observer is included in the factor τeye · τframe. Increasing the frame rate gives more observations within the temporal integration time of the human eye, and the eye-brain system then tends to average out some of the noise, leading to a lower MRTD.

MRTD Measurement Technique

A generic MRTD test configuration is shown:

The four-bar target is located in front of the blackbody source at the focal plane of the collimator, which collimates the radiation from each point of the surface of the target. To achieve high spatial frequencies, the MRTD setup is mounted on a vibration-isolated optical table. Since the MRTD is a detection criterion for noisy imagery, the gain of the infrared imaging system must be set high enough that the image is noisy. Infrared imaging systems are subject to sampling effects. The MRTD does not have a unique value for each spatial frequency, but has a range of values depending on the location of the target with respect to the detector array. Therefore, the targets must be adjusted to achieve the best visibility. An observer must count the number of bars to ensure that all four are present and discernible. The targets should range from low spatial frequencies to just past the system cutoff, and span the entire spatial-frequency response.

Problems associated with MRTD measurements include the viewing distance between the display screen and the observer, background brightness, and observer strain. The contrast sensitivity increases with background radiance; however, during the MRTD tests, the observer can adjust the system’s gain and level, and monitor the brightness and contrast, to optimize the image for the detection criterion. Inconsistencies between the results obtained by different observers can occur, and over a long period of time the human eye-brain sensitivity decreases, causing unreliability. The use of the MRTD is also somewhat limited because all field scenes are spectrally selective (i.e., emissivity is a function of wavelength), while most MRTD tests are performed with extended-area blackbodies.

MRTD Measurement: Automatic Test

It is of practical interest to measure the MRTD without the need for a human observer. Automatic, or objective, tests are desirable because of an insufficient number of trained personnel and because the subjective test is time-consuming. The MRTD equation can be written as

MRTD(ξf) = K(ξf) · NETD / MTF(ξf),

where the constant of proportionality and any spatial-frequency-dependent terms, including the effect of the ob-server, are taken up into the function K(ξf ). To charac-terize the average effects of the observer, for a given dis-play and viewing geometry, an MRTD curve is measuredfor a representative sample of the system under test. Alongwith the MRTD data, the NETD and MTF are measuredand recorded for the system. From these data, the functionK(ξf ) can be determined, and subsequent tests of similarsystems can be performed without the observer.
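The calibration-and-predict procedure can be sketched numerically; all grids, MTF shapes, and MRTD values below are invented for illustration:

```python
import numpy as np

# Objective MRTD: calibrate K(xi_f) = MRTD * MTF / NETD on a reference
# unit, then predict MRTD for similar systems without an observer.
xi_f = np.array([0.5, 1.0, 2.0, 4.0])       # cycles/mrad (toy grid)
mtf_ref = np.exp(-(xi_f / 3.0)**2)          # measured MTF (toy)
netd_ref = 0.05                             # measured NETD [K] (toy)
mrtd_ref = np.array([0.02, 0.05, 0.15, 0.90])   # observer MRTD [K] (toy)

K = mrtd_ref * mtf_ref / netd_ref           # frequency-dependent K(xi_f)

# Predict the MRTD of a sibling unit with different NETD and MTF
netd_new = 0.06
mtf_new = np.exp(-(xi_f / 2.8)**2)
mrtd_pred = K * netd_new / mtf_new

print(np.all(mrtd_pred > 0))
print(bool(mrtd_pred[-1] > mrtd_pred[0]))   # harder at high frequency
```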

A comprehensive automatic laboratory test station, which provides the means to measure the performance of an infrared imaging system, and a field-tester apparatus measuring the FLIR parameters of an Apache helicopter are shown below.

Johnson Criteria

The Johnson criteria account for both the thermal sensitivity and the spatial resolution of a thermal imaging system. They provide a way of discriminating real targets in terms of equivalent bar-chart resolvability patterns.

Eight military targets and a standing man were placed in front of television imagery. Sets of square-wave patterns were placed alongside these targets. These square arrangements have the same apparent ΔT as the military targets and are viewed under the same conditions. The theory relating equivalent bar-target resolvability to target discrimination is that the level of discrimination (i.e., detection, classification, recognition, or identification) can be predicted by determining the number of resolved cycles within the equivalent chart that would fit across the minimum dimension of the target. A more complex discrimination task requires finer spatial resolution.

The target remains the same in all cases, and it is portrayed as having an average apparent blackbody temperature difference ΔT between the target and the background. The image-quality limitations to performance are classified in the table.

Degradation   Performance Limited      Discrimination Level
1             Random noise-limited     Detection
2             Magnification-limited    Classification
3             MTF-limited              Recognition
4             Raster-limited           Identification

Once the required number of cycles for a particular task is determined, the required angular spatial frequency in cycles/rad is calculated by

$$\xi = \frac{n_{\mathrm{cycles}}}{x_{\min}/r}$$

where $n_{\mathrm{cycles}}$ is the number of cycles, $x_{\min}$ is the minimum target dimension, and $r$ is the range; therefore, $x_{\min}/r$ is the angular subtense of the target.

To discriminate a target, two IFOVs are required per cycle of the highest spatial frequency (i.e., the Nyquist sampling theorem). Therefore, the IFOV can be written in terms of the Johnson parameters as

$$\frac{1}{2\,\mathrm{IFOV}} = \frac{f}{2\sqrt{A_d}} = \frac{n_{\mathrm{cycles}}}{x_{\min}/r} \;\Rightarrow\; \frac{f}{\sqrt{A_d}} = \frac{2\,r\,n_{\mathrm{cycles}}}{x_{\min}},$$

where $f$ is the effective focal length of the optical system and $A_d$ is the area of the detector.

This information allows setting the resolution requirements for the system, including the detector size, the focal length of the optical system, the dimensions of the target, and the target distance.
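The Johnson relation can be solved directly for a design parameter. The sketch below uses assumed numbers (not values from the text) to find the focal length required for a given task:

```python
# Sketch (assumed numbers, not from the text): solve the Johnson relation
# f/sqrt(Ad) = 2*r*n_cycles/x_min for the focal length, given square
# detectors of known pitch.

def required_focal_length(r_m, x_min_m, n_cycles, det_pitch_m):
    # sqrt(Ad) = det_pitch for a square detector
    return 2.0 * r_m * n_cycles * det_pitch_m / x_min_m

# Example: ~4 cycles (a commonly quoted recognition value) across a
# 2.3-m minimum target dimension at 5 km, with 25-um pixels:
f_m = required_focal_length(5000.0, 2.3, 4.0, 25e-6)   # focal length, meters
```
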

The detection of simple geometrical targets embedded in random noise is a strong function of the SNR when all other image-quality parameters are held constant. Classification is important because, for example, if a certain type of vehicle is not supposed to be in a secured area, it is necessary not only to detect the target but to classify it before firing on it. Recognition improves with the area under the MTF curve, and identification performance improves as the number of scan lines across the target increases.


Infrared Applications

Applications for thermal sensing and thermal imaging are found in almost every aspect of the military and industrial worlds. Infrared sensing offers high-quality passive and active night vision because it produces imagery in the absence of visible light. It also provides considerable intelligence about the state of objects by sensing their self-emission, which indicates both surface and subsurface conditions.

Infrared military systems enable on-board passive and active defense or offense capabilities for aircraft that can defeat or destroy incoming missiles, and they provide an effective shield against adversary cruise and ballistic missile attacks. Other applications include target designation, surveillance, target discrimination, active and passive tracking, battle-space protection, asset protection, defense satellite control, warning devices, etc. Thermal offensive weapons are particularly suited to missions where precision, adjustability, and minimum collateral damage are required.

Infrared observations are important in astrophysics because infrared radiation penetrates the vast stretches of interstellar gas and dust clouds more easily than visible and ultraviolet light do, thus revealing regions hidden from conventional telescopes.

With the development of fast, high-resolution thermal workstations, infrared thermography has become an important practical nondestructive technique for the evaluation, inspection, and quality assurance of industrial materials and structures. A typical approach consists of subjecting the work piece to a surface thermal excitation and observing perturbations of the heat propagation within the material. This technique is capable of revealing the presence of defects through anomalies in the temperature-distribution profile. It is an attractive technique because it provides non-contact, rapid-scanning, full-coverage inspection in just milliseconds, and it can be used for either qualitative or quantitative applications.


Some of these applications are as follows: building diagnostics, such as roofing and moisture detection; material evaluation, such as hidden corrosion in metals and blockage of turbine blades and vanes; plant-condition monitoring, such as electrical circuits, mechanical friction and insulation, gas leakage, and effluent thermal plumes; and aircraft and shipboard surveys, where power-generating system failures produce signatures that can be detected with IR devices.

Infrared spectroscopy is the study of the composition of (primarily organic) compounds. An infrared beam is passed through a sample, and the amount of energy absorbed at each wavelength is recorded. This may be done by scanning through the spectrum with a monochromatic beam that changes in wavelength over time, or by using a Fourier-transform spectrometer to measure all wavelengths at once. From this, a transmittance or absorbance spectrum may be plotted, showing the wavelengths at which the sample absorbs infrared light and allowing an interpretation of which covalent bonds are present. Infrared spectroscopy is widely used in both research and industry as a simple and reliable technique for static and dynamic measurements, as well as for quality control.
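The transmittance-to-absorbance conversion mentioned above is a one-line computation; the sketch below uses hypothetical values:

```python
# Sketch (hypothetical values): converting a transmittance spectrum to
# absorbance, A = -log10(tau), the form usually inspected for band
# assignments in infrared spectroscopy.

import math

def absorbance(transmittance):
    return [-math.log10(t) for t in transmittance]

wavelengths_um = [3.0, 3.4, 5.8]   # sample points (illustrative)
tau = [0.90, 0.20, 0.35]           # fraction of the beam transmitted
A = absorbance(tau)                # strong absorption -> large A
```
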


Equation Summary

Thin lens equations:

$$\frac{1}{f} = \frac{1}{p} + \frac{1}{q} \quad \text{(Gaussian)} \qquad x_{\mathrm{obj}}\,x_{\mathrm{img}} = f^2 \quad \text{(Newtonian)}$$

Thick lens equation:

$$\frac{1}{f_{\mathrm{eff}}} = (n-1)\left[\frac{1}{R_1} - \frac{1}{R_2} + \frac{(n-1)\,t}{n\,R_1 R_2}\right]$$

Lateral or transverse magnification:

$$M = -\frac{q}{p} = \frac{h_{\mathrm{img}}}{h_{\mathrm{obj}}}$$

Area or longitudinal magnification:

$$M^2 = \frac{A_{\mathrm{img}}}{A_{\mathrm{obj}}} = \left(-\frac{q}{p}\right)^2$$

F-number and numerical aperture:

$$F/\# \equiv \frac{f_{\mathrm{eff}}}{D_{\mathrm{enp}}} \qquad \mathrm{NA} \equiv n\sin\alpha$$

$$F/\# = \frac{1}{2\tan(\sin^{-1}\mathrm{NA})} \qquad \mathrm{NA} = \sin\left(\tan^{-1}\frac{1}{2\,F/\#}\right)$$

$$F/\# \cong \frac{1}{2\,\mathrm{NA}} \quad \text{(paraxial approximation)}$$

Field of view:

$$\mathrm{FOV}_{\text{half-angle}} = \theta_{1/2} = \left|\tan^{-1}\frac{h_{\mathrm{obj}}}{p}\right| = \left|\tan^{-1}\frac{h_{\mathrm{img}}}{q}\right| \qquad \mathrm{FOV}_{\text{full-angle}} = \theta = \frac{d}{f} \quad \text{(paraxial approximation)}$$

Diffraction-limited expressions:

$$d_{\mathrm{diff}} = 2.44\,\lambda\,F/\# \quad \text{(blur spot)} \qquad \beta = \frac{2.44\,\lambda}{D} \quad \text{(angular blur)}$$

Refractive index:

$$n = \frac{c}{v}$$

Law of reflection:

$$\theta_i = \theta_r$$
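As a quick numerical check of the F-number relations (values chosen for illustration, not from the text):

```python
# Numerical check (not from the text) of the exact F/# <-> NA relations and
# the diffraction-limited blur spot d = 2.44 * lambda * F/#.

import math

def na_from_fnum(fnum):
    return math.sin(math.atan(1.0 / (2.0 * fnum)))     # exact relation

def fnum_from_na(na):
    return 1.0 / (2.0 * math.tan(math.asin(na)))       # exact relation

fnum = 2.0
na = na_from_fnum(fnum)           # ~0.2425
na_paraxial = 1.0 / (2.0 * fnum)  # 0.25: paraxial approximation
blur_m = 2.44 * 10e-6 * fnum      # blur diameter at lambda = 10 um (LWIR)
```

The round trip through the exact relations is lossless, while the paraxial value differs by a few percent at F/2.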


Snell's law:

$$n_1\sin\theta_i = n_2\sin\theta_t$$

Fresnel equations (normal incidence):

$$\rho = \left(\frac{n_2 - n_1}{n_1 + n_2}\right)^2 \qquad \tau = \frac{4\,n_1 n_2}{(n_1 + n_2)^2}$$

Internal transmittance:

$$\tau_{\mathrm{int}} = \frac{\phi(z)}{\phi_{\mathrm{inc}}} = e^{-\alpha z}$$

External transmittance:

$$\tau_{\mathrm{ext}} = \tau^2 e^{-\alpha z} = \tau^2\,\tau_{\mathrm{int}}$$

Reciprocal relative dispersion or Abbe number:

$$V = \frac{n_{\mathrm{mean}} - 1}{\Delta n} = \frac{n_{\mathrm{mean}} - 1}{n_{\mathrm{final}} - n_{\mathrm{initial}}}$$

Relative partial dispersion:

$$P = \frac{n_{\mathrm{mean}} - n_{\mathrm{initial}}}{n_{\mathrm{final}} - n_{\mathrm{initial}}}$$

Solid angle equations:

$$\Omega = 4\pi\sin^2\frac{\theta_{\max}}{2} \qquad \Omega \cong \frac{a}{r^2} \quad (a \ll r^2,\ \text{paraxial approximation})$$

Fundamental equation of radiation transfer:

$$\partial^2\phi = L\,\partial A_s\cos\theta_s\,\partial\Omega_d \qquad \phi \cong L\,A_s\cos\theta_s\,\Omega_d \quad \text{(finite quantities)}$$

Intensity:

$$I = \frac{\partial\phi}{\partial\Omega_d} \cong \frac{\phi}{\Omega_d} \qquad I = \frac{\partial\phi}{\partial\Omega_d} = L\,A_s\cos\theta_s$$

Exitance and radiance:

$$M = \frac{\partial\phi}{\partial A_s} \cong \frac{\phi}{A_s} \qquad M = \frac{\partial\phi}{\partial A_s} = \pi L \quad \text{(Lambertian radiator)}$$


Irradiance:

$$E_{\text{extended source}} = \frac{\partial\phi}{\partial A_d} = \pi L\sin^2\theta = \frac{\pi L}{4(F/\#)^2 + 1}$$

$$E_{\text{point source}} = \frac{\phi}{A_d}\times 0.84 = \frac{0.84\,I\,\Omega_{\mathrm{optics}}}{\frac{\pi}{4}\,d_{\mathrm{diff}}^2} = \frac{0.84\,I\,\Omega_{\mathrm{optics}}}{\frac{\pi}{4}\,[2.44\,\lambda\,(F/\#)]^2}$$

AΩ product or optical invariant:

$$A_s\,\Omega_d = A_d\,\Omega_s$$

Planck's radiation law:

$$M_{e,\lambda} = \frac{2\pi h c^2}{\lambda^5}\,\frac{1}{\exp(hc/\lambda kT) - 1} \quad [\mathrm{W/cm^2\,\mu m}]$$

$$M_{p,\lambda} = \frac{2\pi c}{\lambda^4}\,\frac{1}{\exp(hc/\lambda kT) - 1} \quad [\mathrm{photons/sec\,cm^2\,\mu m}]$$

Rayleigh–Jeans radiation law ($hc/\lambda kT \ll 1$):

$$M_{e,\lambda} \cong \frac{2\pi c kT}{\lambda^4} \qquad M_{p,\lambda} \cong \frac{2\pi kT}{h\lambda^3}$$

Wien's radiation law ($hc/\lambda kT \gg 1$):

$$M_{e,\lambda} \cong \frac{2\pi h c^2}{\lambda^5}\exp\left(-\frac{hc}{\lambda kT}\right) \qquad M_{p,\lambda} \cong \frac{2\pi c}{\lambda^4}\exp\left(-\frac{hc}{\lambda kT}\right)$$

Stefan–Boltzmann law:

$$M_e(T) = \sigma_e T^4, \quad \sigma_e = 5.7\times 10^{-12}\ \mathrm{W/cm^2\,K^4}$$

$$M_p(T) = \sigma_p T^3, \quad \sigma_p = 1.52\times 10^{11}\ \mathrm{photons/sec\,cm^2\,K^3}$$

Wien's displacement law:

$$\lambda_{\max,e}\,T = 2898\ [\mu\mathrm{m\cdot K}] \qquad \lambda_{\max,p}\,T = 3662\ [\mu\mathrm{m\cdot K}]$$

Peak exitance contrast:

$$\lambda_{\text{peak-contrast},e}\,T = 2410\ [\mu\mathrm{m\cdot K}]$$

Emissivity:

$$\varepsilon(\lambda,T) = \frac{M_{\lambda,\mathrm{source}}(\lambda,T)}{M_{\lambda,\mathrm{BB}}(\lambda,T)}$$
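These radiation laws cross-check one another numerically. The sketch below (not from the text) finds the peak of Planck's spectral exitance by brute force and compares it with Wien's displacement law:

```python
# Numerical cross-check (not from the text): the brute-force peak of
# Planck's spectral exitance agrees with Wien's law, lambda_max*T = 2898 um.K.

import math

H = 6.626e-34    # Planck constant, J*s
C = 3.0e8        # speed of light, m/s
KB = 1.381e-23   # Boltzmann constant, J/K

def planck_exitance(lam_m, temp_k):
    # spectral exitance, W per m^2 per m of wavelength
    return (2.0 * math.pi * H * C ** 2 / lam_m ** 5) / (
        math.exp(H * C / (lam_m * KB * temp_k)) - 1.0)

T = 300.0
lams = [i * 1e-8 for i in range(100, 5001)]     # 1 to 50 um, 0.01-um steps
lam_peak = max(lams, key=lambda lam: planck_exitance(lam, T))
lam_wien = 2898e-6 / T                          # Wien's displacement law
```

At 300 K both give a peak near 9.66 µm, which is why LWIR imagers are matched to terrestrial scenes.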


Kirchhoff's law:

$$\text{integrated absorptance} = \alpha(\lambda,T) \equiv \varepsilon(\lambda,T) = \text{integrated emittance}$$

Power spectral density:

$$\mathrm{PSD} = N(f) = \mathcal{F}\{c_n(\tau)\} = \int_{-\infty}^{\infty} c_n(\tau)\,e^{-j2\pi f\tau}\,d\tau$$

Noise-equivalent bandwidth:

$$\mathrm{NE}\Delta f \equiv \frac{1}{G^2(f_0)}\int_{-\infty}^{\infty}|G(f)|^2\,df \qquad \mathrm{NE}\Delta f = \frac{1}{2\tau}\ \text{(square function)} \qquad \mathrm{NE}\Delta f = \frac{1}{4\tau}\ \text{(exponential function)}$$

Shot noise:

$$i_{n,\mathrm{shot}} = \sqrt{2qi\,\Delta f}$$

Johnson noise:

$$v_{n,j} = \sqrt{4kTR\,\Delta f} \qquad i_{n,j} = \sqrt{\frac{4kT\,\Delta f}{R}}$$

$$i_{n,j} = \sqrt{i_d^2 + i_L^2} = \sqrt{4k\,\Delta f\left[\frac{T_d}{R_d} + \frac{T_L}{R_L}\right]} \quad \text{(cryogenic detector conditions)}$$

1/f noise:

$$i_{n,f} = \sqrt{\frac{K\,i^\alpha\,\Delta f}{f^\beta}}$$

Temperature noise:

$$\overline{\Delta T^2} = \frac{4kKT^2\,\Delta f}{K^2 + (2\pi f)^2 C^2}$$

Responsivity (frequency, spectral, blackbody, and W-factor (see page 56)):

$$|R_v(f)| = \frac{R_{v0}}{\sqrt{1 + (2\pi f\tau)^2}} \qquad R_v(\lambda) = \frac{v_{\mathrm{sig}}}{\phi_{\mathrm{sig}}(\lambda)} \qquad R_i(\lambda) = \frac{i_{\mathrm{sig}}}{\phi_{\mathrm{sig}}(\lambda)}$$


$$R_{v,e}(\lambda) = R_{v,e}(\lambda_{\mathrm{cutoff}})\,\frac{\lambda}{\lambda_{\mathrm{cutoff}}}$$

$$R(T) = \frac{R_v(\lambda_{\mathrm{cutoff}})}{\lambda_{\mathrm{cutoff}}}\,\frac{\displaystyle\int_0^{\lambda_{\mathrm{cutoff}}} M_{e,\lambda}(\lambda)\,\lambda\,d\lambda}{\sigma T^4}$$

$$W(\lambda_{\mathrm{cutoff}},T) = \frac{R_v(\lambda_{\mathrm{cutoff}})}{R(T)} = \frac{\sigma T^4}{\dfrac{1}{\lambda_{\mathrm{cutoff}}}\displaystyle\int_0^{\lambda_{\mathrm{cutoff}}} M_{e,\lambda}(\lambda)\,\lambda\,d\lambda} = \frac{\sigma T^4}{\dfrac{hc}{\lambda_{\mathrm{cutoff}}}\displaystyle\int_0^{\lambda_{\mathrm{cutoff}}} M_{p,\lambda}(\lambda)\,d\lambda}$$

Noise-equivalent power (NEP):

$$\mathrm{NEP} = \frac{v_n}{R_v} = \frac{v_n}{v_{\mathrm{sig}}/\phi_{\mathrm{sig}}} = \frac{\phi_{\mathrm{sig}}}{\mathrm{SNR}} \quad [\mathrm{W}]$$

Specific or normalized detectivity (D*):

$$D^* = \frac{\sqrt{A_d\,\Delta f}}{\mathrm{NEP}} = \frac{\sqrt{A_d\,\Delta f}}{\phi_d}\,\mathrm{SNR} = \frac{\sqrt{A_d\,\Delta f}}{v_n}\,R_v \quad [\mathrm{cm\,\sqrt{Hz}/W}]$$

D**:

$$D^{**} = \sin\theta\,D^*$$

Photovoltaic detectors under BLIP conditions:

$$\mathrm{SNR}_{\mathrm{PV}} = \frac{\eta q\,\dfrac{\lambda}{hc}\,\phi_{e,\mathrm{sig}}}{\sqrt{2q^2\,\eta\,\dfrac{\lambda}{hc}\,\phi_{e,\mathrm{bkg}}\,\Delta f}}$$

$$\mathrm{NEP}_{\mathrm{PV,BLIP}}(\lambda) = \sqrt{\frac{hc}{\lambda}\,\frac{2\,\phi_{e,\mathrm{bkg}}\,\Delta f}{\eta}}$$

$$D^*_{\mathrm{PV,BLIP}}(\lambda) = \frac{\lambda_{\mathrm{cutoff}}}{hc}\sqrt{\frac{\eta}{2\displaystyle\int_0^{\lambda_{\mathrm{cutoff}}} E_{\mathrm{bkg}}(\lambda)\,d\lambda}} = \frac{F/\#\,\lambda_{\mathrm{cutoff}}}{hc}\sqrt{\frac{2\eta}{\pi\displaystyle\int_0^{\lambda_{\mathrm{cutoff}}} L_{\mathrm{bkg}}(\lambda)\,d\lambda}}$$
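The NEP and D* definitions invert each other through the detector area and bandwidth; a short sketch with hypothetical numbers:

```python
# Sketch (hypothetical numbers): converting between NEP and the specific
# detectivity D* = sqrt(Ad * df) / NEP.

import math

def d_star(area_cm2, bandwidth_hz, nep_w):
    return math.sqrt(area_cm2 * bandwidth_hz) / nep_w

def nep_from_dstar(area_cm2, bandwidth_hz, dstar):
    return math.sqrt(area_cm2 * bandwidth_hz) / dstar

Ad = (50e-4) ** 2      # 50-um square detector, area in cm^2
df = 100.0             # noise bandwidth, Hz
NEP = 1e-12            # W
Dstar = d_star(Ad, df, NEP)    # cm*sqrt(Hz)/W
```
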


Photovoltaic detectors under JOLI conditions:

$$\mathrm{SNR}_{\mathrm{PV,JOLI}} = \frac{q\eta\,\phi_{e,\mathrm{sig}}\,\dfrac{\lambda}{hc}}{\sqrt{4k\,\Delta f\left(\dfrac{T_d}{R_d} + \dfrac{T_f}{R_f}\right)}} \cong \frac{q\eta\,\phi_{e,\mathrm{sig}}\,\dfrac{\lambda}{hc}}{\sqrt{4k\,\Delta f\,\dfrac{T_d}{R_d}}} \quad (R_f \gg R_d)$$

$$D^*_{\mathrm{PV,JOLI}} = \frac{\lambda q\eta}{2hc}\sqrt{\frac{R_d A_d}{kT_d}}$$

Generation-recombination noise:

$$i_{n,G/R} = 2qG\sqrt{\eta E_p A_d\,\Delta f + g_{\mathrm{th}}\,\Delta f} \cong 2qG\sqrt{\eta E_p A_d\,\Delta f} \qquad i_{n,G/R} = \sqrt{2}\,i_{n,\mathrm{shot}}\,G$$

Photoconductive detectors under BLIP conditions (the gain G cancels between the responsivity and the G/R noise):

$$\mathrm{NEP}_{\mathrm{PC,BLIP}} = \frac{2hc}{\lambda}\sqrt{\frac{E_{\mathrm{bkg}}A_d\,\Delta f}{\eta}} \qquad D^*_{\mathrm{PC,BLIP}} = \frac{\lambda}{2hc}\sqrt{\frac{\eta}{E_{\mathrm{bkg}}}}$$

Photoconductive detectors under JOLI conditions:

$$\mathrm{NEP}_{\mathrm{PC,JOLI}} \equiv \frac{i_j}{R_{i,\mathrm{PC}}} = \frac{\sqrt{4kT\,\Delta f/R_{\mathrm{eq}}}}{\dfrac{\lambda q\eta}{hc}\,G} \qquad D^*_{\mathrm{PC,JOLI}} = \frac{\lambda q\eta G}{2hc}\sqrt{\frac{R_{\mathrm{eq}}A_d}{kT}}$$

Pyroelectric detectors:

$$R_i = \frac{A_d R_{\mathrm{th}}\,p\,\varepsilon\,\omega}{\sqrt{1 + \omega^2\tau_{\mathrm{th}}^2}} \qquad R_v = \frac{A_d R_d R_{\mathrm{th}}\,p\,\varepsilon\,\omega}{\sqrt{1 + \omega^2\tau_{\mathrm{th}}^2}\,\sqrt{1 + \omega^2(R_d C_d)^2}}$$

$$\mathrm{NEP} = \frac{v_{\mathrm{johnson}}}{R_v} = \frac{\sqrt{4kT\,\Delta f}\,\sqrt{1 + \omega^2\tau_{\mathrm{th}}^2}\,\sqrt{1 + \omega^2(R_d C_d)^2}}{A_d\sqrt{R_d}\,R_{\mathrm{th}}\,p\,\varepsilon\,\omega}$$

$$D^* = \frac{\sqrt{A_d^3 R_d}\,R_{\mathrm{th}}\,p\,\varepsilon\,\omega}{\sqrt{4kT}\,\sqrt{1 + \omega^2\tau_{\mathrm{th}}^2}\,\sqrt{1 + \omega^2(R_d C_d)^2}}$$


Bolometer detectors:

$$R_v = \frac{\alpha\varepsilon v_{\mathrm{bias}}}{2K\sqrt{1 + \omega^2\tau_{\mathrm{th}}^2}} \qquad \mathrm{NEP} = \frac{4K\sqrt{1 + \omega^2\tau_{\mathrm{th}}^2}\,\sqrt{kTR_d\,\Delta f}}{\alpha\varepsilon v_{\mathrm{bias}}} \qquad D^* = \frac{\alpha\varepsilon v_{\mathrm{bias}}\sqrt{A_d}}{4K\sqrt{1 + \omega^2\tau_{\mathrm{th}}^2}\,\sqrt{kTR_d}}$$

Thermoelectric detectors:

$$R_v = \frac{\varepsilon\,\alpha_S\,N}{K\sqrt{1 + \omega^2\tau_{\mathrm{th}}^2}}$$

where $\alpha_S$ is the Seebeck coefficient and $N$ is the number of junctions.

Scanning and staring systems:

$$\mathrm{SNR} \propto \sqrt{\text{number of sensor elements}} = \sqrt{n_d}$$

Range equation:

$$r = \sqrt{\frac{\pi D_{\mathrm{opt}}\tau_{\mathrm{opt}}}{4\,F/\#}\;I\,\tau_{\mathrm{atm}}\;D^*\;\frac{1}{\mathrm{SNR}\sqrt{\Omega_d\,\Delta f}}}$$

$$r_{\mathrm{BLIP}} = \sqrt{\frac{\pi D_{\mathrm{opt}}\tau_{\mathrm{opt}}}{4}\;I\,\tau_{\mathrm{atm}}\;\frac{\lambda}{hc}\sqrt{\frac{2\eta}{\pi L_{\mathrm{bkg}}}}\;\frac{1}{\mathrm{SNR}\sqrt{\Omega_d\,\Delta f}}}$$

Noise-equivalent irradiance (NEI):

$$\mathrm{NEI} = \frac{4\,F/\#\,\sqrt{\Omega_d\,\Delta f}}{\pi D_{\mathrm{opt}}D^*\tau_{\mathrm{opt}}\tau_{\mathrm{atm}}} \qquad \mathrm{NEI}_{\mathrm{BLIP}} = \frac{4hc}{\lambda\,D_{\mathrm{opt}}\tau_{\mathrm{opt}}\tau_{\mathrm{atm}}}\sqrt{\frac{L_{\mathrm{bkg}}\,\Omega_d\,\Delta f}{2\pi\eta}}$$

Modulation depth or contrast:

$$M = \frac{A_{\max} - A_{\min}}{A_{\max} + A_{\min}}$$
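The range equation is straightforward to evaluate; the sketch below uses illustrative values only (units as noted in the comments, not values from the text):

```python
# Sketch (all values illustrative): the point-source range equation with
# r in cm, I in W/sr, D* in cm*sqrt(Hz)/W, and D_opt in cm.

import math

def detection_range(d_opt, tau_opt, intensity, tau_atm, dstar, snr,
                    omega_d, df, fnum):
    num = math.pi * d_opt * tau_opt * intensity * tau_atm * dstar
    return math.sqrt(num / (4.0 * fnum * snr)) / (omega_d * df) ** 0.25

r_cm = detection_range(d_opt=10.0, tau_opt=0.8, intensity=100.0,
                       tau_atm=0.6, dstar=5e10, snr=5.0,
                       omega_d=1e-8, df=100.0, fnum=2.0)
```

Note the weak fourth-root dependence on the detector solid angle and bandwidth, which the formula makes explicit.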


Optics MTF—calculations (square & circular exit pupils):

$$\mathrm{MTF}(\xi,\eta) = \frac{M_{\mathrm{img}}(\xi,\eta)}{M_{\mathrm{obj}}(\xi,\eta)}$$

$$\mathrm{MTF}(\xi) = 1 - \frac{\xi}{\xi_{\mathrm{cutoff}}} \quad (\xi \le \xi_{\mathrm{cutoff}}) \quad \text{(square exit pupil)}$$

$$\mathrm{MTF}(\xi) = \begin{cases} \dfrac{2}{\pi}\left\{\cos^{-1}\left(\dfrac{\xi}{\xi_{\mathrm{cutoff}}}\right) - \dfrac{\xi}{\xi_{\mathrm{cutoff}}}\left[1 - \left(\dfrac{\xi}{\xi_{\mathrm{cutoff}}}\right)^2\right]^{1/2}\right\} & \text{for } \xi \le \xi_{\mathrm{cutoff}} \\ 0 & \text{otherwise} \end{cases} \quad \text{(circular exit pupil)}$$

Detector MTF—calculation:

$$\mathrm{MTF}_d(\xi,\eta) = \mathrm{sinc}(d_h\xi)\,\mathrm{sinc}(d_v\eta)$$

MTF measurement techniques:

Point-spread function response:

$$\mathrm{MTF}(\xi,\eta) = |\mathcal{F}\{\mathrm{PSF}\}|$$

Line-spread function response:

$$\mathrm{MTF}(\xi) = |\mathcal{F}\{\mathrm{LSF}\}|$$

Edge-spread function response:

$$\frac{d(\mathrm{ESF})}{dx} = \mathrm{LSF}$$

Bar-target response:

$$\mathrm{CTF}(\xi_f) = \frac{M_{\text{square-response}}(\xi_f)}{M_{\text{input-square-wave}}(\xi_f)}$$

$$\mathrm{CTF}(\xi_f) = \frac{4}{\pi}\left[\mathrm{MTF}(\xi_f) - \frac{1}{3}\mathrm{MTF}(3\xi_f) + \frac{1}{5}\mathrm{MTF}(5\xi_f) - \frac{1}{7}\mathrm{MTF}(7\xi_f) + \cdots\right]$$

$$\mathrm{MTF}(\xi_f) = \frac{\pi}{4}\left[\mathrm{CTF}(\xi_f) + \frac{1}{3}\mathrm{CTF}(3\xi_f) - \frac{1}{5}\mathrm{CTF}(5\xi_f) + \frac{1}{7}\mathrm{CTF}(7\xi_f) + \cdots\right]$$

Random-noise target response:

$$\mathrm{MTF}(\xi) = \sqrt{\frac{\mathrm{PSD}_{\mathrm{output}}(\xi)}{\mathrm{PSD}_{\mathrm{input}}(\xi)}}$$
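The CTF/MTF series can be coded directly; the sketch below keeps only the four harmonics written out above (a truncation, not the full Coltman series) and applies it to a hypothetical Gaussian MTF:

```python
# Sketch: four-term truncation of the square-wave/sine-wave series
# relating CTF and MTF, as written out above.

import math

def ctf_from_mtf(mtf_fn, xi):
    return (4.0 / math.pi) * (mtf_fn(xi) - mtf_fn(3 * xi) / 3.0
                              + mtf_fn(5 * xi) / 5.0 - mtf_fn(7 * xi) / 7.0)

def mtf_from_ctf(ctf_fn, xi):
    return (math.pi / 4.0) * (ctf_fn(xi) + ctf_fn(3 * xi) / 3.0
                              - ctf_fn(5 * xi) / 5.0 + ctf_fn(7 * xi) / 7.0)

# Hypothetical Gaussian-like system MTF:
mtf = lambda x: math.exp(-(x / 10.0) ** 2)
ctf_at_2 = ctf_from_mtf(mtf, 2.0)   # CTF exceeds MTF at low frequencies
```
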


Strehl intensity ratio (SR):

$$\mathrm{SR} = \frac{\iint \mathrm{MTF}_{\mathrm{actual}}(\xi,\eta)\,d\xi\,d\eta}{\iint \mathrm{MTF}_{\text{diff-limited}}(\xi,\eta)\,d\xi\,d\eta}$$

Noise-equivalent temperature difference (NETD):

$$\mathrm{NETD} = \frac{4}{\pi}\left[\frac{(F/\#)^2\sqrt{\Delta f}}{D^*\sqrt{A_d}\;\partial L/\partial T}\right]$$

$$\mathrm{NETD}_{\mathrm{BLIP}} = \frac{2\sqrt{2}}{\sqrt{\pi}}\,\frac{hc}{\lambda}\left[\frac{F/\#\,\sqrt{\Delta f}\,\sqrt{L_{\mathrm{bkg}}}}{\sqrt{A_d}\,\sqrt{\eta}\;\partial L/\partial T}\right]$$

$$\mathrm{NETD} = \frac{\Delta T}{\mathrm{SNR}}$$

Minimum resolvable temperature difference (MRTD):

$$\mathrm{MRTD}(\xi_t) \propto \frac{\xi_t\,\sqrt{\mathrm{HIFOV}\cdot\mathrm{VIFOV}}}{\mathrm{MTF}(\xi_t)\,\sqrt{\tau_{\mathrm{eye}}\cdot\tau_{\mathrm{frame}}}}\times\frac{(F/\#)^2\sqrt{\Delta f}}{D^*\sqrt{A_d}\;\partial L/\partial T} = K(\xi_f)\,\frac{\mathrm{NETD}}{\mathrm{MTF}(\xi_f)}$$

Johnson criteria:

$$\frac{f}{\sqrt{A_d}} = \frac{2\,r\,n_{\mathrm{cycles}}}{x_{\min}}$$
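As a numerical illustration of the NETD expression (all inputs assumed, including the band radiance contrast ∂L/∂T):

```python
# Numerical illustration (assumed values, including the band radiance
# contrast dL/dT) of NETD = (4/pi) * F#^2 * sqrt(df) / (D* sqrt(Ad) dL/dT).

import math

def netd(fnum, df_hz, dstar, ad_cm2, dl_dt):
    # dl_dt: background radiance contrast, W/(cm^2 sr K)
    return (4.0 / math.pi) * fnum ** 2 * math.sqrt(df_hz) / (
        dstar * math.sqrt(ad_cm2) * dl_dt)

value_K = netd(fnum=1.5, df_hz=60.0, dstar=5e10, ad_cm2=2.5e-5, dl_dt=6e-5)
# on the order of millikelvin for these assumed inputs
```
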


Bibliography

J. E. Greivenkamp, Field Guide to Geometrical Optics, SPIE Press, 2004.

M. Bass, Handbook of Optics, Vols. I & II, McGraw-Hill, New York, 1995.

E. Hecht and A. Zajac, Optics, Addison-Wesley, Massachusetts, 1974.

F. A. Jenkins and H. E. White, Fundamentals of Optics, McGraw-Hill, New York, 1981.

M. Born and E. Wolf, Principles of Optics, Pergamon Press, New York, 1986.

W. J. Smith, Modern Optical Engineering, McGraw-Hill, New York, 2000.

J. M. Lloyd, Thermal Imaging Systems, Plenum, New York, 1975.

R. D. Hudson, Infrared System Engineering, Wiley, New York, 1969.

E. L. Dereniak and G. D. Boreman, Infrared Detectors and Systems, John Wiley & Sons, New York, 1996.

W. L. Wolfe and G. J. Zissis, The Infrared Handbook, Infrared Information Analysis (IRIA) Center, 1989.

G. D. Boreman, Fundamentals of Electro-Optics for Electrical Engineers, SPIE Press, 1998.

G. D. Boreman, Modulation Transfer Function in Optical and Electro-Optical Systems, SPIE Press, 2001.

R. W. Boyd, Radiometry and the Detection of Optical Radiation, Wiley, New York, 1983.

R. H. Kingston, Detection of Optical and Infrared Radiation, Springer-Verlag, New York, 1979.


R. J. Keyes, "Optical and infrared detectors," Topics in Applied Physics, Vol. 19, Springer-Verlag, New York, 1980.

W. L. Wolfe, Introduction to Infrared Systems Design, SPIE Press, 1996.

G. C. Holst, Testing and Evaluation of Infrared Imaging Systems, JCD Publishing, 1993.

G. C. Holst, Common Sense Approach to Thermal Imaging Systems, SPIE Press, 2000.

J. D. Gaskill, Linear Systems, Fourier Transforms, and Optics, Wiley, New York, 1978.

J. W. Goodman, Introduction to Fourier Optics, McGraw-Hill, New York, 1968.

W. Wittenstein, J. C. Fontanella, A. R. Newbery, and J. Baars, "The definition of OTF and the measurement of aliasing for sampled imaging systems," Optica Acta, Vol. 29, pp. 41–50 (1982).

S. K. Park, R. Schowengerdt, and M. Kaczynski, "MTF for sampled imaging systems," Applied Optics, Vol. 23, pp. 2572–2582 (1984).

S. E. Reichenbach, S. K. Park, and R. Narayanswamy, "Characterizing digital image acquisition devices," Opt. Eng., Vol. 30(2), pp. 170–177 (1991).

A. Daniels, G. D. Boreman, A. D. Ducharme, and E. Sapir, "Random transparency targets for MTF measurement in the visible and infrared," Opt. Eng., Vol. 34(3), pp. 860–868, March 1995.

G. D. Boreman and A. Daniels, "Use of spatial noise targets in image quality assessment" (invited), Proceedings of International Congress of Photographic Science, pp. 448–451, 1994.


A. Daniels and G. D. Boreman, "Diffraction effects of infrared halftone transparencies," Infrared Phys. Technol., Vol. 36(2), pp. 623–637, July 1995.

A. D. Ducharme and G. D. Boreman, "Holographic elements for modulation transfer function testing of detector arrays," Opt. Eng., Vol. 34(8), pp. 2455–2458, August 1995.

M. Sensiper, G. D. Boreman, and A. D. Ducharme, "MTF testing of detector arrays using narrow-band laser speckle," Opt. Eng., Vol. 32(2), pp. 395–400 (1993).

G. D. Boreman, Y. Sun, and A. B. James, "Generation of random speckle with an integrating sphere," Opt. Eng., Vol. 29(4), pp. 339–342 (1993).


Index

1/f noise, 52, 106
Δf, 46

Abbe number, 19
absorption, 15, 21
absorption coefficient, 15
advantage, 72
afocal systems, 10
aluminum, 38
angular magnification, 10
AΩ product or optical invariant, 26, 105
aperture stop (AS), 6
area or longitudinal magnification, 4, 103
assumption, 72
astronomical telescope, 10
autocorrelation, 44
axial ray, 6

back focal length (b.f.l.), 5
background-limited infrared photodetector (BLIP), 49, 63
blackbody (BB), 30
blackbody responsivity R(T, f), 56
blur spots, 12
bolometer, 42, 66
bolometer detectors, 109
brass, 38
brick, 38
brightness temperature (Tb), 39

carbon, 38
cardinal points, 5
cavity radiation, 30
central-ordinate theorem, 84
chief ray, 6
cold shield, 11
cold stop, 11
cold-stop efficiency, 11
color temperature (Tc), 40
common path interferometers, 92
concrete, 38
contrast, 2
contrast transfer function (CTF), 88
converging beam scanner, 70
copper, 38
cryogenic temperatures, 11
current responsivity, 53

D**, 61, 107
D*_PV,BLIP, 61
detection, warning, or go-no-go systems, 75
detectivity, 42
detector MTF—calculation, 110
detector output voltage, 56
Dewar, 11
diameter, 7
diffraction, 12
diffraction limited, 12, 83
diffraction-limited expressions, 103
digitization, 86
dispersion, 19
durable, protected, or hard coated, 17
dwell time (τdwell), 70

effective focal lengths, 5
electromagnetic radiation, 1
electromagnetic spectrum, 1
emission, 21
emissivity, 36, 38, 105
enhanced, 17
enlarging lenses, 9
entrance pupil (Denp), 6
erecting telescope, 10
exit pupil (Dexp), 6
exitance and radiance, 24, 104
extended-area source, 28
external transmittance, 16, 104


F-number and numerical aperture, 103
F-number (F/#), 7
FF and FB, front and back focal points, 5
field lens, 11
field stop, 8
field-of-view (FOV), 2, 8, 103
first and second principal points Po and Pi, 5
flux, 24
flux collection efficiency, 2
flux transfer, 8
focal plane arrays (FPAs), 86
footprint, 8
frequency range, 1
Fresnel equation (normal incidence), 15, 104
front focal length (f.f.l.), 5
fundamental equation of radiation transfer, 104
fundamental spatial frequency ξf, 88, 95

Galilean telescope, 10
Gaussian lens equation, 3
generation-recombination (G/R) noise, 50, 108
glass, 38
Golay cells, 42
gold, 38
good absorbers are good emitters, 37

hard coated, 17
human skin, 38

image irradiance, 28
image quality, 2, 8, 80
immersion lens, 11
impulse response, 80
index of refraction, 11
infrared-imaging systems, 2
instantaneous field-of-view (IFOV), 70
intensity, 24, 28, 104
internal transmittance, 16, 104
iron, 38
irradiance, 24, 105
isoplanatism, 81

Johnson criteria, 79, 99, 111
Johnson noise, 51, 106
Johnson-limited noise performance (JOLI), 61, 63

Keplerian telescope, 10
Kirchhoff's law, 37, 106
knife-edge spread response (ESF), 88

lacquer, 38
Lambertian disc, 28
Lambertian radiator, 25
lateral or transverse magnification, 4, 103
law of reflection, 15, 103
line-spread function (LSF), 87
linearity, 81
longitudinal magnification, 4
lubricant, 38

magnification, 8
marginal ray, 6
material dispersion, 15
mean, 43
metals and other oxides, 38
minimum resolvable temperature difference (MRTD), 79, 95, 111
mirrors, 16
modulation depth or contrast, 109


modulation transfer function (MTF), 79
MTF measurement techniques, 110

narcissus effect, 14
NEΔf, 46
Newtonian lens equation, 3
nickel, 38
nodal points No and Ni, 5
noise equivalent irradiance (NEI), 78, 109
noise equivalent power (NEP), 57, 107
noise equivalent temperature difference (NETD), 79, 93, 111
noise-equivalent bandwidth, 46, 106
noise-equivalent power, 42
nonmetallic materials, 38
numerical aperture (NA), 7
Nyquist frequency, 85

objective lenses, 9
oil, 38
open circuit, 59
optical aberrations, 8, 12
optical axis, 3
optical invariant, 26
optical path difference (OPD), 92
optical transfer function (OTF), 81
optics MTF—calculations (square & circular exit pupils), 110

paint, 38
paper, 38
parallel beam scanner, 70
parallel scanning, 73
paraxial approximation, 3
peak exitance contrast, 105
phase transfer function (PTF), 81
phasing effects, 86
photoconductive (PC) detector, 62
photoconductive detector under BLIP conditions, 108
photoconductive detector under JOLI conditions, 108
photoconductor, 62, 63
photodiode, 59
photon detector, 42
photons, 1
photovoltaic detectors under BLIP conditions, 107
photovoltaic detectors under JOLI conditions, 108
photovoltaic (PV) detectors, 59
Planck's equation, 2
Planck's radiation equation, 34
Planck's radiation law, 105
plaster, 38
point source, 24, 28
point-spread function (PSF), 87
power spectral density (PSD), 44, 106
primary and secondary principal planes, 5
protected, 17
pyroelectric, 42
pyroelectric detector, 64, 108

Ri, 53
Rv, 53
radiance, 25, 28
radiation temperature (Trad), 39


radiation transfer, 25
radiometry, 23
range equation, 75, 109
Rayleigh-Jeans radiation law, 34, 105
reciprocal relative dispersion or Abbe number, 19, 104
reflection loss, 15
refractive index n, 15, 103
relative partial dispersion, 19, 104
resolution, 2
responsive time constant, 54
responsivity (frequency, spectral, blackbody, and K-factor), 42, 53, 106
reverse bias, 59
rupture modulus, 15

sampling effects, 86
sand, 38
scan noise, 14
scanning and staring systems, 74, 109
scattered flux density, 21
search systems, 75
Seebeck coefficient, 69
self-radiation, 2
shading, 14
shift invariance, 81
short circuit, 59
shot noise, 48, 106
signal-to-noise ratio (SNR), 23, 49
silver, 38
Snell's law, 3, 15, 104
soil, 38
solid angle equations, 104
solid angle Ω, 22
spatial frequency, 80
spatial resolution, 79
specific or normalized detectivity (D*), 58, 107
spectral radiometric quantities, 30
spectral responsivity R(λ, f), 55
stainless steel, 38
standard deviation, 43
staring systems, 74
steel, 38
Stefan-Boltzmann constant, 33
Stefan-Boltzmann law, 33, 34, 105
steradians [ster], 22
stray radiation, 11
Strehl intensity ratio (SR), 84, 111
superconductors, 42

telecentric stop, 6
telecentric system, 6
temperature, 2
temperature noise, 52, 106
terrestrial telescope, 10
thermal detector, 42
thermal conductivity, 15
thermal equations in photon-derived units, 34
thermal expansion, 15
thermal imaging system (TIS), 79
thermal noise, 11
thermal sensitivity, 79
thermistors, 66
thermocouple, 69
thermoelectric detector, 69, 109
thermopile, 42, 69
thick lens equation, 103
thin lens, 3
thin lens equation, 103
through-put, 26
time delay and integration (TDI), 72
tin, 38
transfer function, 80
transmission range, 15
type of radiation, 1


variance or mean-square, 43
voltage responsivity, 53

water, 38
water-erosion resistance, 15
wavelength range, 1
white noise, 45
Wien's displacement law, 33, 34, 105
Wien's radiation law, 34, 105
wood, 38


Arnold Daniels is a senior engineer with extensive experience in the development of advanced optical and electro-optical systems. His areas of expertise include applications for infrared search and imaging systems, infrared radiometry testing and measurements, thermographic nondestructive testing, Fourier analysis, image processing, data acquisition systems, precision optical alignment, and adaptive optics. He received a B.S. in Electro-Mechanical Engineering from the Autonomous University of Mexico and a B.S. in Electrical Engineering from the Israel Institute of Technology (Technion). He earned an M.S. in Electrical Engineering from the University of Tel-Aviv and received a doctoral degree in Electro-Optics from the School of Optics (CREOL) at the University of Central Florida. In 1995 he received the Rudolf Kingslake Medal and Prize, which is awarded in recognition of the most noteworthy original paper to appear in SPIE's journal Optical Engineering. He is presently developing aerospace systems for network-centric operations and defense applications at Boeing-SVS.