
VIDEO CAMERA CALIBRATION AND RADIOMETRIC CORRECTION

by

David John Evans

B.Sc., University of British Columbia, 1983

THESIS SUBMITTED IN PARTIAL FULFILLMENT OF

THE REQUIREMENTS FOR THE DEGREE OF

MASTER OF SCIENCE

in the Department

of

Geography

© David John Evans 1987

SIMON FRASER UNIVERSITY

June 29th, 1987

All rights reserved. This work may not be reproduced in whole or in part, by photocopy

or other means, without permission of the author.

Name : David John Evans

Degree: Master of Science

Title of Thesis: Video Camera Calibration and Radiometric Correction

Examining Committee:

Chairman: I. Hutchinson

A.C.B. Roberts
Senior Supervisor

T.K. Poiker

K. Colbow
Professor
External Examiner
Department of Physics
Simon Fraser University

Date Approved : 3 -L 2+7

PARTIAL COPYRIGHT LICENSE

I hereby grant to Simon Fraser University the right to lend my thesis, project or extended essay (the title of which is shown below) to users of the Simon Fraser University Library, and to make partial or single copies only for such users or in response to a request from the library of any other university, or other educational institution, on its own behalf or for one of its users. I further agree that permission for multiple copying of this work for scholarly purposes may be granted by me or the Dean of Graduate Studies. It is understood that copying or publication of this work for financial gain shall not be allowed without my written permission.

Title of Thesis/Project/Extended Essay

Video Camera Calibration and Radiometric Correction

Author:

(signature)

David John Evans

( name

June 29, 1987

(date)

ABSTRACT

This thesis examines improvements in the radiometric accuracy of a video image using camera calibration and image correction procedures. The objective of this study was to determine the effectiveness of three different radiometric correction techniques. Two analog corrections were implemented with an addition or multiplication procedure and were evaluated in terms of radiometric quality against a test image corrected using a bilinear spatial interpolation analytical correction. Since the objective of the study was to evaluate improvements in radiometric quality, geometric corrections were not considered.

Video cameras introduce radiometric distortions into the images they produce. These distortions are a result of image fall-off, vignetting and non-uniform spatial sensitivity across the photosensitive surface. A historical review of video remote sensing examined these problems by focusing on the calibration procedures and correction techniques used with video cameras mounted on space probe, satellite, airborne and terrestrial platforms. Of the procedures reviewed, differentiation has been made between analog and analytical methods of image correction.

Distortion modeling, a method of describing the magnitude and spatial extent of radiometric distortions, is an important prerequisite if correction procedures are to be effective. The calibration procedure used for this study consisted of nine images acquired at successively higher exposures. These data were applied to a test image using each correction procedure and the resulting correction performance was evaluated. The result of this process was a comparison between 19 corrected test images. It was concluded that an analog multiplicative correction procedure provided the most effective correction and suffered least from the effects of random digital fluctuations introduced by digitization. However, under some specific circumstances analytical correction procedures may be superior. To date, computational antivignetting filters have not been adequately tested or evaluated and on the basis of this research they may hold considerable promise.

ACKNOWLEDGMENTS

I would like to thank Dr. A.C.B. Roberts, my senior supervisor, for his guidance and patience exhibited throughout my research. The critical reading and comments of Dr. T.K. Poiker and Dr. K. Colbow were very helpful and aided in resolving several conceptual issues. The expertise of Dr. Steve Kloster (Computing Services, SFU) and Brian Radcliffe (Technical Services, Instructional Media Center, SFU) made this investigation possible. Completion of this thesis was aided by the comments of Pauline Pigeau, who resolved many of the structural writing problems. In addition, the encouragement and assistance from members of the Geography Department was greatly appreciated. Finally, financial assistance in the form of an Open Graduate Scholarship from SFU made it possible for me to pursue my studies.

DEDICATION

To my Parents

TABLE OF CONTENTS

Approval ........................................................... ii
Abstract ........................................................... iii
Acknowledgments .................................................... iv
Dedication ......................................................... v
List of Tables ..................................................... viii
List of Figures .................................................... ix
Introduction ....................................................... 1
1:1 Video in Perspective ........................................... 2
1:2 Context of The Study ........................................... 4
Radiometric Distortions in Video Systems ........................... 6
2:1 Introduction ................................................... 6
2:2 Distortions in Video Images .................................... 8
2:3 Spacecraft and Satellite Platforms ............................. 10
2:4 Airborne and Terrestrial Applications .......................... 17
2:5 Summary ........................................................ 26
Methods ............................................................ 27
3:1 Camera Calibration ............................................. 27
3:2 Rationale for Template Correction .............................. 32
3:3 Multiplicative Correction ...................................... 32
3:4 Additive Correction ............................................ 34
3:5 Bilinear Spatial Interpolation Correction ...................... 36
3:6 Linear Regression Analysis ..................................... 38
3:7 Video Grey Scale and Contact Print Calibration ................. 39
3:8 Characterization of Digital Response ........................... 41
3:9 Summary ........................................................ 42
Results ............................................................ 43
4:1 Video Grey Scale and Contact Print Calibration Results ......... 43
4:2 Template Image Digital Response ................................ 44
4:3 Correction Performance ......................................... 55
4:4 Predicted Contact Print Reflectivity ........................... 59
4:5 Predicted Video Grey Scale Reflectivity ........................ 62
4:6 Linear Regression Analysis ..................................... 64
4:7 Summary ........................................................ 66
Discussion ......................................................... 67
5:1 Correction Limitations ......................................... 67
5:2 Analog Implementation of Each Template Correction .............. 79
5:3 Applicability of Each Correction ............................... 81
Conclusions ........................................................ 82
6:1 Concluding Comments ............................................ 83
Glossary ........................................................... 87
References ......................................................... 91

LIST OF TABLES

Table                                                              Page

4.1 Calibrated video grey scale and contact print reflectivity expressed as a percentage of incident light ........... 45
4.2 Template image statistics showing the average digital response, standard deviation and range of digital intensities found in each template image ........... 52
4.3 Average digital response of each corrected test image ........... 57
4.4 Digital variability of grey step 1 (scale A) for the multiplicative and additive correction techniques ........... 59
4.5 Contact print reflectivity slope and intercept coefficients for each corrected test image ........... 60
4.6 Contact print reflectivity and predicted contact print reflectivity for each corrected test image ........... 61
4.7 Video grey scale reflectivity and predicted video grey scale reflectivity of each corrected test image ........... 63
4.8 Linear regression analysis results showing the correlation coefficient, similarity distance and corresponding corrected image ........... 65
5.1 Illustration of the effect of random digital fluctuations on the correction performance of the analog multiplicative correction technique ........... 74
5.2 Illustration of the effect of random digital fluctuations on the correction performance of the analog additive correction technique ........... 76

LIST OF FIGURES

Figure                                                             Page

2.1 Idealized radiometric response at two locations in the video camera field of view ........... 9
3.1 Spectral sensitivity of VSP Labs Model SC500 solid state video camera ........... 29
3.2 Transmittance characteristics for Kodak Wratten Filter No. 88A ........... 30
4.1 Plot showing calibrated video grey scale reflectivity as a function of calibrated contact print reflectivity ........... 46
4.2 Enhanced density gradient representation of digital response of the grey step 1 calibration image ........... 47
4.3 Enhanced density gradient representation of digital response of the grey step 2 calibration image ........... 47
4.4 Enhanced density gradient representation of digital response of the grey step 3 calibration image ........... 48
4.5 Enhanced density gradient representation of digital response of the grey step 4 calibration image ........... 48
4.6 Enhanced density gradient representation of digital response of the grey step 5 calibration image ........... 49
4.7 Enhanced density gradient representation of digital response of the grey step 6 calibration image ........... 49
4.8 Enhanced density gradient representation of digital response of the grey step 7 calibration image ........... 50
4.9 Enhanced density gradient representation of digital response of the grey step 8 calibration image ........... 50
4.10 Enhanced density gradient representation of digital response of the grey step 9 calibration image ........... 51
4.11 Upper left diagonal digital response for each template image ........... 53
4.12 Lower left diagonal digital response for each template image ........... 54
4.13 Photograph of test image showing video grey scale A and B ........... 56
5.1 Average digital response as a function of calibrated video grey scale A reflectivity for each template image ........... 68
5.2 Standard deviation of digital response as a function of calibrated video grey scale A reflectivity for each template image ........... 69
5.3 Upper left diagonal digital response for a series of pixels located on a transect running from the upper left corner to the lower right corner of each template image ........... 71
5.4 Lower left diagonal digital response for a series of pixels located on a transect running from the lower left corner to the upper right corner of each template image ........... 72
5.5 Enhanced density gradient representation of digital response of template image one after correction by bilinear spatial interpolation ........... 80

CHAPTER I

INTRODUCTION

Remote sensing can be defined as the collection of information about an object without being in physical contact with the object and is restricted to the methods that record electromagnetic radiation reflected or emitted from an object (Sabins, 1978). If the true characteristics of the object are to be determined then this information must be recorded as accurately as possible. This is a difficult task since detectors invariably introduce distortions into the data that they collect. To reduce the severity of these distortions a detector must be calibrated and the data corrected.

There are two types of calibration, absolute and relative. This investigation evaluated the utility of analog radiometric image correction procedures using absolute calibration methods with a computational procedure. For the purposes of this study an analog correction refers to an analytical computational procedure that is analogous to the use of antivignetting filters on metric photographic cameras. An optical antivignetting filter is designed to be strongly absorbing in its central area and becomes progressively transparent towards its circumferential area. It is used to improve the radiometric uniformity of a photograph. The computational method of implementation provided radiometrically discrete and spatially continuous correction coefficients, for each pixel in an image, that were calculated from the original distortion patterns found in a calibration image. Absolute calibration implies knowledge of the object in fundamental units and requires measurements in which the comparison is made with maintained standards. In practice one does not need to know the absolute magnitude of the radiation from a scene but rather the relative magnitude of the radiance of one scene element with respect to another, or the relative radiance of the same scene element observed at different times, or the magnitude of the spectral radiance in one wavelength band with respect to another. Thus, stability and repeatability with time are much more important than traceability to a known source (Colwell, 1983, 366).
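The radial transmittance profile of an optical antivignetting filter described above can be sketched numerically. This is an illustrative toy model only: the function name, image dimensions and the quadratic profile with its `strength` parameter are my assumptions, not specifications from the thesis or from any filter manufacturer.

```python
import numpy as np

def antivignetting_transmittance(x, y, width=512, height=512, strength=0.4):
    """Transmittance of an idealised antivignetting filter (illustrative).

    The filter absorbs most strongly at the optical centre and becomes
    progressively more transparent towards its circumference, so that it
    compensates the radial light fall-off of the lens.
    """
    # Radial distance from the image centre, normalised to [0, 1].
    r = np.hypot(x - width / 2, y - height / 2) / np.hypot(width / 2, height / 2)
    # Transmittance rises from (1 - strength) at the centre to 1.0 at the corner.
    return (1.0 - strength) + strength * r**2
```

A quadratic profile is only one plausible shape; the essential property is that transmittance increases monotonically with distance from the optical axis.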

1:1 Video in Perspective

Digital image processing began with the space program. For these studies, focus was placed on using vidicon cameras both as radiometers and mapping instruments. To satisfy program objectives it was necessary to develop elaborate analytical calibration procedures to remove radiometric and geometric distortions from the cameras. In this context, such an analytical correction refers to a computational procedure that was based on the radiometric response characteristics of the video camera. This correction used spatially and radiometrically continuous functions to compute the correct digital intensity for each pixel in an image. Malfunctions of the RBV systems on the Landsat 1, 2 and 3 satellites caused research emphasis in the 1970's and 1980's to be shifted to the multispectral scanners on these satellite platforms. Nonetheless, video technology improved as a result of competitive pressures from industrial and home consumer manufacturers and now researchers are in a position to utilize the improved geometric and radiometric characteristics of solid state video sensors for their image processing and mapping problems.

The image processing technology now available has the potential to allow an approximation of real time analysis of video data for contemporary applications. Initial airborne and terrestrial experimentation did not involve elaborate analytical calibration techniques since investigators were generally unfamiliar with the attributes of video imagery. The results of contemporary investigations illustrate that basically the same radiometric and geometric problems existed with these systems as were identified in the space program. Current research has placed more emphasis on geometric correction of video images because of the photogrammetric properties of corrected images and the ease with which they can be acquired. These are appealing alternatives to traditional photogrammetric techniques. As more demands are placed on the radiometric qualities of video images, more attention will be placed on radiometric correction. The end result of this evolution will be the development of techniques which allow the removal of geometric and radiometric distortions quickly and efficiently.

The potential need for real time information in certain applications raises serious concern for the usefulness of complex analytical solutions. These solutions are hardware intensive, require powerful computers to implement and may not be cost effective for most applications. Therefore, a trade off must be made between the effectiveness of radiometric corrections and the speed at which these corrections can be achieved. Initially, a middle ground between uncorrected and analytically corrected images can be found with analog corrections and these may ultimately be more accurate if video cameras are either sufficiently stable or predictable over time.

Due to the growing popularity of digital image analysis, more investigators are beginning to use microcomputers for image interpretation since they are inexpensive and powerful; a variety of digital image processing software packages are also available to extend their utility. However, these hardware and software image analysis systems place little emphasis on image correction before enhancement and interpretation. Analog correction procedures employing specialized hardware and software configurations may provide a low cost correction alternative for users who wish to input data into computers with video cameras.

There are relative implementation advantages and disadvantages between analog and analytical corrections depending on the application. To address this issue, the purpose of this investigation was to examine whether or not analog corrections are superior to analytical solutions because of their holistic nature. Because of their simplicity, analog corrections, unlike analytical procedures, can not take into account changes in camera sensitivity without re-calibration. Since airborne and terrestrial systems are readily accessible to the user, analog calibrations and corrections can be made and re-calibrated at will. In contrast, analytical solutions are more flexible and can deal with changes in sensitivity without re-calibration. Analytical corrections require more computational time and space, and are most suited for remote platforms (i.e. satellites) which are generally not accessible to the user and must be corrected without re-calibration.

1:2 Context of The Study

The purpose of this investigation was to examine the hypothesis that the radiometric accuracy of a video image may be enhanced by the removal of sensor system error with camera calibration and analog image correction. For the purposes of this investigation system error has been defined as any systematic disturbance that obscures or reduces the radiometric fidelity or quality of a video signal (additional definitions can be found in the Glossary). Therefore, the objective of this study was to determine the effectiveness of each of the additive and multiplicative analog and the bilinear spatial interpolation analytical radiometric corrections. In general, radiometric correction should improve image quality by removing systematic digital fluctuations that are caused by the lens optics and electronic components of a video camera system.

The analog additive correction used an image template which characterized the spatial radiometric distortions introduced into an image by the video camera-lens system. Correction coefficients were produced by generating a negative image of an evenly illuminated grey step adjusted so that its minimum digital intensity corresponded to zero. The adjusted negative template image was added to other digitized imagery collected with the same camera-lens system on a pixel by pixel basis (Roberts and Evans, 1986). This correction can be considered analogous to the use of antivignetting filters on metric photographic cameras. It is also similar to the old tradition in the photographic arts of printing photographs with the original camera lens on the enlarger to negate optical distortions introduced by the lens with a reversal procedure.
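The additive procedure just described can be made concrete with a minimal NumPy sketch. The function name, the assumption of 8-bit digitized imagery and the final rounding and clipping step are mine, not details taken from the thesis implementation.

```python
import numpy as np

def additive_correction(test_image, template):
    """Analog additive correction (a sketch, not the thesis code).

    The template is a digitized flat-field image of an evenly
    illuminated grey step.  Its negative, offset so that the minimum
    correction coefficient is zero, is added to the test image pixel
    by pixel, brightening pixels the camera-lens system renders too dark.
    """
    template = template.astype(np.float64)
    negative = template.max() - template      # negative of the template
    coefficients = negative - negative.min()  # minimum adjusted to zero
    corrected = test_image.astype(np.float64) + coefficients
    # Round and clip back to the assumed 8-bit range of the digitizer.
    return np.clip(np.rint(corrected), 0, 255).astype(np.uint8)
```

One quick sanity check is to apply the correction to the template itself: the result should be a uniform image, showing that the coefficients flatten the shading pattern.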

As an alternative analog procedure, the multiplicative correction also involved the use of a single template correction applied to every image. Characterization of spatial shading was obtained by recording a single image of uniform intensity at a light level within the imaging system's dynamic range. This single frame served as a template for shading removal. The average digitized brightness for the template image was computed and the intensity at each pixel was ratioed to this average value. These ratios were then inverted and multiplied by pixel values in corresponding positions in other digitized target images produced by the same camera system (Green, 1983). This correction, in theory, is also comparable to the analog use of antivignetting filters on metric photographic cameras.
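The ratio-and-invert steps above can likewise be sketched in NumPy. As with the additive sketch, the function name and the 8-bit rounding and clipping are assumptions of mine rather than details from the thesis.

```python
import numpy as np

def multiplicative_correction(test_image, template):
    """Analog multiplicative correction (a sketch, not the thesis code).

    Each template pixel is ratioed to the template's average digitized
    brightness; the inverted ratios become per-pixel coefficients that
    brighten darker pixels and darken brighter ones.
    """
    template = template.astype(np.float64)
    ratios = template / template.mean()   # pixel / average brightness
    coefficients = 1.0 / ratios           # inverted ratios
    corrected = test_image.astype(np.float64) * coefficients
    # Round and clip back to the assumed 8-bit range of the digitizer.
    return np.clip(np.rint(corrected), 0, 255).astype(np.uint8)
```

Applied to the template itself, this correction yields a uniform image at the template's average brightness, which is the behaviour that distinguishes it from the additive form: it pulls pixels toward the mean from both directions.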

Both techniques made use of the same information contained in the template image in different ways. The additive technique used the inverse of the difference in digital intensity between each pixel and the darkest pixel in the template image. These values were then added to a test image on a pixel by pixel basis in the final stage of the correction. The net result of this procedure was to make pixels that were too dark, brighter. On the other hand, the multiplicative technique used the inverse ratio of the digital response around the average intensity of the template image. The resulting correction coefficients were multiplied by the test image on a pixel by pixel basis. The result of this correction was to make darker pixels brighter and brighter pixels darker.

The calibration data set used to perform each template correction consisted of a sequence of nine flat field template images recorded at successively higher exposures. This procedure allowed the relationship between input luminance and output digital intensity to be established for a given exposure (see Figure 2.1). Each of the nine template images collected during this calibration procedure was applied to a test image using both of the analog algorithms to yield a total of 18 corrected test images.

In addition to the analog corrections, a more complex radiometric correction model was developed by utilizing all of the calibration images in an analytical solution (Green, 1983). The result of this process was a table of correction coefficients that established the relationship between input luminance and output digital intensity for each pixel in the test image. These data were used with bilinear spatial interpolation to remove the radiometric distortion effects from the digitized imagery. This interpolation provided modified digital intensity values at each pixel location which were linearly related to radiometric intensity. The bilinear spatial interpolation technique is a common analytical correction procedure and was used in a comparison with the two analog corrections.
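One plausible reading of this analytical procedure can be sketched as follows. Each pixel's nine calibration samples define a transfer curve from input luminance to output digital intensity; interpolating along that curve maps an observed intensity back to a linearized value. The sketch assumes the per-pixel response increases monotonically with exposure, and it uses simple per-pixel linear interpolation as a stand-in for the full bilinear scheme, whose exact formulation this chapter does not give; all names are mine.

```python
import numpy as np

def analytical_correction(test_image, templates, luminances):
    """Per-pixel calibration-curve correction (sketch of one reading).

    `templates` is the stack of nine flat-field calibration images,
    one per exposure level, and `luminances` holds the input luminance
    of each exposure.  For every pixel the nine (digital intensity,
    luminance) pairs define a transfer curve; interpolating along it
    yields a value linearly related to scene radiance.
    """
    h, w = test_image.shape
    corrected = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            # Per-pixel calibration curve: digital response at each
            # exposure, assumed monotonically increasing.
            curve = templates[:, i, j].astype(np.float64)
            corrected[i, j] = np.interp(test_image[i, j], curve, luminances)
    return corrected
```

If two pixels have different sensitivities, the same scene radiance produces different raw intensities but the same corrected value, which is the linearization the thesis attributes to this technique.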

CHAPTER II

RADIOMETRIC DISTORTIONS IN VIDEO SYSTEMS

2:1 Introduction

In response to increasing interest in the use of video equipment for remote sensing applications, several authors (Vlcek and King, 1984; Meisner and Lindstrom, 1985; Meisner, 1986) have reviewed the advantages and disadvantages of video remote sensing. In a brief discussion of video imaging systems for multispectral aerial surveys Vlcek and King (1984) focused on special imaging and analysis problems related to multispectral video systems. They also mentioned that corrections for shading, atmospheric effects, geometric distortions and time base errors may be important considerations for some applications, but no details were provided. In another study, Meisner and Lindstrom (1985) discussed the attributes of video systems relative to aerial photography. A recent article by Meisner (1986) outlined the basics of video technology. Meisner pointed out both the advantages and disadvantages with respect to the user. He indicated the most important advantage is immediate availability of the imagery but its major disadvantage is its low spatial resolution. These attributes make video useful for some applications, but inappropriate for others. To cope with these constraints he presented a variety of options for each component of an aerial video system. Beyond describing hardware, he discussed the airborne operation of video equipment including mounting configurations and image motion factors. These are important considerations if video is to be used effectively for remote sensing applications.

A historical account of video remote sensing provides a description of space probe and satellite platforms and is included to give an insight into the work that went into and the experience that resulted from two and a half decades of digital image processing in the space program. Many lessons can be learned from these efforts which are applicable to the problems facing researchers today. Although video technology has advanced rapidly over the past twenty-five years many of the hardware considerations and calibration techniques employed with the old vidicon systems can be applied with little modification to the present solid state video cameras.

Today the use of video cameras for documentary purposes is common and it is a logical result that resource managers and industrial technologists with their near real time requirements for overview or scientific imagery would begin to use portable video cameras and video cassette recorders for data collection. In general, investigators have enjoyed some success using video for remote sensing applications. However, some problems do exist since, with a few exceptions (Hodgson et al., 1981; Vlcek and King, 1984; Curry et al., 1986; El-Hakim, 1986; Roberts and Evans, 1986), little emphasis has been placed on camera calibration and image correction. Of the work that has been done, analytical solutions have generally been used for distortion modeling.

Analytical solutions require complex calibration procedures and correction algorithms. As a result, they are hardware intensive, expensive, generally time consuming and are susceptible to distortions from small errors introduced when modeling each variable in the solution. However, when these corrections are suitably implemented with multispectral video imagery the resulting corrected imagery may offer significant advantages relative to colour/multispectral aerial photography for applications that require a high degree of classification accuracy.

Radiometric correction and digital enhancement of video images can be obtained in seconds or

minutes when using, respectively, analog or analytical solutions. This is very fast relative to the

day or more that it takes aerial photography to be processed and interpreted. This difference in

time from data acquisition to final interpretation indicates digital techniques are superior for appli-

cations which require immediate information for the decision making process. Digital techniques

can be quite variable and the modes of implementation can be very different for each radiometric

. solution. Generally, analytical solutions use one or more continuous functions to perform corrections

on each pixel. In contrast, analog corrections are relatively simple since distortion patterns intro-

duced by different sources are represented by a single digital intensity. Because of these

fundamental differences, analytical corrections are much more complex in comparison to analog so-

lutions. This holistic approach for distortion modeling using an analog correction reduces the

amount of hardware and expense required for calibration and correction when compared to analyti-

cal techniques. Because of its simplicity an analog correction has the potential to remove distortions

quickly and may prove useful for applications that require video data to be processed in real time.
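The contrast between the two approaches can be sketched in a few lines of code. The response function and per-pixel offsets below are invented for illustration and are not values from this study; the point is only that an analytical correction evaluates a continuous function at every pixel, while an analog correction applies one stored correction intensity per pixel in a single pass.

```python
# Sketch (illustrative values only): analytical vs. analog radiometric correction.

def analytical_correct(image, response):
    """Analytical path: evaluate a fitted, continuous response function per pixel."""
    return [[response(dn) for dn in row] for row in image]

def analog_correct(image, correction_frame):
    """Analog path: apply one stored correction intensity per pixel in one pass."""
    return [[dn + c for dn, c in zip(row, crow)]
            for row, crow in zip(image, correction_frame)]

# A hypothetical linearizing function (e.g., the inverse of a gamma response).
linearize = lambda dn: round(255 * (dn / 255) ** 0.5)

image = [[64, 100], [144, 196]]
correction = [[10, 5], [5, 10]]   # assumed per-pixel offsets

print(analytical_correct(image, linearize))  # -> [[128, 160], [192, 224]]
print(analog_correct(image, correction))     # -> [[74, 105], [149, 206]]
```

In practice the analog correction frame would itself be derived from calibration imagery of the specific camera, which is why it fails when camera sensitivity changes after calibration.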

2:2 Distortions in Video Images

Video cameras introduce two major radiometric distortions into the images they produce

(Green, 1983):

1. The digital intensity values are not a linear function of the light intensity producing them,

and;

2. Each camera's response to different light levels is not spatially uniform (i.e., fall-off, vignetting and non-uniform spatial sensitivity across the sensor surface).

These two effects are illustrated, although the severity has been exaggerated, in Figure 2.1.

The two curves show the idealized digital intensity which results from imaging a linearly increasing

light intensity. Each curve is from a different pixel location within the camera field of view: curve

A is from the upper left corner; and curve B is from the lower right corner of the image.

In theory, a perfect sensor will respond to light by producing digital intensity values that are

linearly related to the light intensity being reflected from the target. However, as expected, most

video systems respond nonlinearly and this is the first major cause of radiometric distortion. At low

exposures digital response will increase at an increasing rate relative to input light intensity and at high exposures digital response will increase at a decreasing rate.

An important radiometric factor influencing exposure in video and photographic systems is

exposure fall-off. This effect is a variation in focal plane exposure associated with the distance an

Figure 2.1: Idealized radiometric response at two locations in the video camera field of view. Curves show the digital intensity that results from imaging a linearly increasing light intensity (Green, 1983).

image point is from the image center. Because of fall-off, a ground scene of spatially uniform reflectance

does not produce a spatially uniform exposure in the focal plane. Instead, for a uniform

ground scene, exposure in the focal plane is maximum at the image center and decreases with ra-

dial distance from the center (Lillesand and Kiefer, 1979: 359).
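The source describes fall-off qualitatively; the standard model (not stated in the text) is the cosine-fourth law, in which focal-plane exposure at an off-axis angle θ falls to cos⁴θ of its on-axis value. A minimal sketch:

```python
import math

def falloff_factor(theta_deg):
    """Relative focal-plane exposure at off-axis angle theta (cos^4 law)."""
    return math.cos(math.radians(theta_deg)) ** 4

# For a uniformly bright ground scene, exposure drops toward the image edge.
for theta in (0, 10, 20, 30, 40):
    print(f"{theta:2d} deg: {falloff_factor(theta):.3f}")
```

At 30° off-axis the exposure is already down to about 56% of the on-axis value, which is why a uniform scene does not yield a uniform image.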

This systematic effect caused by fall-off is compounded by differential transmittance of the

lens and by vignetting effects in the camera optics. Vignetting is a result of internal shadows cast

by lens mounts and other aperture surfaces within the camera (Lillesand and Kiefer, 1979: 362).

2:3 Spacecraft and Satellite Platforms

The calibration techniques developed for the vidicon cameras utilized in the space program

were very detailed and required complex equipment to implement. Basically, these calibration pro-

cedures consisted of component calibration, subsystem calibration after assembly and in situ verifi-

cation of camera performance. The monitoring of camera performance while in space was very im-

portant for two reasons. First, the trauma of blast off placed severe forces on the camera assembly

and this generally changed the performance characteristics of the camera. Secondly, ambient environmental conditions in space (e.g., pressure, temperature) were variable and were very different from

those found on Earth. Under these different conditions camera sensitivity usually changed relative

to its initial performance on Earth. Because of these changes, calibration data collected on Earth

had to be modified, using additional information, to take into account changes in camera sensitivity

while the camera was functioning in space. Since camera performance was variable and there was

no way of retrieving the sensor to recalibrate it, analytical solutions were required to effectively

correct returned images. Analog corrections could not be used in this situation simply because the

distortion patterns changed and there was no way to accurately re-calibrate the cameras (analog

solutions do not have the capacity to take into account changes in sensitivity unless the camera is

recalibrated).

Ranger

The Ranger spacecraft, first launched in 1962, utilized vidicon cameras. These cameras were

subject to geometric distortion, radiometric nonlinearity, and noise contamination (Castleman,

1979). The use of preflight calibration images allowed radiometric and geometric correction, streak

removal and contrast enhancement. However, the first three Rangers failed to return pictures of

the moon and the following Ranger Four failed to turn on.

Mariner Mars 1964

Mariner 4 was launched on November 28, 1964, and passed Mars on July 14, 1965, sending

back 22 images. Calibration procedures included reseau removal, shading compensation (correc-

tions for fall-off, vignetting and non-uniform spatial sensitivity across the sensor surface) and other

additional corrections. Reseau position and camera shading were carefully measured before launch,

but sensitivity changed during the trauma of blastoff (Castleman, 1979). As a result, prelaunch

calibration data could not be accurately used since they required further modification. These data,

however, could not be collected since no internal calibration procedures had been used and since the

probe could not be retrieved.

Surveyor 1965

The Surveyor spacecraft were lunar landers equipped with a vidicon camera, a soil sampling scoop, and a variety of lunar surface experiments. A total of five Surveyors landed on the moon

between 1965 and 1967 sending back 87,674 images. These images were successfully corrected

and employed to produce panoramic mosaics, photographic images, stereographic range images

(for distance measurements) and topographic maps. The calibration procedures were described in

Smokler (1968) and they consisted of light-transfer characteristics for black and white and colour

imagery, shading characteristics, modulation transfer functions and many others.

Mariner Mars 1969

As the Surveyor experiment was coming to a close, the Mariner 6 and 7 project was gaining

momentum. Each of the two spacecraft had a two-camera television system with identical picture

formats and electronics designed specifically for digital image processing. The cameras had two

basic modes of operation, one for the far encounter portion of the mission and the other for the clos-

est approach called the near encounter. The near encounter differed from the far encounter ar-

rangement by utilizing automatic gain control which determined the proper gain state for optimum

exposures.

The calibration of Mariner 6 and 7 consisted of component level calibration and subsystem

calibration (Danielson and Montgomery, 1971; Rindfleisch et al., 1971). Component level calibra-

tion measurements were performed on major camera components to determine their contributions

to the overall system performance. Included were measurements such as spectral transmission of

the lens, lens focal length, modulation transfer functions, shutter exposure time, vidicon spectral sensitivity, resolution and light transfer characteristics. After assembly, the complete subsystem was subjected to environmental tests to assure reliable and predictable operation while in space.

The entire camera calibration included measurement of the modulation transfer function of the lens

and the determination of many camera properties including gamma, dark current, shading and re-

sidual image characteristics.

The collection of radiometric calibration data for the analytical correction procedures con-

sisted of a series of flat field images generated by exposing each camera to a spatially uniform

scene at a variety of accurately measured luminance values over the dynamic range of the system.

This calibration was repeated for each camera filter, gain state and over a range of temperatures.

During the mission, the camera noise characteristics changed making image restoration very

difficult. To confound this situation, the automatic gain control returned no data regarding its own behavior. This information had to be inferred from the images and it was concluded that precise knowledge of the gain characteristics could not be obtained (Castleman, 1979). As a result, it

was impossible to radiometrically correct and compare one image with another because of exposure

differences.

Mariner Mars 1971

Mariner 9 was a Mars orbiter that returned more than 7,300 images requiring geometric

and radiometric correction (Green et al., 1975). Significant radiometric distortions were introduced

into the imagery by the vidicon camera system as a result of several effects:

1. The camera response was nonlinear (i.e., digital intensity was not a linear function of the

light intensity producing it);

2. The camera response varied nonlinearly with temperature;

3. The camera response varied across the camera field of view;

4. The camera response varied as different spectral filters were utilized.

To compensate for these distortions a light transfer sequence calibration procedure was used

to generate a radiometric correction data file. The light transfer sequence consisted of nine uniformly illuminated flat field frames recorded at successively higher exposures beginning at the

dark current frame and ending with a frame for which some or all of the pixels were saturated. For

each transfer sequence, luminance was incremented from zero to a level sufficient to exceed the

system dynamic range in nine steps. At each step an image was recorded, establishing the relationship between input luminance and output digital intensity for each pixel. The resulting data were

combined into radiometric calibration data files for use with the analytical correction procedures.

The images used to construct these radiometric calibration files contained several types of

additional distortions. Therefore, it was necessary to correct the calibration images before using

them to construct the final calibration file. This additional processing included removal of periodic

and random noise by digital filtering, correction for geometric distortion, removal of reseau marks

and correction for residual image. After initial processing, implementation of the analytical

radiometric correction consisted of a pixel by pixel spatial interpolation in the radiometric calibra-

tion file to determine the corrected brightness for each pixel.
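The per-pixel use of a transfer sequence can be sketched as interpolation in a small lookup table. The nine luminance/intensity pairs below are invented for one hypothetical pixel; the actual Mariner 9 calibration files held such data for every pixel in the image.

```python
# Sketch: invert a measured light-transfer sequence for one pixel.
# The nine-step 'luminance' and 'dn' pairs are invented illustration values,
# not Mariner 9 calibration data.
luminance = [0, 10, 20, 30, 40, 50, 60, 70, 80]       # input luminance steps
dn        = [5, 12, 30, 60, 100, 145, 190, 225, 250]  # recorded digital intensity

def correct(pixel_dn):
    """Map a recorded digital intensity back to luminance by linear
    interpolation between the calibration steps."""
    for i in range(len(dn) - 1):
        if dn[i] <= pixel_dn <= dn[i + 1]:
            frac = (pixel_dn - dn[i]) / (dn[i + 1] - dn[i])
            return luminance[i] + frac * (luminance[i + 1] - luminance[i])
    raise ValueError("DN outside calibrated dynamic range")

print(correct(100))    # falls exactly on a calibration step -> 40.0
print(correct(122.5))  # halfway between two steps -> 45.0
```

Because the table is measured per pixel, the same procedure simultaneously linearizes the response and removes spatial non-uniformity.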

In addition to pre-launch calibration procedures, emphasis was also placed on in situ verifica-

tion of camera performance (Thorpe, 1973). This provided a confirmation of radiometric corrections

used to correct returned images. Young (1974) provided a detailed description of the methods used

for estimating the radiometric changes that had occurred.

Mariner Venus Mercury 1973

The Mariner 10 mission, the first multiplanet mission, used a gravity assist from Venus to reach Mercury and returned over 16,000 images of the Earth, Moon, Venus and

Mercury. According to Castleman (1979), exacting preflight calibration permitted analytical

radiometric and geometric correction to be performed more accurately than on previous planetary

missions. However, beyond stating that the returned images were corrected, no further elaboration

on the calibration procedures was presented.

Viking 1975

The Viking mission sent two orbiter and two lander spacecraft to Mars, each carrying two

vidicon cameras. The orbiter provided images for high resolution surface mapping and stereometry.

The lander missions were the first space probes to produce colour images. The Viking lander cam-

eras acquired data in six spectral bands (each was approximately 0.1 μm wide) spaced over the 0.4 μm to 1.1 μm wavelength region for colour and near-infrared imaging.

Patterson et al. (1977) described three methods which were used to collect radiometric cali-

bration data for the Viking mission. As with the Mariner probes, both component and subsystem

calibrations were performed. First, the radiometric response of each camera as a function of wavelength, temperature, gain, and offset was determined by separate measurements of the

photosensor array, and optical and electrical components. Second, laboratory measurements were

acquired on the absolute radiometric response of the cameras with calibrated light sources and ref-

erence reflectors. Third, periodic measurements of the stability of this response were made by

using a light source internal to the camera.

To account for degradation in infrared sensitivity (caused by radiation damage from the

radioisotopic thermoelectric power source of the lander) an analytical model of camera response

was developed and fitted to the internal calibration data. This corrected infrared sensitivity to with-

in 3% of prelaunch conditions. Under such conditions an analog correction would not have been ap-

propriate since camera sensitivity changed while the cameras were mounted on their remote plat-

form.

Voyager 1976

The Voyager project involved two advanced spacecraft which were used to successfully ex-

plore Jupiter and Saturn, and are still currently exploring interplanetary space. These spacecraft

contain a variety of scientific experiments (Kohlhase and Penzo, 1977) including a modified version

of the vidicon camera designs that had been used on previous Mariner flights. Due to the length of

the mission, instrument life time was a major concern. Smith et al. (1977) stated that special

efforts were made to eliminate failures which would disable the cameras and to provide adequate

inflight camera calibration to improve the data reduction process (no further elaboration was giv-

en). The variety of flyby distances and target diameters necessitated two different optical systems

for the cameras. Both cameras had incorporated within them identical shutter assemblies and eight

position filter wheels. These shutter assemblies were similar to those used with photographic sys-

tems and allowed images to be taken at shutter speeds which are greater than the vidicon scan

rate. More precise exposures and sharper images can be obtained with these faster shutter speeds.

In addition, the use of higher speed shutters improved resolution by reducing image blur resulting

from target movement.

Landsat

Each of the three early Landsat Earth observing satellites carried a multispectral return

beam vidicon television system and a multispectral scanner. The return beam vidicon (RBV) was a

three camera television system with Landsat 1 and 2, and a two camera panchromatic system

with Landsat 3 utilizing conventional lenses and shutters with optical filtration (Colwell, 1983:

532). These cameras also suffered from significant shading effects (Clark, 1981). Bernstein (1976)

described analytical processing techniques that were used to correct Landsat RBV data. These

methods were similar to those used on the previous Mariner, Viking and Voyager missions.

However, instead of considering each pixel, their method mathematically structured the RBV images into correction zones, where each zone had a unique radiometric correction table that was

used to analytically compensate for RBV errors. However, in-flight calibration verification proce-

dures detected changes in system performance. To compensate for these changes Fusco and

Zandonella (1981) used in-flight calibration lamp data to update the radiometric correction coeffi-

cients obtained from preflight instrument calibration. From these data 324 new analytical

radiometric correction coefficients were generated for each image. Polynomial spatial interpolation

of the radiometric correction coefficients by means of a lookup table process determined the gain

and offset coefficients which were used to calculate the digital intensity for each pixel (NASDA,

1981; Tsuchiya et al., 1981).
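The zone-based scheme can be sketched as follows; the 2×2 zone table and zone size below are invented for illustration (the actual procedure generated 324 coefficients per image and interpolated spatially between them):

```python
# Sketch of zone-based radiometric correction (illustrative values only).
# Each image zone carries one (gain, offset) pair rather than one per pixel.
ZONE = 2  # zone size in pixels (assumed)

# One (gain, offset) pair per zone of a 4x4 image -> 2x2 zone table.
zone_table = [[(1.00, 0.0), (1.25, -5.0)],
              [(0.75, 3.0), (1.50, 0.0)]]

def correct(image):
    """Apply each pixel's zone coefficients: dn' = gain * dn + offset."""
    out = []
    for r, row in enumerate(image):
        new_row = []
        for c, dn in enumerate(row):
            gain, offset = zone_table[r // ZONE][c // ZONE]
            new_row.append(gain * dn + offset)
        out.append(new_row)
    return out

image = [[100] * 4 for _ in range(4)]
print(correct(image))
```

Looking up a small zone table is far cheaper than evaluating a function for every pixel, at the cost of coarser spatial resolution in the correction.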

Summary

This research on video remote sensing in the space program has provided the theoretical

foundation for this study. The pioneering efforts of these investigations illustrated that video cam-

eras were effective as remote sensing instruments. However, their utility was impeded by the

radiometric and geometric distortions introduced into the images they produced. In general, these

distortions were variable and changed with input light intensity, temperature, sensor surface,

optical geometry and different spectral filters.

Since radiometric distortion patterns were variable, flexible analytical correction procedures

were needed to take into account changes in camera sensitivity. This was accomplished by model-

ling the distortion patterns with calibration procedures before launch and then monitoring the cam-

era performance while in space. The initial camera calibration procedures determined the behav-

iour of each component in the camera system with respect to the overall camera performance. In

addition, a light transfer sequence of nine uniformly illuminated frames collected at successively higher exposures was used to model the radiometric performance of the

camera. These data were combined into the necessary analytical radiometric correction procedures.

Equally important was the verification of camera performance. If the performance changed, the

correction procedures had to be modified to take into account these changes.

On the basis of these results, it was concluded that a light transfer sequence was necessary

to model the radiometric distortion patterns of the camera used in this study.

2:4 Airborne and Terrestrial Applications

Space probe and satellite image quality had to be determined as precisely as possible because

once in space the instrument could not be retrieved for further modification, adjustment or calibra-

tion. These cameras had a variety of peripheral equipment that allowed for optimal exposure con-

trol (eg. automatic gain control and shutters). In addition to the camera components, equipment for

video telemetry was used to transmit data to Earth. Most of the data obtained from these systems

were used for mapping purposes, so great care was taken to restore the radiometric and geometric

characteristics of each image.

The industrial development of video technology resulted in significant improvements in video

sensor design. As opposed to vidicon cameras, charge coupled device (CCD) sensors offer stable

geometry and improved spectral sensitivity. For the most part these sensors have been designed

for home consumer and industrial applications where visual image quality is generally more

important than true scene reflectivity. These cameras adjust the digital intensities of each spectral

band so the image appears pleasing to the human eye. To adjust the apparent reflectivity of a

scene, radiometric resolution of the images is greatly reduced through the use of auto iris and automatic gain control (i.e., the exposure latitude is made very broad). If one of these images is analyzed

with a computer, misleading interpretations may be made. To counteract these problems, sensors

can be modified (auto iris removed and circuitry for automatic gain control modified) so that they

behave more like radiometers and can provide a more suitable sensor choice for some remote sens-

ing applications.

Single Camera Configurations

Under many circumstances temperature changes can have a considerable effect on the

radiometric performance of video systems (Green et al., 1975). In response to this problem,

Hodgson et al. (1981) developed and described an aerial video system with associated digitization

and data storage devices that controlled for temperature. This system utilized a CCD camera sensitive from 0.4 to 1.1 μm and had incorporated within it a thermoelectric cooling device and a solid

state temperature sensor. These modifications maintained this black and white video camera at a

predetermined temperature. When temperature is controlled, variations in output from each element of the imaging array can be predicted, unlike previous tube cameras. While operating the

array at a fixed temperature, a simple two point calibration procedure (i.e., linear regression) was

sufficient to establish dark current and gain parameters for each element in the array. To illustrate

the effect of temperature on camera performance the authors stated that the dark current doubled

for each 7 °C increase in operating temperature. Using a microcomputer, a complete uncorrected image, from a series of aerial images, was stored in six seconds, allowing a 74% overlap at a speed

of 250 kts and an altitude of 21,000 feet. If an analytical calibration procedure for temperature ad-

justment had been used the microcomputer would not have been able to store the imagery within

the necessary time frame.
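The two point procedure can be sketched for a single array element: a dark frame fixes the offset and one frame at a known exposure fixes the gain. The doubling rule for dark current quoted above is also easy to express. All numeric values below are invented for illustration, not data from Hodgson et al. (1981).

```python
# Sketch: two-point (dark frame + known-exposure frame) calibration of one
# array element, assuming a linear response dn = gain * light + offset.

def two_point(dark_dn, flat_dn, flat_light):
    """Derive (gain, offset) from a dark reading and one known exposure."""
    offset = dark_dn
    gain = (flat_dn - dark_dn) / flat_light
    return gain, offset

def light_from_dn(dn, gain, offset):
    """Invert the linear model to recover input light intensity."""
    return (dn - offset) / gain

gain, offset = two_point(dark_dn=8, flat_dn=208, flat_light=100.0)
print(light_from_dn(108, gain, offset))  # -> 50.0

def dark_current(d0, temp_rise_c):
    """Dark current doubles for each 7 C rise, as stated in the text."""
    return d0 * 2 ** (temp_rise_c / 7.0)

print(dark_current(8, 14))  # two doublings -> 32.0
```

The two-point model is valid only on the linear portion of the response curve, which is why thermal stabilization and appropriate exposures were essential.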

Thermal stabilization allowed the use of linear regression for distortion modeling. This proce-

dure utilized the straight line portion of the camera's digital response curve. Since a linear model

was employed, non-linear portions of the digital response curve were not considered. As a result,

extremely high and low exposures were not adequately modeled and corrected. However, at appropriate exposures the digital response would fall along the linear portion of the digital response

curve and this made the correction effective. In addition to providing conditions that were suitable

for linear analytical solutions, thermal stability provided conditions suitable for analog corrections.

On the basis of this study, therefore, it is concluded that a thermally stabilized solid state video camera with good linear response characteristics would be the ideal sensor for applications that require the use of analog computational antivignetting filtration for image correction. An

antivignetting filter has the potential to negate in near real time the effects of image fall-off, vignet-

ting and non-uniform spatial sensitivity across the sensor surface.

Everitt et al. (1986) evaluated the performance characteristics of a SWIR video camera sensitive to short wave infrared radiation (1.45 to 2.0 μm). The authors did not discuss any calibration

or correction procedures, but they concluded that this SWIR sensitive camera would be useful in

some remote sensing applications. Their investigation demonstrated that video cameras, sensitive

to areas outside the photographic spectrum, may be useful for certain specialized applications.

Although radiometric correction procedures have not been fully developed, a number of stud-

ies have successfully used airborne video imagery in resource management. Manzer and Cooper

(1982) made use of a black and white infrared sensitive video camera fitted with a silicon diode

array tube and a Tiffen 87 deep red filter to successfully distinguish potato blight on uncorrected

images. Edwards (1982) utilized a single band near-infrared video system, also fitted with a Tiffen

87 filter, to detect freeze damaged citrus trees and leaves using a visual analysis of the uncorrected

video data. The time to assess freeze damaged citrus was decreased from one day with conven-

tional photography to one hour with this video system. If a microcomputer with near real time

radiometric correction and classification capabilities had been employed, time to assess damage

could have been reduced to seconds or minutes. In another study, Gausman et al. (1983) described

a video system developed to demonstrate fundamentals of the interaction of near-infrared radiation

with plant leaves from a visual interpretation of uncorrected video images. It consisted of a portable video camera equipped with a near-infrared tube sensitive from 0.95 μm to 1.1 μm, a video

cassette recorder and a video monitor. If calibration and correction procedures had been used with

digital enhancement, the visual and machine interpretation capabilities of this system could have

been considerably enhanced.

In addition to the direct use of video systems for remote sensing purposes, video cameras are

now starting to be widely used as data input devices for computer systems. To demonstrate the

utility of digital analysis with photo interpretation, Lyon et al. (1986) described the measurement

of wind erosion damage on rangeland converted to farmland. A microcomputer system digitized

and enhanced uncorrected video images of the 35 mm aerial photographs used in this analysis. Since

this study did not require detection of subtle reflectivity changes, radiometric distortion patterns

that resulted from both the photographic and video cameras did not significantly impair digital an-

alysis. Subsequently, Gerten and Wiese (1987) employed video digitized 35 mm colour and colour

infrared aerial photographs to measure lodging (a symptom of foot rot disease where the stems rot

and fall) in winter wheat. They found video digitization measurements underestimated percent

lodging and yield when compared to visual interpretation of the original aerial photographs. The

authors suggested this underestimation was a result of fall-off and vignetting which impaired the

density slicing procedure used in the (uncorrected) image analysis. Had they corrected the distor-

tion patterns introduced by the video camera, the photographic distortions would still have been

present and probably would have somewhat biased the result.

El-Hakin (1986) described a system for automatic measurement of three dimensional coordi-

nates of object points using a CCD camera for photogrammetric purposes. Target image coordinate

measurements to sub-pixel accuracy were carried out and compared to a metric photographic cam-

era and a direct coordinate measuring machine. Automatic measurement of object coordinates was achieved to 0.1 of the pixel size at image scale. However, at object scale this accuracy was about

ten times worse than the metric camera. Since the CCD camera had a very small format it re-

quired a much smaller scale to image an object of the same size compared to the metric camera (the

metric camera used a 23 by 23 cm glass plate to form the image and the sensor surface size in the

CCD camera was 6.6 by 8.8 mm). Because of the small size of the sensor surface, more accurate

measurements could be made on the larger format camera when the target completely filled the

camera field of view. This preliminary study was presented from both geometric and radiometric

points of view and demonstrated that more extensive evaluation of CCD cameras and their correc-

tions is required. El-Hakin subsequently suggested that an improvement in the modeling of system-

atic errors is needed and these efforts should result in better coordinate measuring accuracy.

Curry et al. (1986) described the analytical calibration of a charge injection device video cam-

era. In this study, the effects of fall-off and vignetting on each pixel were modeled with orthogonal

polynomials (three dimensional polynomials that characterize the shape of the distorted

radiometric surface rather than the shape of a response curve (two dimensional) as with polyno-

mial regression). However, use of polynomial corrections did not significantly improve radiometric

quality. Other radiometric adjustments included subtraction of dark current.

Single Camera, Multiple Band Array

In an effort to reduce the size of multispectral imaging systems and to improve spectral reso-

lution, Meisner and Lindstrom (1985) developed a colour infrared video camera designed to record

colour video images that simulate colour infrared film (sensitivity ranged from 0.5 to 1.1 μm). The

use of one camera to collect three spectral bands reduced problems associated with image registra-

tion when producing a composite false colour image. When using three cameras to produce three

spectral bands it is often difficult to register one band upon another when constructing a composite

image. This is a result of unique geometric distortions and parallax that are introduced into images

collected by each camera. As a result, the same objects in each image will be reproduced in slightly

different locations. In another study Vlcek and Cheung (1984) gave examples of a video image

acquisition and analysis system based on a colour video camera sensitive from 0.4 to 0.7 μm. They

indicated that radiometric and geometric distortions should be corrected but provided no details.

The main advantage of a multiple band imaging video camera is the ease with which three

band images can be acquired, recorded and displayed. Since three bands have been acquired with

one camera, image registration is not a problem. However, with these systems it is virtually impos-

sible to control the filtration of each spectral band unless the camera is modified internally.

Therefore, it is difficult to adjust the spectral sensitivity of each band for a specific application and

this reduces sensor utility. In addition, no attempts were made, in either of these studies, to deal

with radiometric distortions but clearly this would be the next logical step with perfect multiband

registration.

Multiple Camera Arrays

Although colour video cameras can be designed for multispectral use, output from video re-

corders generally results in a composite colour image and reduced spectral separation between each

individual band. Therefore, a combined colour image in a video recorder is a spectrally limiting component in the imaging system. To reduce this effect, several researchers (Everitt and Nixon, 1985;

Richardson et al., 1985; Roberts and Evans, 1986) have designed multispectral monochromatic

video image acquisition systems that record the output from each video camera on separate video

cassette recorders. However, image registration problems may occur when producing composite

multiband images from these multiple camera configurations. Real time viewing and classification

of colour or false colour composite images would require very accurate bore sighting of the cameras

if images are to be registered with any degree of precision. Fortunately, software systems have

been designed to deal with these problems, but they can be time consuming and tend to reduce spectral resolution because of the radiometric interpolation techniques employed (International Imaging

Systems, 1981). With these various interpolation techniques, each pixel is affected by the intensity

of its neighbouring pixels through a local averaging procedure. As a result, the intensity of each

pixel is partially a function of its neighbour's intensity in the original video image rather than being

a function of true scene reflectivity.

Everitt and Nixon (1985) utilized three optically filtered monochromatic video cameras, and

three video recorders to simulate colour infrared film (sensitive from 0.4 to 1.1 μm) for rangeland

management purposes. The results showed the potential of a false colour video acquisition system

as a tool to assist in rangeland evaluation and other applications which necessitate near real time

visual analysis. However, their video images suffered from image fall-off and vignetting. To coun-

teract these problems, the authors suggested the use of an optical (direct analog) antivignetting fil-

ter on the infrared camera to improve the radiometric uniformity of images collected with this cam-

era.

Antivignetting filters are used to improve the uniformity of exposure throughout a photo-

graph. To negate illumination fall-off, optical antivignetting filters are designed to be strongly ab-

sorbing in their central area and are progressively transparent away from the center of the filter.

From Everitt and Nixon's suggestion it appears that some investigators have been considering di-

rect analog photometric techniques (similar to those used in aerial photography) rather than using

analytical camera calibration procedures for image correction. The use of optical antivignetting fil-

ters with photographic systems has been very reliable because photographic films have spatially

uniform radiometric sensitivity. However, to be used effectively with video systems, an

antivignetting filter would have to match the spatial radiometric sensitivity of the camera. As a re-

sult, such a filter would have to be employed with a specific camera and lens at a specific f-stop; in other words, it would have to be camera, lens and f-stop specific. Unfortunately, a photo-

graphic antivignetting filter designed for photographic systems would not correct for camera spe-

cific non-uniform spatial sensitivity when used on a video camera. To adequately correct for distor-

tion patterns that are unique to each filter/lens/camera system a computational version of a direct

analog optical antivignetting filter would be necessary. Such computational antivignetting filters

have not been adequately tested or evaluated to date.

To combine the attributes of both colour and colour infrared imagery several researchers

(Nixon et al., 1985; Richardson et al., 1985; Vlcek et al., 1985; Roberts and Evans, 1986) have

experimented with four camera multispectral video systems. The use of four spectral bands in dig-

ital analysis allows more precise radiometric definition of the targets under consideration. This re-

sults in an improvement in the ability to distinguish between subtle spectral reflectivity differences

between targets that are spectrally similar. In essence this approach is based on a multiple varia-

ble classification procedure in which more channels result in a better classification.

Nixon et al. (1985) described an inexpensive multiband video system that provided

narrowband (0.03 µm) imagery within the visible and near-infrared region that was used for the

assessment of vegetation conditions and discrimination of plant species. One of the four cameras

was modified with a camera tube sensitive from 0.3 to 1.1 µm while the other cameras had sensitivity from 0.4 to 0.7 µm. Two photographic cameras were also used with this system. The authors

did not describe any calibration or correction procedures in their discussion but suggested that a

combination of video and photographic systems may be useful for some remote sensing applica-

tions. Generally, video systems have superior radiometric resolution relative to most photographic

films, whereas photographic films have better spatial resolution. The use of video for

radiometrically discriminating targets and photography for accurately defining spatial distributions

of these targets may be very useful for some applications since the two systems are complemen-

tary and it would not be necessary for the video cameras to have high photogrammetric accuracy.

Richardson et al. (1985) employed a four camera multispectral video camera system to

digitally distinguish weed from crop plants. The video data used for this analysis were not cor-

rected for radiometric and geometric distortions. However, images used for multidate analysis were

collected at the same lens aperture (f-stop), and this exposure constant allowed a direct comparison. When these images were taken at different times but at the same f-stop, the geometric and

radiometric distortion patterns in all the images were similar (even though slight differences were

present because four different cameras were used) and could be compared to one another. On the

basis of their results they subsequently suggested that more emphasis should be placed on camera

calibration and image correction. This would allow the radiometric values from images acquired at

different f-stops to be directly compared and this would greatly increase the utility of the system.

Due to the high light sensitivity of video cameras (radiometric resolution), a broad f-stop range may

be required if comparisons between many different target conditions are necessary, especially with narrow band filtration (e.g. 0.1 µm). Since several f-stop exposures may be used, different distortion patterns would be introduced into each image collected at each f-stop. These images could only

be directly compared if the distortions were removed with satisfactory (analog or analytical) correc-

tion procedures.

Vlcek et al. (1985) utilized a four-band one recorder video data acquisition system designed to

operate in narrow spectral bands (e.g. down to 0.01 µm) in the visible and near-infrared spectrum.

To mitigate radiometric distortions caused by fall-off and vignetting the authors suggested using

only the central portion of the video images for analysis (the central portion of the image was least

affected by these radiometric distortions and, thus, this technique approximated a radiometer-like

approach). This method does not correct for non-uniform spatial sensitivity across the sensor sur-

face which would still distort these central digital intensities. In this study they did not implement

any radiometric corrections and did not evaluate sub-image performance.

Roberts and Evans (1986) described a four camera multispectral video system. Camera sen-

sitivity, fall-off and vignetting characteristics were examined and an analog image correction and

preprocessing procedure was outlined. Images collected with this system were compared to aerial

photography and Landsat MSS images by Roberts and Liedtke (1986) and Liedtke et al. (1986) in

their discussion of the airborne definition of suspended surface sediment and intertidal environ-

ments. Their results indicated that orbital data may be adequate for small scale studies, but both

aerial photography and multispectral video imagery provided superior identification of large scale

features. Radiometric corrections were not applied to the video data; instead, average central image values from each video image were used, as if the camera were a non-imaging radiometer, to evaluate the radiometric resolution of the suspended sediment targets.

2:5 Summary

Initially, very detailed analytical procedures were developed to restore the radiometric and

geometric characteristics of video images in the space program. After the use of video sensors on

space platforms and Earth orbiting satellites was discontinued and the broadcast industry had ex-

panded, new investigators began to see the advantages of industrial video systems for remote sens-

ing applications. However, these systems were generally unfamiliar to the users and, as a result,

radiometric and geometric problems began to appear, especially when computers were employed in

the interpretive process. From a review of these studies it is evident that as video remote sensing is

developing for terrestrial applications, calibration techniques are needed to overcome radiometric

and geometric problems.

In an attempt to mitigate some of the radiometric distortions found in video images, this

study evaluated two analog correction procedures against an analytical correction that was similar

to those used in the space program. Since the objective of this study was to improve the radiometric

quality of a video image, geometric corrections were not considered. This significantly simplified the calibration process since only radiometric problems were addressed.

CHAPTER III

METHODS

Methods were employed to radiometrically calibrate and correct images produced by the

video camera. Radiometric adjustments of digital intensity values were determined by the analog multiplicative and additive corrections, and by an analytical bilinear spatial interpolation correction.

Such procedures are system specific and methods of implementation will vary accordingly. Since

both video and microcomputer technologies are developing so rapidly, specific implementation pro-

cedures are probably less important than the general issues they address (i.e. the relative superiority of analog as opposed to analytical procedures for a given hardware configuration and application).

3:1 Camera Calibration

Many different types of video camera technology have been employed in remote sensing ap-

plications. However, little emphasis has been placed on developing suitable correction procedures

for near real time applications. In response to this problem, this study evaluated the effectiveness

of three different correction procedures. Images were taken with a VSP Labs Model SC500 video

camera. Radiometric correction was accomplished by adjusting the digital intensity of the test im-

age so that radiometric response would more closely approximate true target reflectivity. This task

was accomplished by calibrating the video camera, using the resulting calibration data to imple-

ment corrections and then comparing resulting corrected images to original target reflectivity.

Camera

The VSP Labs Model SC500 solid state video camera, utilizing charge coupled transfer tech-

nology (Collet, 1985), is small and compact, having been designed for industrial applications. Two

interlaced fields were imaged at a rate of 60 Hz under the control of an internal crystal controlled

oscillator which provided improved pixel positioning accuracy relative to other tube and solid state

video cameras. The electronic subassembly was attached mechanically to the bottom of the camera

housing in a fashion that made mechanical fine adjustment of focus possible. Specialized circuitry

output the video data as a smooth, spike free signal which allowed the user to select any sample

rate of data. For this study, video data was formatted to 512 x 512 pixels from 604 horizontal and

485 vertical scan lines. In addition to superior geometric resolution, relative to other solid state and

tube cameras, the sensor chip delivered a peak sensitivity of 90 milliamps per watt in the green

spectrum (Figure 3.1).

The camera also featured selectable gain control. Three different modes were possible: auto-

matic gain control; internal pre-set gain control; and remote variable gain control. During opera-

tion the camera consumed up to 0.5 amp at 15 volts DC. It weighed approximately 1.4 kg and

measured 7.6 cm high x 6.6 cm wide x 14.2 cm deep.

The camera was equipped with a Sony f/1.8 variable focal length (12.5 - 75 mm) C-mount lens set at 40 mm and f/1.8. To image in a macro environment, this lens was also used with a 10 mm Paillard-Bolex C-mount extension tube. An 88A glass filter, which is transparent to the photographic infrared (Figure 3.2), was added to the lens to increase image contrast.

Calibration

There were two possible methods of collecting the calibration data. Camera exposure could

have been controlled by either adjusting light intensity illuminating the target being imaged, or

light intensity could have been held constant while exposure was controlled by varying target

albedo. Adjustment of the light intensity could have been accomplished in several different ways.

Light intensity may have been changed by either: altering input voltage to the light source;

through use of a multiple light array (i.e. adding and subtracting lights of similar spectral quality);

or having an adjustable aperture in front of the light source. Several problems were encountered

when the effects of varying the light intensity with these different methods were considered.

Figure 3.1: Spectral sensitivity of VSP Labs Model SC500 solid state video camera showing digital response as a function of wavelength (Collet, 1985).

Figure 3.2: Transmittance characteristics for Kodak Wratten Filter No. 88A showing transmittance as a function of wavelength (Kodak, 1970).

Wien's Displacement Law states that as the temperature of the emitting body (the filament) increases, the

wavelength of maximum emittance decreases (Oke, 1978). Thus, an increase in applied voltage to

a light source would result in a spectral shift to shorter wavelengths. This problem was

compounded by non-uniform sensitivity across the entire photographic spectrum (i.e. different sensitivities occurred at different wavelengths). Therefore, varying light intensity with voltage changes

would not yield suitable results because of these effects. In addition, the use of a multiple light

array was not found feasible due to difficulties in keeping illumination even across the targets. A

variable aperture on each light was not appropriate because of equipment limitations among other

considerations (i.e. changes in the diffuse nature of the illumination with changes in aperture diameter).
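Wien's Displacement Law can be illustrated numerically. A minimal sketch in Python (the filament temperatures used here are illustrative values for a tungsten lamp, not measurements from this study):

```python
# Wien's Displacement Law: wavelength of maximum emittance (in nm) for an
# emitting body at absolute temperature T (kelvin).
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_nm(temp_k):
    return WIEN_B / temp_k * 1e9

# Raising the applied voltage raises filament temperature, so the peak of
# the emitted spectrum shifts to shorter wavelengths.
low_voltage_peak = peak_wavelength_nm(2700.0)   # cooler filament
high_voltage_peak = peak_wavelength_nm(3200.0)  # hotter filament
```

The shift of several hundred nanometres between these two temperatures falls squarely within the camera's region of non-uniform spectral sensitivity, which is why voltage control of exposure was rejected.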

For these reasons varying illumination was not considered. Instead, exposure was controlled

by varying target albedo. This was achieved by imaging each grey step of a PortaPattern video

grey scale calibration chart (see Figure 4.13). A limiting factor with this method was that minor

imperfections (e.g. scratches, smudges) existed between each of the grey steps. However, these aberrations were not as significant as the problems encountered by varying light intensity.

The calibration procedure for this investigation was similar in several respects to those em-

ployed for the Mariner probes (Danielson and Montgomery, 1971; Rindfleisch, 1971; Green et al.,

1976; Green, 1983). The calibration data set used to perform each radiometric correction consisted

of a sequence of nine uniformly illuminated flat field frames recorded at successively higher exposures beginning at the first grey step of the video grey scale and ending at the ninth step. For this

transfer sequence, luminance was held constant at an intensity sufficient to exceed a video output

level of one volt when imaging the ninth grey step (digital intensities were produced from video out-

put levels ranging between approximately 0.33 and 1.1 volts DC). At each step a frame was re-

corded establishing the relationship between input luminance (light reflected from the grey step en-

tering the camera) and the output digital number at each pixel in the calibration image. Each of these digitized frames contained distortion patterns produced by the video camera at a given

exposure (see Figures 4.2 to 4.10). The distortion patterns contained in each calibration image

were used for distortion removal from other images collected by the same filter/lens/camera sys-

tem.
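The transfer sequence described above lends itself to a per-pixel least squares fit. The sketch below (Python with NumPy, which postdates this work; the frames, step luminances and noise model are all invented for illustration) builds nine synthetic flat field frames and fits the slope and intercept of each pixel's response function:

```python
import numpy as np

# Hypothetical transfer sequence: nine flat-field frames, one per grey step,
# each digitized to 512 x 512 pixels, with an assumed luminance per step.
rng = np.random.default_rng(0)
step_luminance = np.linspace(0.1, 0.9, 9)      # relative luminance of the nine steps
frames = np.stack([
    np.clip(80 + 180 * lum + rng.normal(0, 2, (512, 512)), 0, 255)
    for lum in step_luminance
])                                             # calibration data set, shape (9, 512, 512)

# Per-pixel response function DN = M * luminance + B0, fitted by least
# squares over the nine exposures at every pixel simultaneously.
design = np.vstack([step_luminance, np.ones(9)]).T        # (9, 2) design matrix
coeffs, *_ = np.linalg.lstsq(design, frames.reshape(9, -1), rcond=None)
M = coeffs[0].reshape(512, 512)                # slope image
B0 = coeffs[1].reshape(512, 512)               # intercept image
```

The fitted slope and intercept images play the role of the radiometric correction file used by the analytical correction of section 3:5.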

3:2 Rationale for Template Correction

Preliminary examination of video imagery revealed severe image distortions that reduced

radiometric accuracy considerably. These distortions were a result of fall-off, vignetting and

non-uniform spatial sensitivity across the sensor chip. Because of these factors, different regions

within the camera's field of view responded differently to the same input light intensity (see

Figures 4.2 to 4.10).

While precise analytical radiometric corrections are possible and can improve radiometric ac-

curacy significantly, they are complex, expensive, time consuming, camera specific, very sensitive

to errors and are deemed impractical for near real time applications as they require a large amount

of computer memory and computing capacity. For near real time applications image data must be

corrected quickly, yet be sufficiently accurate to permit an improvement in radiometric quality.

The multiplicative and additive template corrections are two possible analog methods of correction

for these applications. Such analog corrections offer simple alternatives to other more complex

analytical correction procedures (for example the bilinear spatial interpolation correction).

3:3 Multiplicative Correction

Each template image produced by the calibration procedure provided a characterization of

the spatial shading and was used for shading removal. To determine which template was the most effective at improving radiometric precision, each of the nine template images was applied to the

test image through the multiplicative algorithm (Green, 1983).

The first step of the multiplicative correction calculated the average digitized brightness of

the template image (AIt) and then ratioed the digitized intensity (It,ls) at each pixel location to this

average value.

Thus,

Rt,ls = It,ls / AIt     (3.1)

where,

Rt,ls = ratio of template image t at line l, sample s;

It,ls = pixel intensity of template image t at line l, sample s;

AIt = average pixel intensity of template image t.

This provided a coefficient (Rt,ls) for each pixel that represented the relationship between pixel intensity (It,ls) and the average digitized brightness (AIt) of the template image. If It,ls was selected near the corner of the template image where shading was the most pronounced, the value of It,ls was low in comparison to AIt. When this pixel was ratioed to AIt, Rt,ls was less than one. Conversely, when It,ls was selected near the center of the template image, Rt,ls was greater than one. Therefore, when Rt,ls was less than one, the pixel was too dark because of the shading effects, and if Rt,ls was greater than one, the pixel was too bright in relation to AIt. To make dark pixels brighter and bright pixels darker, the values of Rt,ls were inverted (this in effect produced a negative of the shading characteristics of the template image) and multiplied by the pixel intensity values in corresponding positions in the test imagery.

Therefore,

CMi,ls = Bi,ls / Rt,ls     (3.2)

where,

CMi,ls = corrected pixel intensity i in test image at line l, sample s;

Bi,ls = test image pixel intensity i at line l, sample s.

If Rt,ls had a value less than one, and was inverted, its new value was greater than one. When this

was multiplied by Bi,ls, this pixel was made brighter since it existed in a relatively dark portion of

the image. The opposite was true when the inverted value of Rt,ls was less than one.
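The ratio and inverted-multiplication steps described above can be summarized in a few lines. The following sketch (Python with NumPy, not the thesis's own implementation; the synthetic shading pattern and intensities are invented for illustration) applies the multiplicative correction to a toy frame:

```python
import numpy as np

def multiplicative_correction(template, test_image):
    # Ratio each template pixel to the template's average brightness:
    # <1 in shaded corners, >1 near the bright centre.
    ratio = template.astype(float) / template.astype(float).mean()
    # Invert the ratio and multiply into the test image: shaded (dark)
    # regions are brightened, bright regions are darkened.
    return test_image / ratio

# Toy example: a flat target seen through a synthetic centre-bright
# shading pattern standing in for fall-off and vignetting.
yy, xx = np.mgrid[-1:1:64j, -1:1:64j]
shading = 1.0 - 0.4 * (xx**2 + yy**2)
template = 100.0 * shading            # flat-field calibration frame
test = 120.0 * shading                # flat scene distorted by the same shading
corrected = multiplicative_correction(template, test)
```

Because the toy test image shares the template's shading exactly, the corrected image comes out uniform; real imagery differs, since the shading pattern varies with exposure.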

3:4 Additive Correction

The additive analog correction was fundamentally the same as the multiplicative analog cor-

rection except for its mode of implementation (Roberts and Evans, 1986). Correction coefficients

were added to the test image with this procedure. Each of the nine template images was applied to the test image through the additive algorithm to determine which template image was most effective at removing shading effects.

This correction procedure computed AIt of the template image and then subtracted It,ls from

AIt at each pixel location of the template image, giving:

Dt,ls = AIt - It,ls     (3.3)

where,

Dt,ls = difference between the average intensity and each pixel intensity of the template image t at line l, sample s.

This subtraction provided a coefficient (Dt,ls) that represented the difference between It,ls and AIt.

If It,ls was selected near the corner of the template image where shading was the most pro-

nounced, subtraction resulted in a positive Dt,ls. However, if It,ls was selected near the middle of

the image, Dt,ls was a negative number. Therefore, when Dt,ls was positive the pixel was too dark,

and when Dt,ls was negative the pixel was too bright.

In the next step Dt,ls was added to AIt, resulting in:

NIt,ls = 2AIt - It,ls     (3.4)

where,

NIt,ls = negative pixel intensity of template image t at line l, sample s.

This procedure effectively produced a negative image of the original template image. As a result,

the areas of the negative template image where shading was the most pronounced were the

brightest, whereas areas where shading was least pronounced were the darkest. However, when

correcting the test image, only the relative difference in intensity between the brightest and

darkest pixels was of interest because this range of digital intensities contained shading informa-

tion. Therefore, the intensity of the darkest pixel (DNIt) was subtracted from NIt,ls at each pixel

location, giving:

NSPt,ls = 2AIt - It,ls - DNIt     (3.5)

where,

NSPt,ls = negative of the shading pattern of the original template image t at line l, sample s;

DNIt = darkest pixel intensity of the negative template image t.

This produced an image which characterized the negative of the shading pattern contained in the

original template image, but was independent of the actual digitized brightness at each pixel loca-

tion.

To complete the correction, the negative digital brightness (NSPt,ls) of the shading pattern

was added to the test image on a pixel by pixel basis.

Therefore,

CAi,ls = 2AIt - It,ls - DNIt + Bi,ls     (3.6)

where,

CAi,ls = corrected digital intensity i of the test image at line l, sample s.

From this equation it is evident that dark areas in the test image became brighter while bright

areas maintained their original digital intensities.

3:5 Bilinear Spatial Interpolation Correction

This correction was similar to those used in the space program and was developed by utiliz-

ing all of the calibration images (Green, 1983). The result of this process was a table of values that

characterized the digitized brightness of selected pixels in relation to the original input light inten-

sity. A variety of modeling techniques could have been used with this data to remove radiometric

distortion effects from the rest of the pixels in an image. For this study the set of digitized intensity

values obtained from the calibration process were characterized using linear regression.

Therefore,

Di,ls = Mls Ils + B0,ls     (3.7)

where,

Di,ls = the digitized intensity i at line l, sample s;

Mls = slope of response function for a pixel at line l, sample s;

Ils = target intensity imaged at line l, sample s;

B0,ls = intercept of response function at line l, sample s.

After applying linear regression, a set of Mls and B0,ls coefficients were obtained for selected pixels and stored as a radiometric correction file. A new linear function was then defined by solving Equation 3.7 for Ils. These coefficients were employed to compute the corrected digital intensity value (CBi,ls) for each pixel of the test image in the following manner.

CBi,ls = Q (Di,ls - B0,ls) / Mls     (3.8)

where,

CBi,ls = corrected digital intensity i at line l, sample s;

Q = scaling factor.

Q was a scaling factor introduced to ensure that the corrected intensity values were scaled to fall

within eight bits of precision.

If every pixel was considered in the radiometric correction file, the number of data points re-

quired to correct the test image would have been very large. Fortunately, radiometric response

varied slowly as a function of position within the image. Therefore, it was possible to store the re-

quired information as a subset of pixels and to perform a bilinear spatial interpolation to calculate

the new correction coefficients at each pixel location.

Bilinear spatial interpolation divided the image into squares that were 30 pixels wide. Slope

and intercept correction coefficients were stored at the vertexes of these squares. This interpolation

distance resulted in 289 slope and intercept coefficients which were used to correct the test image.

For the purposes of the following generalized discussion the variable Xls represents the slope (Mls) or intercept (B0,ls) coefficients that were stored at the vertexes of the image squares in a

radiometric correction file.

The process of radiometric correction involved an interpolation within the appropriate Xls

values for each pixel. Therefore, the values of Xls were found such that:

Xk-1,ls ≤ Xls ≤ Xk,ls

where,

Xls = slope or intercept coefficient at line l, sample s;

Xk-1,ls = first Xls coefficient;

Xk,ls = second Xls coefficient.

Radiometric correction was performed by considering each pixel in turn, line by line and sample by

sample. At each pixel location line l, sample s, the square within which l,s fell was determined.

Once the pixel address was found, the slope or intercept value was calculated by performing a lin-

ear interpolation twice in the line dimension and once in the sample dimension on the Xls coefficients (it could as easily have been performed twice in the sample dimension and once in the line di-

mension).

Therefore,

Xls = ((Pls - Pk-1,ls) / (Pk,ls - Pk-1,ls)) (Xk,ls - Xk-1,ls) + Xk-1,ls     (3.9)

where,

Pls = pixel address at line l, sample s;

Pk-1,ls = pixel address for first Xk-1,ls;

Pk,ls = pixel address for second Xk,ls.

The specific radiometric correction coefficients calculated by this interpolation method were

applied through Equation 3.8 to recalculate the corrected pixel intensity based on the uncorrected

digital intensity at each pixel location.
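The interpolation and correction steps can be sketched as follows (Python with NumPy; the 17 x 17 vertex layout, the coefficient values and the unit scaling factor are assumptions made for illustration, not the thesis's actual correction file):

```python
import numpy as np

# Hypothetical radiometric correction file: slope and intercept coefficients
# stored at a 17 x 17 grid of vertices (289 values each) over a 512 x 512 image.
rng = np.random.default_rng(1)
grid = np.linspace(0.0, 511.0, 17)                # vertex line/sample positions
Mv = 1.0 + 0.1 * rng.standard_normal((17, 17))    # slope coefficients at vertices
B0v = 20.0 + 2.0 * rng.standard_normal((17, 17))  # intercept coefficients at vertices

def interp_coefficient(Xv, line, sample):
    """Bilinearly interpolate a vertex coefficient grid at (line, sample):
    two linear interpolations in the line dimension, one in the sample dimension."""
    k = int(np.clip(np.searchsorted(grid, line), 1, 16))
    j = int(np.clip(np.searchsorted(grid, sample), 1, 16))
    tl = (line - grid[k - 1]) / (grid[k] - grid[k - 1])
    ts = (sample - grid[j - 1]) / (grid[j] - grid[j - 1])
    left = Xv[k - 1, j - 1] + tl * (Xv[k, j - 1] - Xv[k - 1, j - 1])
    right = Xv[k - 1, j] + tl * (Xv[k, j] - Xv[k - 1, j])
    return left + ts * (right - left)

def correct_pixel(dn, line, sample, scale=1.0):
    """Corrected intensity CB = scale * (DN - B0) / M with interpolated M and B0."""
    m = interp_coefficient(Mv, line, sample)
    b0 = interp_coefficient(B0v, line, sample)
    return scale * (dn - b0) / m
```

At a vertex the interpolation returns the stored coefficient exactly, so a pixel whose digital number was generated by that vertex's response function is restored to its input intensity.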

3:6 Linear Regression Analysis

To determine the most effective correction procedure, output from each correction was con-

verted to video grey scale reflectivity using polynomial regression. These data were then compared

to the original reflectivity values of the calibrated video grey scale using linear regression analysis.

Polynomial regression involved the task of obtaining curvilinear regression lines of best fit to ob-

served distributions of data (see section 3:7). A Pearson product-moment correlation coefficient was

used as a similarity measure in this analysis. The correlation coefficient (R) can be thought of as a

shape measurement, in that it was insensitive to differences in the magnitude of the variables used

to compute the coefficient (Aldenderfer and Blashfield, 1984). Since the product-moment correla-

tion coefficient was sensitive only to the shape of the data scatter, two digital profiles could have a correlation of 1.0 and not be identical (i.e. the two profiles did not pass through the

same points). Thus, a high correlation could occur between two profiles as long as the measure-

ments of one profile were in a linear relationship to another. As a result, some information was lost

when the correlation coefficient was used, and it was possible that misleading results could have

been obtained if the effects of elevation (the distance between two data sets) and dispersion

(variability within one data set) on the data profile were not considered.
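A small numerical example of this property (Python with NumPy; the profile values are invented):

```python
import numpy as np

# Two digital profiles in an exact linear relationship but differing in
# elevation (offset) and dispersion (spread): Pearson's R is still 1.0
# even though the profiles never pass through the same points.
profile_a = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
profile_b = 2.5 * profile_a + 100.0

r = np.corrcoef(profile_a, profile_b)[0, 1]
```

A perfect R here carries no information about the offset or spread between the profiles, which is exactly the elevation and dispersion caveat raised above.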

In an image analysis that utilized radiometric information for target classification, the spec-

tral response of targets, within the image, controlled classification success. If the spectral response

of similar targets scattered throughout the image was similar and other targets dissimilar, classifi-

cation accuracy would be high. However, if an image had a sufficient amount of radiometric distor-

tion, success of target classification would be lower because similar targets may appear

radiometrically different and different targets may appear radiometrically similar. Since relative

radiometric response usually determines classification accuracy, the absolute radiometric response

(elevation) would only be important if the video camera was used as a radiometer.

The dispersion effects may be important in this study if digital response increased while ex-

posure decreased. If this occurred and the linear regression analysis calculated a high R value, misleading interpretations could be made. Therefore, in the absence of biased exposure results, the correction procedure producing the highest linear correspondence between its corrected test image and the calibrated (video grey scale) reflectivity will be judged the most effective.

3:7 Video Grey Scale and Contact Print Calibration

A PortaPattern Model 001-27 9-step log chip video grey scale chart was used as a target for

all images in this analysis. This test pattern consisted of two diametrically opposed nine step video

grey scales (see Figure 4.13). Each grey step measured 3.5 x 6.0 cm and the overall dimensions of

the test pattern were 30 x 39 cm. The video camera calibration data were collected by imaging

each step of the lower video grey scale (scale A). The reflectivity of each grey step was determined

using an Eseco Superspeedmaster densitometer set in reflection mode.

A reduction in distance between the camera and video grey scale was necessary so that illu-

mination would be evenly distributed over each target. To acquire a test image in this environment

it was necessary to reduce the size of the video grey scale and to utilize a macro lens on the cam-

era. A test image was collected by imaging a 35 mm contact print of the video grey scale test pat-

tern. This print was made by photographing the test pattern with a Nikon F250 and a Micro-Nikkor 55 mm macro lens using Panatomic-X film and D76 processing. The contact print was also cali-

brated with the densitometer.

The relationship between video grey scale reflectivity and contact print reflectivity was de-

termined using polynomial regression. Using this technique, video grey scale reflectivity was re-

gressed against contact print reflectivity. This allowed video grey scale reflectivity (GSRvs) to be predicted from contact print reflectivity (CPRvs) (see Figure 4.1).

In this analysis the statistical package Midas was used to derive polynomial regression equa-

tions (Fox and Guire, 1976). The contact print reflectivity was the independent variable and the

video grey scale reflectivity became the dependent variable. From the output a table was printed

containing the coefficient, its standard error, a T-statistic and an attained significance level for

testing the null hypothesis where the coefficient is zero. For each term in the model its partial correlation with the dependent variable, partialled on the other variables in the model, was also computed.

To aid in the selection of the degree of polynomial, a significance level of 0.05 was selected to

provide a 95% confidence level. At this significance level there was a 95% chance of not making a

type one error (a type one error is made when the null hypothesis is rejected when it should have been accepted).

Therefore,

GSRvs = a0 + a1(CPRvs) + a2(CPRvs)^2 + ... + an(CPRvs)^n

where,

GSRvs = video grey scale reflectivity for scale v, step s;

CPRvs = contact print reflectivity for scale v, step s;

a0 ... an = regression coefficients.

3:8 Characterization of Digital Response

The effectiveness of each correction procedure was determined by comparing the digital in-

tensity values of each grey step for every corrected image. To accomplish this task an International

Imaging Systems (I2S) System 500 general purpose image processing system built around the Model 70 Image Processor and a VAX 11/750 minicomputer was employed. The System 500 was

capable of performing a wide variety of tasks using both the hardware image processing capability

of the Model 70 and the general computing ability of the host computer. This image processing sys-

tem consisted of a library of programs for classifying, analyzing, enhancing, reading, writing,

measuring, correcting and performing various other processes upon images (International Imaging

Systems, 1981).

Since the calibration images and the test image were not preprocessed to remove random

noise effects, individual pixel response could not be used to compare each test image. Instead, dig-

ital response of the camera was determined by calculating the average digitized brightness of each

grey step shown on the test images. This was achieved by using the Blotch function of the I2S

System 500 software. This function defined irregular image subareas from which pixels were in-

cluded in the average (i.e. each grey step was blotched out and the average pixel intensity was cal-

culated within this region). Once this area was known, the Statistics function was used to calculate

average digitized brightness within the blotch region. In addition to the average, the statistics func-

tion also calculated the standard deviation, mode, median, deciles, quartiles and counted the num-

ber of pixels within the blotched region (sample size for each averaged region ranged from approximately 1500 to 3000 pixels).
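The blotch-averaging step can be mimicked with a boolean mask: pixels inside the (possibly irregular) region are selected and summary statistics are computed over them. The image values and mask below are tiny synthetic stand-ins, not I2S output.

```python
import numpy as np

# Synthetic 3x3 patch standing in for part of one grey step; the centre
# pixel plays the role of a value excluded from the irregular blotch region.
image = np.array([[100.0, 102.0,  98.0],
                  [101.0, 250.0, 103.0],
                  [ 99.0, 100.0, 101.0]])

mask = np.ones_like(image, dtype=bool)
mask[1, 1] = False            # exclude the centre pixel from the region

pixels = image[mask]          # pixels inside the blotched region
mean = pixels.mean()          # average digitized brightness (AISvs)
sd = pixels.std(ddof=1)       # sample standard deviation
count = pixels.size           # sample size within the region
```

In the thesis the blotch regions covered whole grey steps (roughly 1500 to 3000 pixels each); the same mask-then-average pattern applies regardless of region size.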

The average digitized brightness (AISvs) for each grey step was compared to the contact print reflectivity (CPRvs) using linear regression. This allowed CPRvs to be predicted from AISvs.

Therefore,

CPRvs = (AISvs - B0,vs) / Mvs

where,

AISvs = average digitized brightness for scale v, step s;
B0,vs = intercept of response function for scale v, step s;
Mvs = slope of response function for scale v, step s.

These values were used with the polynomial equations to predict video grey scale reflectivity based

on average digital intensity for each imaged grey step.
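The two-step conversion just described (invert the linear response function, then apply the calibration polynomial) amounts to the following sketch. The slope, intercept and polynomial coefficients are made-up placeholders, not values from the thesis.

```python
import numpy as np

# Hypothetical response-function coefficients for one scale/step group.
M, B0 = 2.5, 10.0                     # slope (Mvs) and intercept (B0,vs)
poly = np.array([0.004, 0.9, 1.2])    # a2, a1, a0 in np.polyval order

ais = np.array([161.2, 96.9, 31.9])   # average digitized brightness (AISvs)
cpr = (ais - B0) / M                  # step 1: CPRvs = (AISvs - B0,vs) / Mvs
gsr = np.polyval(poly, cpr)           # step 2: GSRvs from the polynomial
```

Each corrected test image had its own slope/intercept pair, so in practice this conversion would be repeated per image with the coefficients from Table 4.5.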

3:9 Summary

The effectiveness of the additive, multiplicative and bilinear spatial interpolation radiometric correction techniques was determined from a comparison of the average digital response of each

corrected test image with the calibrated grey scale reflectivity. In the first step of the conversion of

digital response to video grey scale reflectivity, the average digital response of each grey step was

converted to contact print reflectivity. Next, to complete the conversion, predicted contact print

reflectivity was converted to video grey scale reflectivity. This allowed a direct comparison between

the reflectivity of each corrected test image and the calibrated video grey scale.

CHAPTER IV

RESULTS

The problems associated with uneven target illumination encountered during camera calibra-

tion were minimized by utilizing a small target (grey step) imaged with a macro lens on the video

camera. The rationale for this configuration was based on the fact that the smaller the target, the

less likely illumination would be uneven across the target. In addition to the calibration images, a

test image was acquired by imaging a 35 mm contact print of the video grey scale. A total of 19

corrected test images were produced from this original test image. To determine the effectiveness

of each radiometric correction, the digital response of each corrected test image was converted to

video grey scale reflectivity with a two step procedure. This was accomplished by converting digital

response to contact print reflectivity with linear functions, and then converting these resulting data

to video grey scale reflectivity with polynomial equations.

This chapter presents the calibrated video grey scale and contact print data and illustrates

the distortion patterns by examining template image digital response. A discussion of correction

performance details the conversion of digital response to video grey scale reflectivity. Finally, the

effectiveness of each radiometric correction was compared with linear regression analysis.

4:1 Video Grey Scale and Contact Print Calibration Results

It was impossible to accurately measure the amount of illumination being reflected from each grey step with the equipment at hand. Therefore, rather than converting digital response to illumination for the comparison of test images, digital response was converted to reflectivity. Reflectivity measurements were relatively easy to obtain using densitometric techniques compared to the difficulties associated with the accurate measurement of illumination. When the light source illumination was held constant, the only variable controlling exposure was grey step albedo.

The calibrated video grey scale reflectivity values ranged from 3.36% to 63% for scale A and from 3.2% to 63% for scale B. Reflectivity characteristics of the contact print grey scale were slightly different (a complete listing of the calibrated reflectivity for each grey scale is shown in Table 4.1). These reflectivity values ranged from 5.36% to 61.6% for scale A and from 4.9% to 64.6% for scale B. The differences in reflectivity between the video grey scale and the contact print

grey scale were due to distortions introduced by the photographic process when the contact print

was made (radiometric distortions caused by fall-off and vignetting are also found in photographic

images).

An illustration of the relationship between video grey scale and contact print reflectivity is shown in Figure 4.1. To quantify this relationship, polynomial regression was used to derive equations that allowed video grey scale reflectivity (GSRvs) to be predicted from contact print reflectivity (CPRvs).

Thus, using the values in Table 4.1 with equation 3.11, scale A was characterized by:

and, scale B was described by:

This procedure was used to relate the performance of all test images back to the calibrated video

grey scale reflectivity characteristics. Using this regression analysis, the standard error for scale A

was 0.96% and 1.5% for scale B at P ≤ 0.05.

4:2 Template Image Digital Response

The relationship between input luminance and output digital response of the camera was established by collecting a sequence of nine template images recorded at successively higher exposures (i.e., using increasingly higher albedo targets found on the video grey scale). These flat field images were input into the VAX-I2S image processing system using the video digitizer. Figures

Table 4.1: Calibrated video grey scale and contact print reflectivity expressed as a percentage of incident light for grey scales A and B.

Video Grey Scale Contact Print

4.2 to 4.10 are photographs of these images which have been enhanced with a histogram normali-

zation procedure to emphasize contrast differences (International Imaging Systems, 1981). They

illustrate the lack of spatial uniformity across the sensor. The central portion of each photograph is

the brightest and this area corresponds to the highest digital intensity values found in each tem-

plate image. In general, the larger the radial distance from the brightest area of the image the

lower the digital response (darker the image). This trend was caused by fall-off and vignetting, and

was most evident at higher exposures. Any variability that disrupted this radial trend may have

been a result of non-uniform spatial sensitivity across the sensor chip, and/or differential lens

transmittance, and/or random fluctuations associated with the digitization process, and/or slight

changes in the reflectivity across the original target (i.e., the grey step).

The decrease in average digital intensity and digital variability (standard deviation) with de-

creases in exposure is illustrated in Table 4.2. Reductions in average digital intensity occurred in

direct proportion to decreases in grey step albedo (see Figure 5.1). However, after consistent reduc-

tions in digital intensity the standard deviation of the digital response increased from step 7 to step

8 when a decrease was expected (see Figure 5.2; when exposure decreased, the digital response should also have decreased).

Figure 4.1: Plot showing calibrated video grey scale reflectivity as a function of calibrated contact print reflectivity for grey scales A and B.

Table 4.2: Template image statistics showing the average digital response (mean), standard deviation (SD) and range (min, max) of digital intensities found in each template image.

Template Mean SD Min Max

The cause of this is not fully understood, but it may be due to changes

in reflectivity across the eighth grey step of video grey scale A or random digital variability (sensor

noise).

The digital response along a transect running from the upper left corner to the lower right

corner of each template image is illustrated in Figure 4.11. In a similar fashion, Figure 4.12 shows

the digital response along a transect beginning at the lower left corner of each image and ending at

the upper right corner. These plots demonstrate in graphic detail the digital trends that occurred

across each template image. The increasing distances between the digital profiles from each tem-

plate image result from the increasing exposure. This trend shows that the digital response of each

template image closely followed the logarithmic change in reflectivity of the video grey scale.

Several distortions introduced by the camera were also illustrated by these data. Image

fall-off is shown by the reduction in intensity near the edges of each profile and this effect became

more exaggerated at higher exposures. Also, the shape of each profile is not symmetrical about the

center of the image. Factors such as vignetting, non-uniform spatial sensitivity across the sensor

chip, differences in lens transmissivity and random fluctuations in the digital response all

Figure 4.11: Upper left diagonal digital response showing the digital response for a series of pixels on a transect running from the upper left corner to the lower right corner of each template image.

Figure 4.12: Lower left diagonal digital response showing the digital response for a series of pixels along a transect running from the lower left corner to the upper right corner of each template image.

contributed to these variable distortion patterns. At lower exposures the effect of non-uniform spa-

tial sensitivity across the sensor chip dominated, while fall-off and vignetting effects were less pro-

nounced. This response was variable and appeared to be compounded by randomly distributed dig-

ital fluctuations (system noise) around the average intensity of each template image.

The overall effect of these distortions was to provide images that were brighter at the center of each template image. The lowest digital response for each image occurred at the end of the lower

left diagonal digital response profile. This indicated that the upper right corner of each image was

darker than any other corner.

Template images characterize the distortions that were introduced into each image collected

by the camera system and it has been demonstrated that these distortions vary with exposure. As

a result, high contrast images may have very complex distortion patterns that may be dependent

on the exposure of each element across the sensor chip and, hence, would require element specific

corrections for greatest accuracy.

4:3 Correction Performance

Template images created with the calibration procedure were applied to the test image

(Figure 4.13) through the multiplicative, additive and bilinear spatial interpolation correction

algorithms. Since each template image had different shading patterns, each correction procedure

provided different results. A total of 19 corrected test images were produced from the original test

image. Table 4.3 lists the average digital response of each grey step that appeared in each cor-

rected test image used for grouping in the linear regression analysis. Only the first four

multiplicative and additive test images were included in this data set. Test images 5 through 9

were not included in the analysis because of serious image degradation problems which resulted

from the correction process when used with template images acquired at these lower exposure set-

tings. These rejected images produced image contrast at locations that should have otherwise been

Table 4.3: Average digital response of each grey step for those images included in the linear regression analysis. Test refers to the uncorrected test image while bilinear refers to the test image corrected by bilinear spatial interpolation. Note, M refers to the multiplicative correction and A corresponds to the additive technique. The number following M or A refers to the correction template used.

Scale A

Step       1        2        3        4       5       6       7       8       9
Test       161.237  151.301  126.337  96.878  68.245  46.598  38.143  31.962  29.044
Bilinear   159.175  147.366  118.199  85.853  56.519  34.193  26.627  20.65   17.552
M1         161.504  148.713  122.228  93.283  64.153  43.594  35.834  30.668  28.894
M2         157.924  147.675  121.28   91.517  63.457  43.804  36.028  31.647  29.438
A4         160.608  149.89   123.531  92.734  64.26   43.308  36.853  31.947  30.946

Scale B

Test

Bilinear

M1

M2

M3

M4

A1

A2

A3

A 4

flat. It was thought that these distortion patterns were caused by the correction coefficients calcu-

lated from template images 5 through 9 and random digital fluctuations. The template images ac-

quired at low exposures contained distortion patterns that approximated distortions introduced by

the non-uniform spatial sensitivity of the sensor chip rather than the distortions caused by fall-off

and vignetting.

It is also interesting to note that the average digital response for A1 and A2 increased from

step 8 to step 9 (an increase in digital response with a decrease in exposure) for grey scale A and

increased for A1 in grey scale B. This runs contrary to the reflectivity trend shown with the video

grey scale calibration data (see section 3:6 for the discussion on dispersion effects). The cause of

this is not known. It may be a result of systematic or random noise in either the template image or

the uncorrected video image which resulted in inappropriate digital intensity values when deter-

mining the negative shading pattern of the template image using Equation 3.7.

The data contained in Table 4.3 also show that the bilinear spatial interpolation correction

resulted in the largest range of digital intensity values when compared with the other uncorrected

and corrected test images. This resulted from the rescaling procedure when the data was used with

equation 3.9. It was relatively difficult to make direct visual and quantitative comparisons with the

rest of the images because of this trend. To overcome this problem corrected test image data was

converted to video grey scale reflectivity before being compared to the original video grey scale.

Table 4.4 shows the mean and standard deviation (SD) of grey step 1 (scale A) for each of

the multiplicative and additive test images. The SD of images M1 to M4 were low and did not in-

crease appreciably with decreases in exposure. However, images M5 to M9 had a much higher SD

and these images had more contrast relative to M1 through M4. The SD of A1 to A9 were

consistently low and comparable to those of M1 to M4, but inspection of the A series images revealed

serious visible contrast deviations in images A5 to A9. For these reasons images M5 to M9 and A5

to A9 were not used in the linear regression analysis.

Table 4.4: Digital variability of grey step 1 (scale A) for the multiplicative and additive corrections showing the mean and standard deviation of digital response for each corrected test image. Note, M refers to the multiplicative correction and A corresponds to the additive technique. The number following M or A refers to the correction template used.

Multiplicative Correction Additive Correction

Image Mean SD Image Mean SD

4:4 Predicted Contact Print Reflectivity

Contact print reflectivity was predicted from average digital response using linear regression

equations. These equations were derived by regressing average digital response against calibrated

contact print reflectivity for both grey scales. Table 4.5 lists the slope and intercept coefficients

that were derived for each correction image. The linear regression model characterized these data

very well. The standard errors calculated with this procedure ranged from 2.0496% for M4 (scale

A) to 5.3791% for A1 (scale A) at P ≤ 0.05.

Contact print reflectivity values for the calibrated contact print grey scale and for each cor-

rected test image are shown in Table 4.6. The range of reflectivity values for each image was simi-

lar and this allowed easier visual comparison of the data. The calibrated contact print grey scale

reflectivity data had the largest overall range of values (see Print in Table 4.6). This suggested

that reflectivity information may have been lost during the imaging process when a comparison

Table 4.5: Contact print reflectivity slope and intercept coefficients for each corrected test image. These coefficients were used to convert average digital response to contact print reflectivity. Test refers to the uncorrected test image while bilinear refers to the test image corrected by bilinear spatial interpolation. Note, M refers to the multiplicative correction and A corresponds to the additive technique. The number following M or A refers to the correction template used.

Scale A

Image      Slope    Intercept
Test
Bilinear
M1
M2
M3
M4
A1
A2
A3
A4

Scale B

Image      Slope    Intercept
Test
Bilinear
M1
M2
M3
M4
A1
A2
A3
A4

Table 4.6: Calibrated contact print reflectivity and predicted contact print reflectivity for each corrected test image. Print corresponds to calibrated contact print reflectivity while test refers to the uncorrected test image and bilinear refers to the test image corrected by bilinear spatial interpolation. Note, M refers to the multiplicative correction and A corresponds to the additive technique. The number following M or A refers to the correction template used.

Scale A

Step       1  2  3  4  5  6  7  8  9
Print
Test
Bilinear
M1
M2
M3
M4
A1
A2
A3
A4

Scale B

Step       1  2  3  4  5  6  7  8  9
Print
Test
Bilinear
M1
M2
M3
M4
A1
A2
A3
A4

was made between the calibrated contact print reflectivity and the predicted contact print

reflectivity values of the uncorrected test image (see Test in Table 4.6). Therefore, it was concluded

that the video camera may not have recorded all of the reflectivity information contained in the

contact print and none of the correction procedures could completely recover the original reflectivity

range. As with the average digital response data, the reflectivity characteristics of A1 and A2 (scale A), and A1 (scale B), increased at low exposures when a decrease was expected.

4:5 Predicted Video Grey Scale Reflectivity

Contact print reflectivity was calculated from the average digital response for each grey

step. These data were transformed into video grey scale reflectivity using polynomial equations

(section 4:1). Table 4.7 lists the video grey scale reflectivity values predicted by these equations.

The range of these values tended to be slightly larger than those of the contact print. However, as

opposed to contact print reflectivity data, the range of video grey scale A reflectivity was similar to

the average digital response data and was generally higher than scale B data. This suggested that

the lower half of the sensor chip was more sensitive to reflectivity changes. This conclusion is con-

firmed by the digital trends shown in Figures 4.2 to 4.10. These photographs show the highest dig-

ital intensities in the central lower half of each image. As with the digital response and the contact

print reflectivity data, the calibrated video grey scale reflectivity of images A1, A2 (scale A) and A1

(scale B) increased with decreasing exposure. This trend may have been a result of random noise

which affected the correction coefficients generated from the additive template correction and these

may have been compounded by random digital fluctuations contained in the test image.

Table 4.7: Calibrated video grey scale reflectivity and predicted video grey scale reflectivity of each corrected test image. Scale corresponds to calibrated video grey scale reflectivity while test refers to the uncorrected test image and bilinear refers to the test image corrected by bilinear spatial interpolation. Note, M refers to the multiplicative correction and A corresponds to the additive technique. The number following M or A refers to the correction template used.

Scale A

Step       1  2  3  4  5  6  7  8  9
Scale
Test
Bilinear
M1
M2
M3
M4
A1
A2
A3
A4

Scale B

Scale      63
Test       54.02
Bilinear   56.38
M1         58.08
M2         57.8
M3         57.65
M4         56.55
A1         60.52
A2         57.81
A3         57.05
A4         55.47

4:6 Linear Regression Analysis

In order to determine the most effective correction procedure, a linear regression analysis

using correlation coefficients was performed on the predicted and calibrated video grey scale

reflectivity data. Therefore, with this technique, predicted video grey scale reflectivity was re-

gressed against calibrated video grey scale reflectivity for each test image to determine the amount

of linearity. The results of this analysis are shown in Table 4.8. The correlation coefficient (R),

similarity distance (1-R) and corresponding images were ranked according to the shortest similarity

distance. The shorter the similarity distance, the more radiometrically similar a corrected test im-

age was to the calibrated video grey scale. Evidently, all corrected test images were highly corre-

lated and each correction technique improved the radiometric quality of the test image. On the

basis of this analysis, the multiplicative correction technique provided the closest approximation to

the calibrated video grey scale reflectivity characteristics when used with template image 1 (MI).

Because of the problems associated with increasing reflectivity values a t low exposures (see

discussion on dispersion, section 3:6), A1 and A2 of scale A and A1 of scale B were rejected as ef-

fective correction techniques even though their similarity distances (1-R) were relatively short. In

addition, the similarity distances of A1 and M3 were similar, so it appeared that the multiplicative

correction technique when used with the third template image (M3) was equally as effective as the

A1 correction on the basis of these results. However, consideration of the reflectivity trends of both

A1 and M3 suggested that M3 was a much more effective correction procedure at lower exposures

because A1 reflectivity increased with decreasing exposure, whereas, M3 reflectivity decreased

with decreasing exposure as expected.
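The ranking criterion used above can be reproduced in a few lines: compute the correlation coefficient R between predicted and calibrated reflectivity for each image, then sort by the similarity distance 1 - R. The reflectivity vectors below are synthetic illustrations, not the thesis data.

```python
import numpy as np

calibrated = np.array([63.0, 52.0, 40.0, 28.0, 18.0, 11.0, 7.0, 4.5, 3.4])

# Synthetic "predicted" reflectivities: M1 is a clean linear rescaling of
# the calibrated values, Test is distorted by a smooth shading trend.
predicted = {
    "M1":   calibrated * 0.98 + 0.3,
    "Test": calibrated * 0.85 + np.linspace(-2.0, 2.0, 9),
}

def similarity_distance(pred, cal):
    r = np.corrcoef(pred, cal)[0, 1]   # correlation coefficient R
    return 1.0 - r                     # similarity distance 1 - R

ranking = sorted(predicted,
                 key=lambda k: similarity_distance(predicted[k], calibrated))
```

Because the synthetic M1 image is an exact linear function of the calibrated scale, its similarity distance is essentially zero and it ranks first, mirroring the role M1 plays in Table 4.8.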

Table 4.8: Linear regression analysis results showing the correlation coefficient (R), similarity distance (1-R) and rank of corresponding corrected image. These images were compared, using linear regression analysis, with the calibrated video grey scale reflectivity characteristics of each grey scale.

Rank   Scale A    Scale B
1      M1         M1
2      A1         A1
3      M3         M3
4      M4         M2
5      Bilinear   A2
6      A4         M4
7      M2         A3
8      A3         Bilinear
9      A2         A4
10     Test       Test

4:7 Summary

Each of the three correction techniques improved radiometric precision of the test image. On

the basis of the linear regression analysis it was concluded that the analog multiplicative correction

was the most effective at restoring radiometric precision when used with template image one.

From an examination of template image digital response it was concluded that higher exposures

(0.9 to 1.1 volts video output) result in more severe radiometric distortions. Since template image

one was collected at the highest exposure, it contained the most radiometric distortion when compared with other template images. This suggested that within a high contrast scene (e.g., the test

image) bright pixels were more distorted in relation to darker pixels. As a result, when template

image one was used with the multiplicative procedure, bright pixels were more precisely corrected

than darker pixels since the brighter pixels more closely followed the distortion patterns in the tem-

plate image. On the basis of this result, it was concluded that exposure is an important factor when

attempting to implement an analog correction. Therefore, images (or pixels) that are optimally ex-

posed will be more effectively corrected while images (or pixels) that are underexposed will be

overcorrected.

CHAPTER V

DISCUSSION

The digital trends described in Chapter 4 demonstrated that it was possible to improve the

radiometric quality of a video image using several different correction techniques. These methods

were not as complex as those developed for the space program, but they offered improved image

quality. The most serious radiometric distortions affecting the images used in this analysis were a

result of image fall-off, vignetting and non-uniform spatial sensitivity across the sensor chip. In ad-

dition, random digital fluctuations also had a strong impact on correction performance with analog

correction procedures. Such random fluctuations are relatively common with these systems and

clearly must be considered in any practical correction procedure.

5:1 Correction Limitations

Digital enhancement and statistical analysis of the template images demonstrated that the

distortion patterns varied with exposure (Table 4.2 and Figures 4.2 to 4.12). To evaluate the limitations of each correction algorithm, distortion patterns were evaluated in terms of systematic and

random components. The template images provided insight into the systematic distortions intro-

duced into the images by the video camera, while a discussion of the correction procedures illustrated the effects of random digital variability.

Figure 5.1 shows that the average digital response of each template image increased in di-

rect proportion to video grey scale reflectivity. However, Figure 5.2 indicates that the standard de-

viation of each template image did not increase linearly with increases in calibrated video grey

scale reflectivity. The standard deviation would increase linearly if only the systematic distortions

such as fall-off, vignetting, differential lens transmittance and non-uniform spatial sensitivity

across the sensor chip influenced digital intensity. However, this was not the case and clearly other

Figure 5.1: Average digital response as a function of calibrated video grey scale A reflectivity for each template image.

Figure 5.2: Standard deviation of digital response as a function of calibrated video grey scale A reflectivity for each template image.

distortions (i.e., random noise) may have influenced radiometric sensitivity.

As another illustration of the systematic radiometric distortion effects, Figures 5.3 and 5.4

illustrate the digital camera response that resulted from imaging a linearly increasing light inten-

sity for two series of pixel locations (these were the same pixel locations and digital intensities used

in Figures 4.11 and 4.12). The curve derived from pixel 1,1 had the lowest slope and, as a result, appeared dark relative to other pixels in each template image. This curve was defined from the

pixel intensities taken from the upper left corner of each template image. On the other hand, the

curve defined by pixel 240,240 corresponded to the center of each template image and was least

affected by fall-off and vignetting. This curve had the steepest slope and its digital intensities ap-

peared brighter relative to other pixels in each template image.

The effects of non-uniform spatial sensitivity across the sensor chip were not accurately illus-

trated by the relative slope of the digital response curve for each pixel. If each pixel in the sensor

chip had a different sensitivity then the slopes of each response curve would be parallel in the

absence of fall-off and vignetting. In this case darker pixels would have lower intercept values, whereas brighter pixels would have higher intercept values. Instead, the intercept values for each

curve tended to converge to the same intercept value because of the effects of fall-off and vignet-

ting. The curves illustrated in Figures 5.3 and 5.4 show that for a given input light intensity the

spread of digital intensities for each template image increased with exposure and this result agrees

with the standard deviation values shown in Figure 5.2.

In addition to the systematic deviations introduced by fall-off, vignetting and non-uniform

spatial sensitivity across the sensor chip, surface irregularities across each of the grey steps of

video grey scale A introduced grey tone variations into the correction templates. This produced im-

age fields that were not radiometrically flat. The different reflectivity patterns that resulted from

these grey tone variations also contributed to the differences in digital variability of each template

image.

Figure 5.3: Upper left diagonal digital response for a series of pixels located on a transect running from the upper left corner to the lower right corner of each template image. Graphs show the digital response as a function of input light intensity based on target albedo.

Figure 5.4: Lower left diagonal digital response for a series of pixels located on a transect running from the lower left corner to the upper right corner of each template image. Graphs show the digital response as a function of input light intensity based on target albedo. (Legend: pixels 1,480; 120,360; 240,240; 360,120; 480,1.)

To further compound these systematic distortions, random digital variability also degraded

image quality. In this study the video data were not preprocessed to remove random noise. This

was done deliberately so that correction performance could be evaluated under conditions that sim-

ulated applications (such as unsupervised classifications) which require fast and simple (ie. analog)

corrections at near-real-time data frame rates of 1/30 second. However, it must be remembered that

the quality of video data would be improved by preprocessing video signals with a low pass digital

filter to remove random noise spikes.
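As an illustration of the preprocessing suggested above, a 3x3 median filter, one common spike-removing choice (a mean kernel would serve similarly as a low pass filter), can be sketched as follows. The frame and spike values are synthetic.

```python
import numpy as np

def median3x3(img):
    """Replace each interior pixel with the median of its 3x3
    neighbourhood; edge pixels are left unchanged for simplicity."""
    out = img.copy()
    rows, cols = img.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            out[r, c] = np.median(img[r - 1:r + 2, c - 1:c + 2])
    return out

frame = np.full((5, 5), 100.0)   # synthetic flat field
frame[2, 2] = 255.0              # isolated random noise spike
clean = median3x3(frame)         # the spike is removed
```

Applying such a filter to the template images before computing correction coefficients would suppress the isolated noise spikes discussed in the following subsections, at the cost of extra processing time.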

Multiplicative Correction

The implementation of the analog multiplicative correction began with calculating the aver-

age digitized brightness of each template image. Next, the digital intensity at each pixel location in the template image was ratioed to this average value. These ratio values were inverted and

multiplied by the pixel intensity in the same corresponding positions in other digitized imagery pro-

duced by the same camera system. When random digital fluctuations introduced by the digitization

process distorted the true intensity of some pixels in the template image, the correction ratios cal-

culated from these pixel intensities were biased in direct proportion to the magnitude of the random

deviation.
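The procedure in the preceding paragraph can be sketched directly in array form. The template and test values below are tiny synthetic stand-ins, chosen so that the test image carries the same shading pattern as the template and the corrected result comes out flat.

```python
import numpy as np

template = np.array([[ 80.0, 100.0],
                     [100.0, 120.0]])   # flat-field (template) image
image    = np.array([[ 40.0,  50.0],
                     [ 50.0,  60.0]])   # test image with the same shading

ratio = template / template.mean()      # each pixel ratioed to the average
coeff = 1.0 / ratio                     # inverted ratio = correction coefficient
corrected = image * coeff               # multiplicative correction
```

A spurious high value at one template pixel would shrink that pixel's coefficient, darkening the corrected output there, which is exactly the bias mechanism described above.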

A comparison of the performance characteristics of the multiplicative correction when used

with a template image before and after digital filtration (to remove random noise) is shown in

Table 5.1. This hypothetical example illustrates the effect of random digital fluctuations on correc-

tion performance. If the digital intensity of a pixel was higher than it should have been, the value

of the correction coefficient was underestimated. The result of this underestimation was to provide

pixel intensities that were lower than required to effectively represent the true target reflectivity.

Conversely, if the digital intensity was too low, the correction coefficient was overestimated and the

corresponding pixel intensities were made too bright. Fortunately, these deviations occurred at

point locations and this tended to reduce their overall effect since they were randomly distributed

Table 5.1: Illustration of the effect of random digital fluctuations on the multiplicative correction performance using hypothetical test and template images. Comparison was made between a template image that contained random digital fluctuations and one preprocessed with low pass digital filtration. The correction coefficients were calculated from each template image and were applied to the test image using Equation 3.2. The amount of random noise alteration is shown in brackets and the corresponding correction coefficients are shown with a *.

[Table 5.1 data not reproduced in this transcript. The table compared, with and without random digital fluctuations, the template image, the test image, the correction coefficients, and the corrected test image (original scene).]

over any local area of the image. However, any deviations present in the correction coefficients would tend to further distort the random fluctuations found in the test image. Therefore, the combined effect of random distortions present in both the correction coefficients and the test image could have been amplified in the final corrected image.

Additive Correction

Random digital fluctuations had a greater impact on the additive correction performance than on the multiplicative technique. Instead of ratioing the average digital intensity of the template image to each pixel value as with the multiplicative procedure, the additive technique subtracted each pixel intensity from the average digitized brightness of the template image. Random fluctuations in the digital response had a strong impact on the correction coefficients produced by this procedure. The effect of random distortions is shown in Table 5.2. As illustrated by this hypothetical example, if the pixel intensity was too large, the difference between the pixel intensity and the average digital intensity was overestimated in bright areas and underestimated in dark areas of the template image. On the other hand, if the pixel intensity was too low, the difference was overestimated in dark areas and underestimated in bright areas (compare with Table 5.1). Even though potential problems may exist at each pixel location, the net result of these deviations was to produce correction coefficients that were adequate for simple corrections because the pixel fluctuations would be randomly distributed.
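The additive procedure described above can be sketched in the same style as the multiplicative case. This is an illustrative reconstruction of the idea behind Equation 3.7, not the original implementation:

```python
import numpy as np

def additive_correction(image, template):
    """Additive shading correction.

    coeff = mean(template) - template, added to the test image
    pixel by pixel to flatten the shading pattern.
    """
    template = template.astype(float)
    coeff = template.mean() - template
    corrected = image.astype(float) + coeff
    return np.clip(np.rint(corrected), 0, 255).astype(np.uint8)
```

If a test image carries exactly the additive shading recorded in the template, the correction removes it and returns the underlying flat scene.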

In addition to affecting the correction coefficients for individual pixels, random digital fluctuations also affected the absolute magnitude of the digital intensities found in the negative shading pattern. The algorithm for this correction isolated the darkest pixel in the negative template image. This value was subtracted from all other pixel intensities to produce the negative shading pattern. If a random fluctuation made a pixel intensity too bright, and that pixel was the brightest in the original template image, it defined the darkest pixel in the negative template image. Because the digital intensity of this pixel was too low, every pixel intensity in the negative shading

Table 5.2: Illustration of the effect of random digital fluctuations on the additive correction performance using hypothetical test and template images. Comparison was made between a template image that contained random digital fluctuations and one preprocessed with low pass digital filtration. The correction coefficients were calculated from each template image and were applied to the test image using Equation 3.7 to yield a corrected test image. The amount of random noise alteration is shown in brackets and the corresponding correction coefficients are shown with a *.

[Table 5.2 data not reproduced in this transcript. The table compared, with and without random digital fluctuations, the template image, the test image, the correction coefficients, and the corrected test image (original scene).]

pattern image was overestimated (made brighter) by a value equal to the magnitude of the original random fluctuation. Therefore, when the negative shading pattern was applied to another image, every pixel except the one corresponding to the darkest pixel was overcorrected, or made too bright (see Table 5.2). Fortunately, this effect was systematic and was applied equally over the entire image (except for the pixel corresponding to the darkest pixel in the negative shading pattern, which remained uncorrected). Since this correction was based on subtraction and addition, random deviations present in both the template image and the test image may accumulate in the final corrected image. This problem could have been reduced by preprocessing both images with a low pass digital filter designed to remove high frequency values.
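The negative shading pattern algorithm, together with the low pass preprocessing suggested above, might be sketched as follows. This is a hypothetical illustration (the function names are assumptions, and a simple box filter stands in for whatever low pass filter was actually used):

```python
import numpy as np

def box_filter(img, k=3):
    """Simple low pass (mean) filter to suppress random digital fluctuations."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def negative_shading_pattern(template):
    """Sign-change the template, then reference it to its darkest pixel.

    A noise spike that makes one template pixel the brightest becomes the
    darkest pixel of the negative image and biases the whole pattern, which
    is why filtering the template first helps.
    """
    negative = -template.astype(float)
    return negative - negative.min()
```

Filtering the template before forming the negative pattern prevents a single bright noise spike from offsetting every coefficient.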

Bilinear Spatial Interpolation Correction

Random distortions also affected the digital response of this analytical correction. However, these distortions were not as serious as those encountered with either of the analog corrections. The linear regression model employed to quantify digital response reduced the effect of random deviations by defining the best-fit straight line through the individual pixel responses of the template images. As such, random deviations which distorted the pixel response were removed, since the new pixel intensity was calculated from the best-fit regression line and not the individual pixel response. However, random distortions still biased the final corrected image because these distortions were also found in the uncorrected test image.
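A vectorized per-pixel least-squares fit of the kind described above might look like this. It is a sketch under assumed variable names, not the thesis code: given a stack of template images recorded at known relative exposures, it fits intensity = gain·exposure + offset at every pixel.

```python
import numpy as np

def fit_pixel_response(templates, exposures):
    """Fit a straight line through each pixel's response across the templates.

    templates: array of shape (n_exposures, H, W); exposures: length n_exposures.
    Returns per-pixel gain and offset arrays of shape (H, W).
    """
    t = np.asarray(templates, dtype=float)
    n, H, W = t.shape
    x = np.asarray(exposures, dtype=float)
    flat = t.reshape(n, -1)
    xc = x - x.mean()
    # classical least-squares slope and intercept, computed for all pixels at once
    gain = (xc[:, None] * (flat - flat.mean(axis=0))).sum(axis=0) / (xc ** 2).sum()
    offset = flat.mean(axis=0) - gain * x.mean()
    return gain.reshape(H, W), offset.reshape(H, W)
```

Because each pixel's line is defined by all of its calibration observations, a random fluctuation in any single template image is averaged out rather than propagated directly into the coefficients.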

Several different interpolation methods could have been used to characterize the digital response when performing this radiometric correction. The choice of a particular technique depended on a trade-off between the desired precision of the correction and the computational cost of performing it. For this study, digital response was characterized by a linear function (Figures 5.3 and 5.4). However, Green (1983) indicated that digital intensity is not a linear function of light intensity. The curve shown in Figure 2.1 is similar to the photographic characteristic curve that describes the behaviour of a film for a given development (Lillesand and Kiefer, 1979, 338). This function increases at an increasing rate at low exposures and at a decreasing rate at higher exposures; at mid-range intensities, the digital response is linear. When this curve was compared to the digital response curves defined for this investigation (Figures 5.3 and 5.4), it was concluded that the full dynamic range of digital intensities for the camera was not completely defined (since no nonlinear sections of the curve existed at high or low exposures). This suggested that each digital response curve defined by the VSP camera corresponded to the straight-line section of Green's idealized curve. Therefore, to define the dynamic range more accurately, more grey steps would have to be used in addition to the dark current during the calibration procedure. The resulting data would not follow a linear pattern with these additional exposures. Because the function would be nonlinear, logistic or polynomial equations would be required to define the full range of digital response (Yates, 1974). These functions would require more computational effort than a linear characterization of the digital response. However, aerial remote sensing images are generally low contrast, and if the camera exposure is correctly adjusted these exposures would fall on the straight-line portion of the nonlinear digital response curve and would therefore be satisfactorily approximated by a linear function.
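As a simple illustration of the kind of nonlinear characterization anticipated above, a cubic polynomial can be fitted to exposure/intensity pairs shaped like a characteristic curve. The data points here are invented for illustration, not measurements; the point is only that a higher-order least-squares fit can never have a larger residual than a straight line fitted to the same data.

```python
import numpy as np

# invented exposure/intensity pairs with a toe, a straight-line section
# and a shoulder -- illustrative only, not measured calibration data
exposure = np.array([0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0])
intensity = np.array([5.0, 12.0, 60.0, 128.0, 196.0, 240.0, 250.0])

linear = np.polyval(np.polyfit(exposure, intensity, 1), exposure)
cubic = np.polyval(np.polyfit(exposure, intensity, 3), exposure)

linear_rss = np.sum((intensity - linear) ** 2)   # residual sum of squares
cubic_rss = np.sum((intensity - cubic) ** 2)
```

The cubic fit captures the toe and shoulder that a linear model must ignore, at the cost of the extra computation the text mentions.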

Whatever function was used to describe the digital response, the correction performance of the bilinear technique would have been improved by increasing the spatial density (number) of the correction coefficients applied to the test image. For optimum correction every pixel should be considered, and in that case the number of correction coefficients would be very large. Because of the prohibitively large number of possible correction coefficients (262,144 pixels for a one-band 512 by 512 image), only a subset of the total number was used (or would be practical in many applications). Several different methods could have been applied to spatially interpolate the missing correction coefficients required for each pixel. Bilinear spatial interpolation was convenient and simple to use, but global polynomials or other local operators could also have been used. These nonlinear functions may be more precise because digital intensity does not necessarily vary linearly between adjacent pixels.
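Expanding a coarse grid of coefficients to full resolution by bilinear interpolation can be sketched as follows (the helper name is an assumption; the thesis sampled coefficients on a grid with 30-pixel spacing):

```python
import numpy as np

def bilinear_expand(grid, spacing):
    """Expand a coarse coefficient grid, sampled every `spacing` pixels,
    to a full-resolution coefficient image by bilinear interpolation."""
    gh, gw = grid.shape
    H = (gh - 1) * spacing + 1
    W = (gw - 1) * spacing + 1
    ys, xs = np.mgrid[0:H, 0:W]
    fy, y0 = np.modf(ys / spacing)      # fractional and integer grid position
    fx, x0 = np.modf(xs / spacing)
    y0, x0 = y0.astype(int), x0.astype(int)
    y1 = np.minimum(y0 + 1, gh - 1)     # clamp at the grid border
    x1 = np.minimum(x0 + 1, gw - 1)
    return (grid[y0, x0] * (1 - fy) * (1 - fx)
            + grid[y1, x0] * fy * (1 - fx)
            + grid[y0, x1] * (1 - fy) * fx
            + grid[y1, x1] * fy * fx)
```

Because the expansion is piecewise linear between grid nodes, any nonlinear variation in the true coefficients between nodes appears as the kind of interpolation artifact discussed in the text.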

The general radiometric performance of the bilinear correction is illustrated by Figure 5.5. This photograph shows the image that resulted when the interpolation procedure was applied to template image one (compare with Figure 4.2). The radiometric precision of the image was improved, but a series of correction artifacts resulted from the interpolation process. These artifacts were a result of the nonlinear changes in digital intensity between adjacent pixels and were accentuated by the relatively large spacing between the correction coefficients in the radiometric correction file (30 pixels). If the interpolation had calculated the exact slope and intercept coefficients required to completely restore the radiometric precision of the image, this image would have no contrast and would be entirely flat. Artifacts such as these are more visible in low contrast scenes, but would still exist in high contrast scenes even though they would not be visible.

5:2 Analog Implementation of Each Template Correction

Correction coefficients were determined from the digital intensity values contained in each template image for both the analog multiplicative and additive techniques. Once these were known, they could be applied to any number of images with a single multiplication or addition on a pixel by pixel basis. A more efficient processing configuration is possible using image processing hardware which can perform pixel by pixel manipulations simultaneously (such as that used by the I2S Model 70). This may be accomplished by creating a negative of the shading pattern with a sign change procedure and then treating the stored correction coefficients as a separate image. This image would then be multiplied by, or added in parallel to, other images collected by the same camera system for final correction.
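The difference between the sequential and the parallel ("coefficients as an image") modes of applying a stored multiplicative correction can be illustrated in NumPy, where a whole-array multiply plays the role of the parallel hardware. This is an analogy, not the Model 70's actual pipeline:

```python
import numpy as np

def correct_sequential(image, coeff):
    """Pixel-by-pixel application, as in the sequential algorithms."""
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = image[i, j] * coeff[i, j]
    return out

def correct_parallel(image, coeff):
    """Treat the stored coefficients as an image and apply them in one
    vectorized operation -- the software analogue of parallel hardware."""
    return image.astype(float) * coeff
```

Both functions produce identical results; only the order and granularity of the pixel operations differ.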

Applicability of Each Correction

Random digital fluctuations seriously reduced the precision of each analog correction. The accuracy of the correction coefficients would be improved by removing random digital fluctuations with low pass digital filtration. Aside from the problems associated with random noise, analog corrections hold considerable promise. As pointed out in Chapter 1, the issue was to investigate the relative superiority of analog versus analytical corrections rather than to determine a specific mode of implementation. There are probably many different ways of implementing each correction, and these will vary depending upon hardware and software configurations. However, this investigation demonstrated that analog corrections may be feasible depending on the type of remote sensing application (see the discussion for more details). When the sensor is accessible, the instrument can be mission-calibrated, and in this situation analog corrections may be most appropriate. In contrast, when the sensor is not accessible, specific calibration procedures are required and corresponding analytical solutions are necessary.

CHAPTER VI

CONCLUSIONS

Video cameras introduce radiometric distortions into the images they produce. These distortions are a result of image fall-off, vignetting and non-uniform spatial sensitivity across the sensor chip. Contemporary investigators have identified these problems, and some have addressed them with limited success. For example, Vlcek and Cheung (1984) and Gerten and Wiese (1987) stated that image fall-off and vignetting seriously distorted the radiometric characteristics of their video images. In addition, Richardson et al. (1985) demonstrated that a direct comparison could not be made between video imagery collected at different f-stops. These are serious problems which impede machine interpretation when video images are analyzed with computers.

In response to these problems investigators have developed both analog and analytical image correction solutions. As an example, Curry et al. (1986) and El-Hakim (1986) developed complex analytical correction procedures that required detailed calibration techniques. Using less complex procedures, Hodgson et al. (1981) also described an analytical method of correction using a custom built camera that was thermally stabilized.

A different approach by Vlcek et al. (1984) and Liedtke et al. (1986) involved pseudo radiometer solutions. Vlcek et al. (1984) suggested that only the central portion of each video image should be used for sub-image machine interpretation. The rationale for this conclusion was that the effects of fall-off and vignetting are virtually non-existent at the center of an image and become progressively worse towards the peripheral areas. This approach was also followed by Liedtke et al. (1986) and may be convenient for some applications, but much usable data could be lost since it does not address problems associated with non-uniform spatial sensitivity across the sensor surface. In response to this problem, Everitt and Nixon (1985) pointed out that the use of (direct analog) photographic antivignetting filters would reduce the effect of image fall-off. However, this technique does not correct for detailed vignetting effects or non-uniform spatial sensitivity across the sensor surface, since both of these effects are filter, lens, f-stop and camera specific. Therefore, a photographic antivignetting filter offers improvement even though it does not completely solve the problem.

As an alternative, Roberts and Evans (1986) suggested an approach that combines the attributes of antivignetting filters with specific radiometric distortion patterns determined by camera calibration procedures. Their method essentially created a computational antivignetting filter which mitigated the effects of image fall-off, vignetting and non-uniform spatial sensitivity across the sensor surface.

This investigation represents an extension of the preliminary work by Roberts and Evans (1986). In this study three different radiometric correction procedures were analyzed in an attempt to determine the effectiveness of such a computational antivignetting filter.

6:1 Concluding Comments

The objective of this study was to determine the effectiveness of two analog correction techniques in relation to an analytical correction, and thereby to assess the utility of an analog approach for developing computational antivignetting filters. A computational antivignetting filter must correct for image fall-off, vignetting, non-uniform spatial sensitivity across the sensor surface and random noise if any reasonably high level of digital classification accuracy is required. These radiometric distortions can seriously reduce the information content of a video image when a computer is used in the interpretive process.

Developing suitable correction algorithms and collecting camera calibration data for test image correction was a major part of this research. Three alternative radiometric correction methods, all based on camera calibration, were implemented. Not considered were other geometric phenomena which affect the overall spatial organization of objects within an image. Results showed that the performance of each correction depended on the method of implementation, the nature of the calibration and the test image quality. The analog multiplicative correction provided the most effective results and suffered least from the effects of random digital fluctuations introduced by the digitization process. The analog additive correction also improved image quality, but random digital fluctuations caused by the digitization process seriously hindered its performance. The analytical bilinear spatial interpolation correction was the least effective of the three corrections, although it too improved the radiometric characteristics of the test image. Its performance could have been improved by increasing the number of correction coefficients applied to the test image, and with a high enough density of correction coefficients it would probably have exceeded the multiplicative correction. The fundamental difference between the analog and analytical corrections was that the analog correction represented the distortion patterns with one discrete digital intensity for each pixel, whereas the analytical correction characterized the distortion patterns contained in each pixel with a continuous linear function.

On the basis of this study it was determined that a template image obtained at a video output level of one volt (template one) provided the most effective correction coefficients. From an analysis of the multiplicative correction performance, it was concluded that bright pixels were more precisely corrected than darker pixels because the bright pixels contained distortion patterns which more closely followed the distortions contained in the high intensity correction template. The darker pixels were less distorted, and since they were less affected they may have been over-corrected. Camera exposure is obviously of critical importance, since images that are underexposed would tend to be over-corrected. For an analog correction to be used effectively, optimum exposures must be obtained and the correction templates must be f-stop, lens and filter specific. Since aerial images are generally of low contrast, this correction would be amenable to aerial applications. If it proves difficult to maintain optimum exposures, and if exposure nonlinearity prevails, analytical corrections would be more precise. As a result, an analytical correction may be useful under conditions where high contrast scenes are compared or when the absolute reflectivity characteristics of a scene are important.

An analog correction is relatively simple when compared to most analytical corrections. Because of its simplicity this correction can be run on inexpensive hardware using several possible modes of implementation. The analog algorithms used for this study performed pixel manipulations sequentially and required approximately 3.5 minutes of host computer CPU time (VAX 11/750) for image correction. To improve the efficiency for near real time applications, the algorithms would have to be slightly different and more computer hardware would be required. For example, after the correction coefficients for each pixel have been determined and stored in permanent memory, they could be treated as an image. Using this configuration, each image could be added to or multiplied with other images in near real time by performing all pixel manipulations in parallel rather than sequentially. This method of correction would probably use hardware and software similar to those found in the I2S Model 70 digital image processing computer. As a result, video data could be corrected and stored on video discs at video frame rates of 1/30 second with this method.

Finally, specific logistical considerations with respect to video camera calibration are necessary if other investigators are going to pursue analog correction procedures. During the calibration process it is recommended that a variable aperture be used to control the intensity of the light source illuminating the target. Digital variability caused by surface irregularities on the flat field targets would be eliminated, since only one target would be used with this configuration. This would improve the characterization of the distortion patterns in the template images. In addition to the dark current, a wider range of exposures is needed to define the complete range of digital response. These additional exposures are necessary to define the extent of the linear section of the digital response curve, which determines the effective range of digital intensities that an analog style correction can accommodate. In addition, the performance characteristics of each analog style correction should be evaluated after the calibration and test images have been preprocessed with low pass digital filtration to remove the effects of random digital fluctuations. This should result in an improvement in the accuracy of the correction coefficients used for image correction.

Since most video cameras become unstable with changes in temperature, it is recommended that a thermally stabilized solid state video camera with good linear response characteristics be used to collect images that are going to be corrected with computational antivignetting filters. Thermal control stabilizes the digital response characteristics of the camera over time, and this reduces the frequency at which the camera must be recalibrated. Finally, the methods and feasibility of developing dedicated image processing hardware similar to the I2S Model 70 should be investigated. If pixel manipulations can be performed in parallel, then near real time image correction and storage may be possible. This would be desirable, since some remote sensing applications require digital information to be corrected and enhanced as it is being acquired.

GLOSSARY

Absolute Radiometric Response - the major performance characteristic of the camera which relates the irradiance on the detector to the signal output.

Albedo - the ratio of the amount of electromagnetic energy reflected by a surface to the amount of energy incident upon it.

Analog Correction - an analytical computational procedure that is analogous to the use of antivignetting filters on metric photographic cameras. It is also similar to the old tradition in the photographic arts of printing photographs with the original camera lens on the enlarger to negate radiometric and geometric distortions introduced by the lens with a reversal procedure.

This correction provided radiometrically discrete and spatially continuous correction coefficients for each pixel in an image that were dependent on the magnitude of the distortion patterns found in a template image. To be used effectively, an analog radiometric correction must be camera, f-stop, lens and filter specific. Specific implementation procedures can be found in sections 1:2, 3:3 and 3:4.

Analytical Correction - an analytical computational procedure that was based on the radiometric response characteristics of the video camera. It used spatially and radiometrically continuous functions to compute the correct digital intensity for each pixel in an image.

In this study the radiometric corrections linearly related the input light intensity to the output digital response for each pixel in the photosensitive array in the camera. Bilinear spatial interpolation was used to compute radiometric correction coefficients that corrected for variations in radiometric response across the camera field of view. Specific computational procedures can be found in sections 1:2 and 3:5.

Antivignetting Filter - an optical filter designed to be strongly absorbing in its central area and progressively transparent towards its circumferential area. It is used to counteract the effects of fall-off and vignetting.

Auto Iris - a mechanical device that automatically changes the aperture setting on a lens to obtain an optimum exposure.

Atmospheric Effects - the atmosphere can have a profound effect on the intensity and spectral composition of radiation available to any sensing system. These effects are caused by the mechanisms of atmospheric scattering and absorption.

Automatic Gain Control - electronic circuitry that maintains the output voltage of the video sensor at a constant level when scene contrast is changing.

Bit - a binary digit that is an exponent to the base two. The term is extended to the actual representation of a binary digit in different forms (e.g. a magnetic spot on a recording surface).

Bore Sighting - the alignment of the axis of one remote sensing system with the axis of another system. The resulting images are geometrically registered to one another.

Case - a variable in a quantitative expression such as an equation.

Calibration - the act or process of comparing certain specific measurements in an instrument with a standard.

Charge Coupled Device - a type of solid state video camera.

Classification - the process of assigning individual pixels of a multispectral image to categories, generally on the basis of spectral reflectance characteristics.

Contrast - the relative change between the reflectance of the brightest and darkest parts of an image.

Contrast Enhancement - improving the contrast of images by digital processing. The original range of digital values is expanded to utilize the full contrast range of the recording film or display device.

Dark Current - the background digital response of a video camera that is determined by taking an image with the lens cap in place.

Densitometer - an instrument designed to measure optical density by shining light on to or through prints or film transparencies.

Density Slicing - the process of converting the continuous grey tone of an image into a series of density intervals, or slices, each corresponding to a specific digital range.

Digital Filter - a mathematical procedure for removing unwanted values from numerical data.

Digital Intensity - the value of target reflectance recorded for each pixel.

Exposure Latitude - the range of exposure values that will yield an acceptable image.

Fall-off - variation in focal plane exposure purely associated with the distance an image point is from the image center. Exposure in the focal plane is at a maximum at the center of the image and decreases with radial distance from the center.

Field - each frame of a video image is divided into two fields. The first contains the odd scan lines and the second all even scan lines output from a video camera.

Flat Field - a target which has the same reflectivity characteristics across its entire surface.

Focal Length - the distance from the node of a lens to the focal point.

Frame - a video image that is composed of two interlaced fields.

Gain - a general term used to denote an increase in signal power in transmission from one point to another.

Gamma - the slope of the straight line segment on the radiometric response curve of a video camera or other display device.

Geometric Distortion - refers to changes in shape and position of objects in the image with respect to their true shape and position.

Grey Step - a target with no internal contrast (i.e. with the same reflectivity characteristics across its entire surface) whose reflectivity conforms to one level of the logarithmic reflectivity profile of a PortaPattern video grey scale. It was used to produce a flat field image.

Histogram Normalization - a function that inputs the histogram of an image into a computer, then maps the pixel intensity levels such that the histogram of the output image is normally distributed.

Image Motion - image blur caused when the forward motion of the aircraft is too great.

Light-transfer Characteristics - a series of images recorded at successively higher exposures which was used to define the relationship between input light intensity and the digital output of the camera.

Luminance - the luminous intensity per unit projected area of a given surface viewed from a given direction.

Modulation Transfer Function - a method that uses the amplitude and phase information of Fourier Transforms to report the spatial image forming characteristics of an optical system.

Multispectral Imagery - employment of one or more sensors to obtain imagery from different portions (bands) of the electromagnetic spectrum.

Observation - each measurement of a variable or case.

Offset - the intercept value determined by regression analysis.

Parallax - the apparent displacement in the position of an object, with respect to a frame of reference (the image), caused by a shift in the position of observation.

Photogrammetry - the science or art of obtaining reliable measurements by means of photography.

Pixel - picture element.

Radiometer - an instrument for measuring the intensity of electromagnetic radiation in some band of wavelengths in any part of the electromagnetic spectrum.

Radiometric Correction Table - a computer file containing radiometric correction coefficients.

Radiometric Nonlinearity - the nonlinear shape of a curve defining the radiometric response model of an imaging device.

Real Time - an expression used to refer to any system in which the processing of data input to the system to obtain a result occurs virtually simultaneously with the event generating the data.

Reseau Marks - materials deposited on the photoconductive surface of a video camera to provide geometric reference.

Residual Image - a phenomenon in which an image from a video camera is affected by preceding images.

Resolution - the ability to distinguish closely spaced objects or changes in reflectance on an image or photograph. Commonly expressed as the spacing, in line pairs per unit distance, of the most closely spaced lines that can be distinguished.

Shading - the combined effects of image fall-off, vignetting and non-linear camera response.

Shutter Exposure Time - the length of time the camera shutter remains open.

Sign Change - the process of changing a positive number to a negative number.

Spectral Sensitivity - the response, or sensitivity, of a film or detector to radiation in different spectral regions.

Spectral Transmission - the wavelength-dependent transmission of a lens; a lens is opaque to a number of bands in the photographic spectrum.

Spherical Coordinates - a coordinate system utilizing polar coordinates (two dimensional) to define three dimensional space. Azimuth refers to the horizontal dimension while elevation refers to the vertical dimension.

Stereometry - the art or technique of depicting solid bodies on a plane surface using stereoscopic equipment.

Streak Removal - replacing image scan lines that are lost, or that are distorted because the sensitivity of one detector is higher or lower than the others.

Sub-pixel - features in a scene that are smaller than one pixel.

System Error - any systematic distortion that obscures or reduces the radiometric clarity or quality of a video signal.

Template Image - an image that characterizes the spatial radiometric distortions that are introduced into a video image by the video camera.

Time Base Errors - errors introduced into the video scanning and sampling data rates when connecting a video camera or recorder to a computer.

Vignetting - the internal shadowing resulting from the lens mounts and other aperture surfaces within the camera.

Wavelength - the distance between successive wave crests, or other equivalent points, in a harmonic wave such as those found in the electromagnetic spectrum.

Wien's Displacement Law - describes the shift of the radiant power peak to shorter wavelengths with increasing temperature.

REFERENCES

Aldenderfer, M.S. and R.K. Blashfield, 1984. Cluster Analysis, Sage Publications, Beverly Hills, pp. 22-24.

Bernstein, R., 1976. Digital Image Processing of Earth Observation Sensor Data; IBM Journal of Research Development, Vol. 20, No. 1, pp. 40-57.

Castleman, K.R., 1979. Digital Image Processing, Prentice-Hall, Englewood Cliffs, N.J., pp. 383-397.

Clark, B.P., 1981. Landsat 3 Return Beam Vidicon Response Artifacts, EROS Data Center, U.S. Geological Survey, Sioux Falls, South Dakota 57198, 13 pp.

Collet, M., 1985. Make Way for Solid State Imagers; Photonics Spectra, September, pp. 103-113.

Colwell, R.N. (Editor), 1983. Manual of Remote Sensing, American Society of Photogrammetry, 210 Little Falls Street, Falls Church, Virginia 22046, 2440 pp.

Curry, S., S. Baumrind and J.M. Anderson, 1986. Calibration of an Array Camera; Photogrammetric Engineering and Remote Sensing, Vol. 52, No. 5, pp. 627-636.

Danielson, G.E. and D.R. Montgomery, 1971. Calibration of the Mariner Mars 1969 Television Cameras; Journal of Geophysical Research, Vol. 76, No. 2, pp. 418-431.

Edwards, G.J., 1982. Near-Infrared Aerial Video Evaluation for Freeze Damage; Proceedings of the Florida State Horticultural Society, Vol. 95, pp. 1-3.

El-Hakim, S.F., 1986. Real-Time Image Metrology with CCD Cameras; Photogrammetric Engineering and Remote Sensing, Vol. 52, No. 11, pp. 1757-1766.

Everitt, J.H., D.E. Escobar, C.H. Blazquez, M.A. Hussey and P.R. Nixon, 1986. Evaluation of the Mid-Infrared (1.45 to 2.0µm) with a Black and White Infrared Video Camera; Photogrammetric Engineering and Remote Sensing, Vol. 52, No. 10, pp. 1655-1660.

Everitt, J.H. and P.R. Nixon, 1985. False Colour Video Imagery: A Potential Remote Sensing Tool for Range Management; Photogrammetric Engineering and Remote Sensing, Vol. 51, No. 6, pp. 675-679.

Fox, D.J. and K.E. Guire, 1976. Documentation For Midas; Statistical Research Laboratory, The University of Michigan, Third Edition, 203 pp.

Fusco, L. and A. Zandonella, 1981. Earthnet RBV Shading Correction Scheme; European Space Research Institute, Earthnet Programme Office, Frascatti, Italy, 18 pp.

Gausman, H.W., D.E. Escobar and R.L. Bowen, 1983. A Video System to Demonstrate Interactions of Near-Infrared Radiation with Plant Leaves; Remote Sensing of Environment, Vol. 13, pp. 363-366.

Gerten, D.M. and M.V. Wiese, 1987. Microcomputer-Assisted Video Image Analysis of Lodging in Winter Wheat; Photogrammetric Engineering and Remote Sensing, Vol. 53, No. 1, pp. 83-88.

Green, W.B., 1983. Digital Image Processing: A Systems Approach, Van Nostrand Reinhold Company, Toronto, pp. 97-104.

Green, B., P.L. Jepsen, J.E. Kreznar, R.M. Ruiz, A.A. Schwartz and J.B. Seidman, 1975. Removal of Instrument Signature From Mariner 9 Television Images of Mars; Applied Optics, Vol. 14, No. 1, pp. 105-114.

Hodgson, R.M., F.M. Cady and D. Pairman, 1981. A Solid State Airborne Sensing System for Remote Sensing; Photogrammetric Engineering and Remote Sensing, Vol. 47, No. 2, pp. 177-182.

International Imaging Systems, 1981. User Manual; International Imaging Systems Inc., 1500 Buckeye Drive, Milpitas, CA 95035-7484.

Kodak, 1970. Kodak Filters For Scientific and Technical Uses, Eastman Kodak Company, Rochester, N.Y. 14650, 89 pp.

Kohlhase, C.E. and P.A. Penzo, 1977. Voyager Mission Description; Space Science Reviews, Vol. 21, pp. 77-101.

Liedtke, J., A. Roberts and D.J. Evans, 1986. Discrimination of Suspended Sediment and Littoral Features using Multispectral Video Imagery; Proceedings 10th Canadian Symposium on Remote Sensing, pp. 739-747.

Lillesand, T.M. and R.W. Kiefer, 1979. Remote Sensing and Image Interpretation. John Wiley and Sons, Toronto, 612 pp.

Lyon, J.G., J.F. McCarthy and J.T. Heinen, 1986. Video Digitization of Aerial Photographs for Measurement of Wind Erosion Damage on Converted Rangeland; Photogrammetric Engineering and Remote Sensing, Vol. 52, No. 3, pp. 373-377.

Manzer, F.E. and G.R. Cooper, 1982. Use of Portable Videotaping for Aerial Infrared Detection of Potato Diseases; Plant Disease, pp. 665-667.

Meisner, D.E., 1986. Fundamentals of Airborne Video Remote Sensing; Remote Sensing of Environment, Vol. 19, pp. 63-79.

Meisner, D.E. and O.M. Lindstrom, 1985. Design and Operation of a Colour Infrared Aerial Video System; Photogrammetric Engineering and Remote Sensing, Vol. 51, No. 5, pp. 555-560.

NASDA, 1981. Shading Correction of Landsat RBV Data; National Space Development Agency of Japan, 2-4-1 Hamamatsu-cho, Minato-ku, Tokyo, Japan 105, 12 pp.

Nixon, P.R., D.E. Escobar and R.M. Menges, 1985. A Multiband Video System for Quick Assessment of Vegetal Condition and Discrimination of Plant Species; Remote Sensing of Environment, Vol. 17, pp. 203-208.

Oke, T.R., 1978. Boundary Layer Climates; John Wiley and Sons, New York, p. 10.

Patterson, W.R., F.O. Huck, S.D. Wall and M.R. Wolf, 1977. Calibration and Performance of the Viking Lander Cameras; Journal of Geophysical Research, Vol. 82, No. 28, pp. 4391-4400.

Richardson, A.J., R.M. Menges and P.R. Nixon, 1985. Distinguishing Weed from Crop Plants Using Video Remote Sensing; Photogrammetric Engineering and Remote Sensing, Vol. 51, No. 11, pp. 1785-1790.

Rindfleisch, T.C., J.A. Dunne, H.J. Frieden, W.D. Stromberg, and R.M. Ruiz, 1971. Digital Processing of the Mariner 6 and 7 Pictures; Journal of Geophysical Research, Vol. 76, No. 2, pp. 394-417.

Roberts, A. and D.J. Evans, 1986. Multispectral Video System for Airborne Remote Sensing: Sensitivity Calibrations and Corrections; Proceedings 10th Canadian Symposium on Remote Sensing, pp. 729-737.

Roberts, A. and J. Liedtke, 1986. Airborne Definition of Suspended Surface Sediment and Intertidal Environments in the Fraser River Plume, British Columbia; Current Research, Part A, Geological Survey of Canada, Paper 86-1A, pp. 571-582.

Smith, B.A., G.A. Briggs, G.E. Danielson, A.F. Cook, M.E. Davies, G.E. Hunt, H. Masursky, L.A. Soderblom, T.C. Owen, C. Sagan and V.E. Suomi, 1977. Voyager Imaging Experiment; Space Science Reviews, Vol. 21, pp. 103-127.

Smokler, M.I., 1968. Calibration of the Surveyor Television System; Society of Motion Picture and Television Engineers, Vol. 77, No. 4, pp. 317-323.

Thorpe, T.E., 1973. Verification of Performance of the Mariner 9 Television Cameras; Applied Optics, Vol. 12, No. 8, pp. 1775-1784.

Tsuchiya, K., K. Arai and C. Ishida, 1981. Report on the Shading Data; National Space Development Agency of Japan, 2-4-1 Hamamatsu-cho, Minato-ku, Tokyo 105, 8 pp.

Vlcek, J. and E. Cheung, 1984. Video Image Analysis; Proceedings 8th Canadian Symposium on Remote Sensing, pp. 63-69.

Vlcek, J. and D. King, 1984. Digital Analysis of Multispectral Video Imagery; Technical Papers, 50th Annual Meeting of the American Society of Photogrammetry, pp. 628-632.

Vlcek, J., D. King and S. Shemilt, 1985. Multiband Video Imaging: Early Trials and Results; Proceedings 9th Canadian Symposium on Remote Sensing, pp. 541-546.

Yates, M., 1974. An Introduction to Quantitative Analysis in Human Geography, McGraw-Hill, Inc., p. 157.

Young, A.T., 1974. Television Photometry: The Mariner 9 Experience; Icarus, Vol. 21, pp. 262-282.