

ELSEVIER Computer Methods and Programs in Biomedicine 45 (1994) 291-305


A method for quantitative image assessment based on redundant feature measurements and statistical reasoning

David J. Foran a,*, Richard A. Berg b

a Department of Pathology, University of Medicine and Dentistry of New Jersey, 675 Hoes Lane, Piscataway, NJ 08854, USA
b Collagen Corporation, 2500 Faber Place, Palo Alto, CA 94303, USA

Received 27 January 1994; revision received 5 July 1994; accepted 28 July 1994

Abstract

Advances in computer graphics and electronics have contributed significantly to the increased utilization of digital imaging throughout the scientific community. Recently, as the volume of data being gathered for biomedical applications has begun to approach the human capacity for processing, emphasis has been placed on developing an automated approach to assist health scientists in assessing images. Methods that are currently used for analysis often lack sufficient sensitivity for discriminating among elements that exhibit subtle differences in feature measurements. In addition, most approaches are highly interactive. This paper presents an automated approach to segmentation and object recognition in which the spectral and spatial content of images is statistically exploited. Using this approach to assess noisy images resulted in correct classification of more than 97% of the pixels evaluated during segmentation and in recognition of geometric shapes irrespective of variations in size, orientation, and translation. The software was subsequently used to evaluate digitized stained blood smears.

Keywords: Digital images; Shape descriptors; Chromaticity

1. Introduction

Digital imaging has become popular because it offers several advantages over conventional means of collecting and processing pictorial data. Once digitized, images can be analysed, compressed, processed, stored, or transmitted using several modes of communication.

Image processing refers to the manipulation of digitally encoded images to aid in visual understanding by humans or to facilitate subsequent computer analysis. One class of computer-based algorithms used to assist in the automated interpretation of images is referred to as pattern recognition. The term pattern recognition is generally defined as the ability to perceive structure in sensor-derived data. It is a fundamental activity that is intimately intertwined with our concept of intelligence [1]. In the context of this paper it refers to the computer procedures that operate on pictorial data to deliver interpretations of the digitized scene which are analogous to what a human might deduce were she to view the image directly.

* Corresponding author.

0169-2607/94/$07.00 © 1994 Elsevier Science Ireland Ltd. All rights reserved. SSDI 0169-2607(94)01590-C


292 D.J. Foran, R.A. Berg / Comput. Methods Programs Biomed. 45 (1994) 291-305

Although image processing and pattern recognition have their own distinct application areas, fundamental principles from each field have merged, giving rise to machine vision. Many of the basic components associated with machine vision systems have evolved from attempts to mimic the human visual system. The early stages of computer image processing are analogous to the process that takes place in the human eye and optic nerve, and the pattern recognition mechanisms represent the activities which take place in the human brain [1-7]. Machine vision has been used successfully in many industrial areas including robotics, inspection, process control, material handling, navigation, and parts assembly [8], but advancements in the medical environment have proceeded at a slower rate due to the increased level of image complexity and due to the medical impact of an incorrect assessment [1].

Segmentation is a crucial step for any machine-driven image assessment system since identification of any objects, structures, or shadows within a digitized field requires delineation into subregions [9]. Algorithms which are currently used to segment images range in complexity from those that utilize monochromatic intensity thresholding to those that rely on color algebra, color clustering, or spatial filtering [4,10-14]. Color-based approaches sometimes improve the accuracy of image subdivision when compared with the performance of monochromatic schemes, but the algorithms often lack sufficient discriminatory power for differentiating among structures exhibiting similar spectral characteristics [15-19]. Schemes that rely on spatial filtering can be computationally cumbersome when large numbers of specimens are evaluated and are often difficult to integrate into automated systems [4,20-22].

Shape is a concept that has intuitive appeal, but attempts to quantify shape using a computer have had only limited success [23-26]. Algorithms used for shape analysis and object recognition typically rely upon the accuracy of segmentation operations and are especially difficult to codify when they are to be used in applications where objects may present a range of translations, rotations, and scales within the imaged scene [27].

Our goal was to develop a system which could reliably and automatically segment chromatic images and recognize delineated objects regardless of variations in spatial parameters. The algorithms have evolved from quantitative methods which we had originally developed to evaluate stained cells cultured on porous microcarriers (PMCs) and stained histological sections excised from guinea pigs [20,28]. The approach integrates principles of color theory, image processing, multivariate discriminant analysis, and spatial pattern recognition.

2. Background

2.1. Classification

The notion of establishing classification criteria for scientific gain is not a new concept - in fact Aristotle was one of the first to apply the technique to biological taxonomy. Linnaeus later devised a classification scheme by which membership to one class rather than another depended on a single distinguishing attribute rather than on any measured value. The main change in classification has been an emphasis on measurement. Post-Darwinians searched for a reliable method of quantifying association in measures of biological activity and it was not until Galton's discovery of the correlation coefficient in 1883 that this quest was brought to fruition. Tables of correlation began to appear in scientific journals and ultimately gave rise to the multivariate statistical analysis which utilizes inter-class and intra-class correlations to make evaluations [29]. At that time most calculations were made by hand and it was not until the 1950s that multivariate analysis was executed by computer [30,31].

Today there are many state-of-the-art imaging systems, developed for research or industrial applications, that are based on first- or higher-order predicate calculus extended to allow for inductive inferences [32,33]. Popular inductive methods include interference matching, maximal unifying generalizations, conceptual clustering, and constructive induction [9,33].

Another general category of induction methods that has been the subject of much research is neural networks. These approaches have their foundations in statistical analysis, through the use of discriminant functions, and are extended to first the perceptron and then on to more complex neural-net concepts. They are all based on the concept of finding the best set of coefficients, or weight vectors, that minimize error [9,34]. The main unifying concept for all of these induction methods is that they try to learn to classify input patterns into output patterns [9].

2.2. Segmentation

Segmentation of digital images is a crucial step for most automated machine vision systems since many of the subsequent interpretation steps depend on the reliability of this delineation process.

One of the first thresholding methods used to segment images was the p-tile method [10]. In this method, an image is assumed to consist of dark objects in a light background. By assuming that the percentage of the object area is known, the threshold is defined as the highest grey level which maps at least (100 - p) percent of the pixels into the objects in the thresholded image. An obvious drawback to this method is that it is not applicable to images in which the object area is not known [21].

Much of the early work in image analysis focused on determining a reliable means of selecting the threshold value. One way to choose the threshold value is to search the histogram of grey levels, assuming that it is bimodal, and find the valley of the histogram. This technique is called the mode method [12]; however, this approach is not appropriate for images with extremely unequal peaks or for those with broad and flat valleys [21]. An elegant refinement of this method assumes that the histogram is the sum of two composite normal functions and determines the valley location from the normal parameters [11].

Single-threshold methods are useful in simple situations, but may not be dependable in cases in which the regions of pixels to be delineated are not connected. Problems often arise when applying these techniques to images which present a background of varying grey level or regions which vary in grey level by more than the threshold. Two modifications of the threshold approach which ameliorate these difficulties are: to high-pass filter the image to de-emphasize the low-frequency background variation and then apply the original technique to the corrected image; or to use a spatially varying threshold method such as that of Chow and Kaneko [11].

The Chow-Kaneko technique divides the image up into rectangular subimages and computes a threshold for each subimage. If any subimage fails to have a threshold it receives interpolated thresholds from neighboring subimages that are bimodal. The entire picture is thresholded by using separate thresholds for each subimage.

A refined extension of this methodology is commonly referred to as region growing. Reduced to its simplest form, the region-growing algorithm considers the entire image as a region and computes the histogram for each of the picture vector components. Next, a peak-finding test is applied to each histogram. If at least one component passes the test, the component with the most significant peak is chosen and two thresholds are determined, one on either side of the peak. Using these thresholds the image is divided into subregions. These steps are repeated until no new subregions are created. In the multispectral LANDSAT imaging system, thresholding components of a picture vector is extended to recursive application of the technique to non-rectangular subregions [13].

Other variations on the thresholding paradigm incorporate the use of color vector components. The vector may be further augmented with non-linear combinations of these components. For example, Nevatia extended and refined the Hueckel operator for color edge extraction [15,16] by using intensity and normalized colors as feature measurements. Other chromatic segmentation approaches include the histogram splitting method of Ohlander et al. [17] and others [18], the clustering methods of Schachter et al. [35] and Sarabi and Aggarwal [19], and a method based on edge-preserving smoothing by Nagao et al. [36].

2.3. Shape analysis

One of the most valued sets of criterion measures that can be extracted from an image is a description of the shape of the objects contained within the image. Shape is a primal intrinsic property for the visual system [37]. It can be defined as 'that quality of an object which depends on the relative position of all points composing its outline or external surface'.

While no precise definition of object recognition has been generally accepted, it is usually considered necessary to deduce an objective description of the objects contained in the two-dimensional image. This description may be in terms of geometrical models such as cylinders, prisms and ellipsoids [38]. For many machine vision applications, the one-dimensional closed contour of an object or region, which is reliably segmented, can be considered an unambiguous representation of it [4].

Many techniques have been used to furnish the computer with an objective description of shape. Aoki and Yoshino [26] used chain codes as feature measurements in their object recognition algorithms. The chain code, first described by Freeman [25], approximates a continuous contour by a sequence of piecewise linear fits that consist of eight standardized line segments. The code of a contour is then the chain V of length K

V = a_1 a_2 a_3 \ldots a_K

where each link a_i is an integer between 0 and 7, oriented in the direction (\pi/4)a_i (as measured counter-clockwise from the x-axis of an x-y coordinate system) and of length 1 or \sqrt{2} depending, respectively, on whether a_i is even or odd. The vector representation of the link a_i, using phasor notation, is

\left(1 + \frac{\sqrt{2} - 1}{2}\left(1 - (-1)^{a_i}\right)\right) \angle \frac{\pi}{4} a_i

V_1 is an example chain code:

V_1 = 0005676644422123

The Freeman chain in four directions is the curve obtained by walking clockwise on the grid around and outside the squares that are more than half contained by the region [24].

Shape numbers have also been used in certain recognition schemes [24]. The shape number is the derivative of the Freeman code obtained by the clockwise replacing of each convex corner of the Freeman chain by a 1, each straight corner by a 2, and each concave corner by a 3.

Other techniques which had proved useful in optical character recognition research include Fourier analysis, boundary encoding and polygonal approximation [39-42]. A particularly robust shape descriptor that had been reported is the elliptic Fourier descriptor, a variant of the more popular Fourier descriptor. These feature measurements had been shown to be effective in applications in which invariance to orientation, scale, and translation was required. The general form of Fourier descriptors of a closed curve can be described and computed using Eqs. 1-4 [42]:

a_n = \frac{T}{2n^2\pi^2} \sum_{i=1}^{K} \frac{\Delta x_i}{\Delta t_i} \left[\cos\left(\frac{2n\pi t_i}{T}\right) - \cos\left(\frac{2n\pi t_{i-1}}{T}\right)\right] \qquad (1)

b_n = \frac{T}{2n^2\pi^2} \sum_{i=1}^{K} \frac{\Delta x_i}{\Delta t_i} \left[\sin\left(\frac{2n\pi t_i}{T}\right) - \sin\left(\frac{2n\pi t_{i-1}}{T}\right)\right] \qquad (2)

c_n = \frac{T}{2n^2\pi^2} \sum_{i=1}^{K} \frac{\Delta y_i}{\Delta t_i} \left[\cos\left(\frac{2n\pi t_i}{T}\right) - \cos\left(\frac{2n\pi t_{i-1}}{T}\right)\right] \qquad (3)

d_n = \frac{T}{2n^2\pi^2} \sum_{i=1}^{K} \frac{\Delta y_i}{\Delta t_i} \left[\sin\left(\frac{2n\pi t_i}{T}\right) - \sin\left(\frac{2n\pi t_{i-1}}{T}\right)\right] \qquad (4)

where

\Delta x_i = x_i - x_{i-1}, \qquad \Delta y_i = y_i - y_{i-1} \qquad (5)

\Delta t_i = \sqrt{(\Delta x_i)^2 + (\Delta y_i)^2} \qquad (6)

t_i = \sum_{j=1}^{i} \Delta t_j \qquad (7)

T = \sum_{i=1}^{K} \Delta t_i \qquad (8)

One can define the phase shift from the first major axis by

\theta_1 = \frac{1}{2} \arctan\left[\frac{2(a_1 b_1 + c_1 d_1)}{a_1^2 - b_1^2 + c_1^2 - d_1^2}\right] \qquad (9)

So to make the Fourier descriptors independent of starting point, \theta_1 can be limited to the interval [-\pi/2, \pi/2] by using the transformation given by Eq. 10:

\begin{pmatrix} a_n^* & b_n^* \\ c_n^* & d_n^* \end{pmatrix} = \begin{pmatrix} a_n & b_n \\ c_n & d_n \end{pmatrix} \begin{pmatrix} \cos n\theta_1 & -\sin n\theta_1 \\ \sin n\theta_1 & \cos n\theta_1 \end{pmatrix} \qquad (10)

Further, to make the Fourier descriptors independent of scaling, each of a_n^2, b_n^2, c_n^2 and d_n^2 can be divided by a scaling factor D given by Eq. 11:

D = a_1^2 + b_1^2 + c_1^2 + d_1^2 \qquad (11)

The elliptic Fourier descriptors denoted by I_n, J_n and K_{1,n} are then computed using Eqs. 12-14:

I_n = a_n^2 + b_n^2 + c_n^2 + d_n^2 \qquad (12)

J_n = a_n d_n - b_n c_n \qquad (13)

K_{1,n} = (a_1^2 + b_1^2)(a_n^2 + b_n^2) + (c_1^2 + d_1^2)(c_n^2 + d_n^2) + 2(a_1 c_1 + b_1 d_1)(a_n c_n + b_n d_n) \qquad (14)

2.4. Biomedical Applications

Developing software capable of interpreting images has been a focus of biomedical research for more than 30 years [43-48]. One early imaging system recognized leukocytes within stained cytological sections by scanning the same image several times using different color filters, collecting the intensity measurements from cellular regions of known identity, and classifying the pixels within the image by comparing the intensities of pixels of unknown identity with the intensities recorded for pixels of known class using a Gaussian classifier. Contiguous groups of pixels that had been categorized as cytoplasmic and nuclear classes within the segmented image were measured with respect to their area and perimeter and were classified into one of several cytological classes using cross-correlation matching [49]. Other researchers attempted to differentiate among mononuclear peripheral blood cells which had been scanned monochromatically, using clustering algorithms to compare the intensity of unclassified pixels with the intensity of pixels within the image which corresponded to blood cells of known class [50].

As cytological and histological staining techniques improved, researchers developed software to exploit the chromatic content of images. For example, Castillo et al. [47] refined the segmentation algorithms of Gonzalez and Wintz [51] to produce a system capable of crude delineation of subregions within color images. They used the system to differentiate between normal and diseased liver cells [47]. In 1989, Bergmann et al. [52] distinguished among various types of lymphomas by segmenting images into cellular components using the color-difference methodology originally developed by Harms et al. [53], computing Kiel-classification feature measurements [54] for cytoplasmic and nuclear components, and classifying the lymphomas using regression analysis [55]. Later that same year, Rauch et al. [48] devised a computerized means for recognizing the retrograde reaction in motor neurons. Peak frequencies were determined for grey-scale histograms of the images in order to facilitate sequential thresholding operations leading to segmentation. Area, diameter, and nuclear eccentricity measurements were computed for delineated cells within each segmented image and discriminant analysis was applied to spatial measures to categorize the cells into one of two classes [48].


2.4.1. Monochromatic analysis. The statistical segmentation approach presented in this paper evolved from techniques which had originally been developed for delineating digitized specimens during cell culture and histology studies. While analysing cell cultures, stained specimens were photographed on color film and the photonegatives scanned and digitized using an Optronics P1000 two-dimensional microdensitometer. The microdensitometer assigned an integer specular optical density value ranging from 0 to 255 to each element of the negative, with a value of 0 corresponding to zero optical density and a value of 255 corresponding to a density of 3.0. Digitized images were subsequently displayed on a Lexidata Lex 90 display monitor interfaced to a DEC Microvax II while computer software directed an operator to select a set of representative class-one, class-two, and class-three pixels using a digital mouse. Optical density values were arranged in ascending order for each class of pixels and images were segmented based on conformance to one of the computed ranges [56]. This method was later abandoned as lacking sufficient sensitivity for discriminating among structures which exhibited subtle differences in optical density profiles.

2.4.2. Trichromatic analysis. It was reasoned that the performance of the system might be improved by increasing the number of measured parameters. Accordingly, the method used to acquire images was modified by inserting three separate optical filters (red, green, blue) between the light source and the photonegative during scanning and digitization. This process served to separate the primary color components of the color photonegatives. The second generation of algorithms segmented digitized specimens based on delineations in chromatic integer measurements of pixels within the images. This methodology improved the overall sensitivity of the software significantly, but the system was still limited with respect to the number of classes that it could recognize.

The third generation of software, based on multivariate statistical theory, proved to be the most promising method of analysis [20,28]. In these algorithms the R, G, B digital data collected during computer training sessions were evaluated using the SAS statistical software package (SAS Institute Inc., Cary, NC) on a Hewlett Packard HP9000/835 computer system. The statistical analysis was employed to generate a set of classification functions which could reliably categorize pixels into one of the chromatic classes. The results showed that although multivariate analysis of chromatic intensity data was appropriate for automated segmentation of stained biomedical images, the algorithms required further refinement. In addition, it became apparent that a means for computing shape measurements for delineated objects should be integrated into the system in order to support a greater range of biological applications.

3. Design considerations

The primary goal of this research was to develop a reliable imaging system to automate segmentation, shape analysis, and object recognition in digital chromatic images. Preliminary results had shown that applying multivariate segmentation algorithms to digitized stained specimens resulted in the correct classification of the majority of imaged pixels [20,28,57]. Our previous work also indicated that:

(1) The rate of processing images was impractical for applications in which large numbers of samples needed to be evaluated.

(2) Photography, scanning, and digitization rates needed to be improved.

(3) Time spent transferring digital files to intermediate computers for statistical evaluation constituted an inordinate amount of processing time.

(4) The system revealed little information regarding the shape of objects which had been delineated during segmentation.

(5) A means for testing the robustness and reliability of the system would need to be devised and implemented.

3.1. System description

3.1.1. Hardware. To address these issues the system's hardware was redesigned and assembled. A Hitachi C1H video color camera was mounted atop a Nikon light microscope and a Matrox MVP image acquisition board was installed inside a Sun 386i Roadrunner UNIX workstation. The camera was interfaced to the acquisition board, a Sony Trinitron color monitor, and a Hitachi color video image copier. Fig. 1 shows a diagram of the system. The color camera and Matrox acquisition board would eliminate photography and scanning steps completely, whereas the workstation would provide the computational power necessary for on-line generation of classification rules. The color video printer was integrated into the system to allow for quick, reliable output of images, histograms, etc.

Fig. 1. System configuration. This figure shows the essential hardware and software components of the Sun 386i-based imaging workstation. Within the diagram are the Hitachi C1H video color camera, Nikon light microscope, Matrox MVP image acquisition board, Sony Trinitron color monitor, Hitachi color video image copier and image processing software.

3.1.2. Software. All system software was developed using the C programming language and a Sunview (Sun Microsystems, Billerica, MA) graphic user interface. The principal modules of the software are as follows:

(1) An image acquisition/generation module.
(2) A means for chromatic feature extraction.
(3) A pixel-level decision-making component.
(4) An object boundary detection module (contouring).
(5) A means for shape feature extraction.
(6) An object-level decision-making component.

3.2. Performance

The system would be tested using three parameters to gauge performance: adaptability to variations in chromatic feature measurements; dynamic potential for sensitivity; and reliability under stress due to controllable digital noise. To accomplish these objectives, 35 test images exhibiting 264 chromatic classes of pixels and nine geometric classes of shapes were generated using the Matrox MVP data acquisition board interfaced to the Sun 386i Roadrunner workstation and software written in C. Once the test images were generated, band-limited Gaussian distributions of digital noise were assigned to random x and y coordinates corresponding to geometric objects within the images using a number generator [58]. The generator used the time of day in microseconds as a seed value and returned a random number ranging from 0 to (2^31 - 1) which was then scaled appropriately. Distributions were determined using Eq. 15. Similarly, Gaussian distributions of digital noise were assigned to random background pixels throughout the image.

P(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x - \mu}{\sigma}\right)^2} \qquad (15)

where x is one of the pixel intensities in the user-defined, band-limited range, P(x) is the probability that any pixel within a given object has an intensity of x, \mu is the mean red, green, and blue intensity of one of the geometric objects within the image, and \sigma is the standard deviation about the mean intensity value of each geometric object. A set of sample distributions are shown graphically in Fig. 2.

3.2.1. Multivariate segmentation. Once the noisy images were generated they were displayed on the color monitor of a Sun 386i workstation while a software 'training' session was executed and red, green, blue intensity values were recorded for 20 representative pixels from each chromatic class.

Fig. 2. Graphic representation of four levels of Gaussian noise (band-limited noise distributions; intensity on the x-axis, 100-130). The figure shows distributions of digital noise that were introduced to the red image plane of one geometric object within one of the computer-generated images. Note that prior to introducing noise, the probability that any pixel in the object had a red intensity value of 115 was 100%, P(115) = 1.0. The curve with x-intercepts 113 and 117 depicts the probabilities of any pixel in the object containing an integer intensity value between 113 and 117 after the first stage of noise introduction. Moving outward in the graph, the curve with x-intercepts 111 and 119 depicts the corresponding probabilities after the second stage of noise introduction; the curve with x-intercepts 109 and 121, the probabilities after the third stage; and the curve with x-intercepts 101 and 129, the probabilities after the fourth. Random pixels within the red image plane of the unaltered object were replaced with the prescribed distribution of digital noise. Noise was then introduced to the green and blue image planes of the object in the same manner using the green and blue intensities as seed values for those distributions. Similarly, noise was introduced to each object of each image and the resulting noisy images were written to disk for subsequent evaluation.

The software automatically analysed the chromatic measurements recorded to generate discriminant classification functions which could reliably determine the chromatic class of each pixel within a given image. An example of the algorithm is presented using two chromatic classes of pixels and measurable variables rather than integer measurements. Table 1 gives a listing of the notation for measurable variables, variable means, and differences in measurable variables. The central strategy of the algorithm is to determine the function that best discriminates among the chromatic classes. This function can be described by Eq. 16, where \lambda_1 to \lambda_4 denote the unknown coefficients and x_1 to x_4 are the measurable variables.

Table 1
Variables used to compute the two-class discriminant function

Measurable variables | Class I (mean) | Class II (mean) | Differences
x_1 | \bar{x}_{11} | \bar{x}_{12} | d_1
x_2 | \bar{x}_{21} | \bar{x}_{22} | d_2
x_3 | \bar{x}_{31} | \bar{x}_{32} | d_3
x_4 | \bar{x}_{41} | \bar{x}_{42} | d_4

Summary of the variables used in Eqs. 16 and 17. Measurable variables are listed in column one, variable means in columns two and three, and differences in measurable variables in column four.

X = \lambda_1 x_1 + \lambda_2 x_2 + \lambda_3 x_3 + \lambda_4 x_4 \qquad (16)

So for any function X, the difference between the means in measurements for the two separate classes may be given by Eq. 17:

D = \lambda_1 d_1 + \lambda_2 d_2 + \lambda_3 d_3 + \lambda_4 d_4 \qquad (17)

where

d_1 = \bar{x}_{11} - \bar{x}_{12}, \qquad d_2 = \bar{x}_{21} - \bar{x}_{22}, \quad \text{etc.} \qquad (18)

d_1 is the difference between class one and class two with respect to the first measurable variable, d_2 is the difference between class one and class two with regard to the second measurable variable, etc.

The sum of the square distances across the four measurable variables can be computed directly. The within-class variance is proportional to the dispersion matrix whose elements are shown in Table 2. The dispersion matrix reduces to Eq. 19:

S = \sum_{p=1}^{4} \sum_{q=1}^{4} \lambda_p \lambda_q S_{pq} \qquad (19)

Table 2
Dispersion matrix

This table lists the elements of the dispersion matrix, where \lambda_1 through \lambda_4 correspond to the unknown coefficients in X and S_{11} through S_{44} represent variations in measurable variables.

The software determines the function, X, which best discriminates between the two classes by varying the four coefficients (\lambda_1, \lambda_2, \lambda_3, \lambda_4) independently. The optimal solution is that which produces the greatest between-class difference in measurable variables with respect to the within-class difference in measurable variables, the D^2/S ratio [30].

The approach was extended to the general segmentation problem in which N classes of pixels were to be delineated. In this case the measurable variables were increased from x_1, x_2, x_3, x_4, as shown in the example, to x_1, ..., x_N. Red, green, and blue intensity values recorded during the software 'training' session were used to compute measurements for hue, saturation, and luminance [59] and serve as measurements for variables x_1, x_2, x_3, x_4, x_5, x_6, respectively. The general form of the classification functions used to segment the noisy test images is shown in Eq. 20:

C_i = A_i + \sum_{j=1}^{N_{criterion}} B_{ij} F_j, \qquad i = 1, \ldots, N_{classes} \qquad (20)

where C_i are the chromatic pixel classes, A_i are constants, B_{ij} are the computed coefficients, N_{criterion} is the number of measured parameters, N_{classes} is the number of pixel classes, and F_j are the red, green, and blue intensity values for j = 1 to 3, and luminance, saturation, and hue for j = 4 to 6. The index i denotes one of the chromatic classes of pixels.

The measurements which had been collected during the training session were passed through the discriminant functions to establish a set of


D.J. Foran, R.A. Berg / Comput. Methods Programs Biomed. 45 (1994) 291-305

composite score ranges for each chromatic class. The chromatic measurements of the remaining pixels within the image were passed through the discriminant functions to determine their composite scores. Pixels were assigned to the chromatic class from which the pairwise squared distance was smallest. Using this technique to segment the noisy computer-generated images resulted in a misclassification of less than 3% of the 34 406 400 pixels (140 images) evaluated. The error was determined by comparing the actual chromatic class of each pixel with the chromatic class reported by the multivariate statistical algorithms.
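The minimum-distance assignment in score space can be sketched as follows. This is a minimal sketch assuming the composite score ranges of each class are summarized by one mean score vector per class; the array shapes and names are our own.

```python
import numpy as np

def segment_pixels(features, A, B, class_scores):
    """Assign pixels to chromatic classes by composite score (Eq. 20).

    features     : (n_pixels, 6) chromatic measurements F_1..F_6
    A            : (n_classes,) constants A_i
    B            : (n_classes, 6) coefficients B_ij
    class_scores : (n_classes, n_classes) mean training score of each
                   class under each discriminant function
    Returns, for every pixel, the index of the class with the smallest
    pairwise squared distance in score space."""
    scores = A + features @ B.T                    # (n_pixels, n_classes)
    d2 = ((scores[:, None, :] - class_scores[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)
```

A pixel whose scores fall between two class ranges is thus resolved in favor of the nearer class rather than rejected.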

3.2.2. Shape analysis. During a software training session designed to 'teach' the computer to recognize shapes, a noisy image containing nine geometric classes of objects was displayed on the color monitor of the Sun 386i RoadRunner workstation while the objects were contoured using a digital mouse. During the session the (x,y) coordinates of each object's boundary were recorded. Fig. 3 shows one of the noisy images used to train the software and the corresponding contours of geometric objects. To determine the capacity of

the system to recognize shapes, segmented test images which had been delineated using the statistical algorithms were forwarded to the shape analysis module of the software, where regions of contiguous pixels of the same chromatic class (objects) were scaled to five different sizes, rotated through 360° in increments of 36°, and contoured automatically using a modified version of the Pavlidis algorithm [60]. The (x,y) coordinates from each contoured object were used to compute 16 harmonics of the elliptic Fourier descriptor using variations of Eqs. 12-14 as detailed by Kuhl and Giardina [42]. Elliptic Fourier descriptors had been selected as shape measurements in order to address the following requirements:

(1) The algorithms should recognize objects of the same shape class through a range of sizes or magnifications.

(2) The algorithms should recognize objects of the same shape class through a plurality of orientations.

(3) The algorithms should recognize objects of the same shape class regardless of translation of the objects throughout the image.
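The three invariance requirements can be illustrated with a simplified contour descriptor. The sketch below uses magnitudes of the complex Fourier coefficients of the boundary, normalized by the first harmonic; it is a simpler surrogate for the elliptic Fourier descriptors of Kuhl and Giardina [42], not the paper's exact formulation, and it assumes the first harmonic is nonzero.

```python
import numpy as np

def contour_descriptor(xs, ys, n_harmonics=16):
    """Shape descriptor invariant to translation, rotation, and scale.

    Dropping the DC term removes translation; taking magnitudes removes
    rotation (and starting point); dividing by the first harmonic removes
    scale."""
    z = np.asarray(xs, float) + 1j * np.asarray(ys, float)
    c = np.fft.fft(z) / len(z)               # complex Fourier coefficients
    harm = np.concatenate([c[1:n_harmonics + 1], c[-n_harmonics:]])
    return np.abs(harm) / np.abs(c[1])
```

An ellipse traced at any position, size, or orientation yields the same descriptor vector, which is precisely what requirements (1)-(3) demand.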

Fig. 3. Shape analysis training set. Nine shape classes of objects were contoured and their (x,y) coordinates used to 'train' the software regarding geometric shape. (A) One of the noisy images that was used to 'train' the software. (B) Contours corresponding to objects in Panel A.


A covariance matrix housing the computed shape descriptors was generated using Eq. 21:

C_{ln} = \frac{1}{N_c} \sum_{m=1}^{N_c} \frac{1}{N_m} \sum_{k=1}^{N_m} (F_{mlk} - F_{ml})(F_{mnk} - F_{mn})    (21)

where C_{ln} is an element of C, N_c is the number of classes, N_m is the number of specimens in the mth class, F_{mlk} is the lth element of the feature vector of the kth specimen of the mth class, F_{ml} is the lth element of the mean feature vector of the mth class, and l = 1, 2, ..., 16 and n = 1, 2, ..., 16.
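Eq. 21 can be transcribed almost directly; the sketch below assumes one NumPy array of feature vectors per shape class.

```python
import numpy as np

def pooled_covariance(class_features):
    """Pooled covariance matrix C of Eq. 21.

    class_features : list of (N_m, 16) arrays, one per shape class, each
                     row a 16-harmonic elliptic Fourier feature vector."""
    n_feat = class_features[0].shape[1]
    C = np.zeros((n_feat, n_feat))
    for F in class_features:                 # sum over classes m
        dev = F - F.mean(axis=0)             # F_mlk - F_ml
        C += (dev.T @ dev) / len(F)          # average over specimens k
    return C / len(class_features)           # average over classes
```

With a single class this reduces to the ordinary population covariance of that class's feature vectors.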

The contoured objects were assigned to that shape class to which the Mahalanobis distance was smallest. The Mahalanobis metric is described in Eq. 22:

d_{mx} = (F_m - F_x)^T C^{-1} (F_m - F_x)    (22)

where d_{mx} is the Mahalanobis distance between the known group m and the unknown object x, F_m is the reference feature vector of group m, F_x is the test feature vector, and C is the pooled covariance matrix. Using Mahalanobis distance metrics to determine the class of the shapes resulted in recognition of 100% of the objects (square, rectangle, isosceles triangle, circle, arc, oval, hexagon, pentagon, octagon) irrespective of orientation, translation, and scale.

3.3. Cytology application

Based on the performance of the statistical algorithms when applied to noisy computer-generated images, it was reasoned that the system was ready to be tested on biomedical data. Stained blood smears extracted from human bone marrow were obtained from surgical pathologists at Robert Wood Johnson University Hospital (RWJUH), New Brunswick, NJ. The specimens were placed on the stage of the Nikon microscope and sampled by the Hitachi color video camera on three channels (R,G,B). While viewing the images on the Sony Trinitron monitor, the coarse and fine adjustment knobs of the microscope were adjusted to bring the specimen into focus.

Fig. 4. Digitized blood smear and frequency histogram. (A) A video copy of a digitized stained blood smear at an instrument magnification of 1000×. During the software training session a 100-pixel × 100-pixel area about the cursor was bilinearly interpolated by the software and displayed continuously in a small window to reduce the chances of a misdirected selection by the operator. (B) A bilinearly interpolated region of the image. (C) The three plots at the top of the panel are the red channel (R), green channel (G), and blue channel (B) frequency histograms of the image, whereas the fourth plot shows the composite frequency histogram. On the abscissa of each plot are the intensity values, whereas the ordinate corresponds to the number of pixels at each intensity.

It was necessary to incorporate a preprocessing module into the software to address issues of variable lighting and instrument noise. Variable illumination across microscopic fields was compensated for by applying flat-field correction to each digital image [14]. Further, 60 frames of video were digitized and averaged for each field of interest using the Matrox MVP acquisition board and software written in C [61] to suppress the effects of digital aberrations and shot noise. The averaged images were then written to the disk of the Sun386i workstation in 512 × 480 × 24-bit RGB binary format. Some of the remaining random optical and electronic noise was reduced by using a smoothing operation to distribute digital noise evenly among neighboring pixels of the image on the R, G, and B planes. Finally, to guard against suppressing desirable image features such as contrast and boundaries, which also tend to have high spatial frequencies, the image was convolved with a sharpening filter on the R, G, and B planes, thereby accentuating contrast and boundaries [14]. Resulting images were displayed on the canvas of the Sun386i workstation, while computer software directed an operator to select a typical set of class-one pixels using a digital mouse. The operator repeated the selection process for N classes of pixels. The R,G,B intensities, hue, saturation, luminance, (x,y) coordinates, and class were recorded for each of the pixels selected. During the entire course of the session, pixels surrounding the cursor were magnified using bilinear interpolation and displayed on a

Fig. 5. The results of segmenting and refining the digitized blood smear using the multivariate statistical methods. (A) Pixels which were classified as cytoplasmic material of the monocytes illuminated. (B) Red blood cell pixels in white. (C) Pixels which were classified as nuclear material of monocytes illuminated. (D) Background pixels rendered in white while all other pixels are rendered in black. (E) Pixels of the nucleated red blood cell illuminated. (F) The contours of six biological objects. The contours represent the following shape classes: red blood cells, nuclei of monocytes, and nuclei of polymorphonuclear cells.


companion window to reduce the chances of a misdirected selection. Fig. 4 shows a video copy of a representative digitized stained blood smear, a magnified subimage, and the frequency histograms of the image. The statistical segmentation methodology used to delineate computer-generated images was then applied to digitized cytological specimens, and the resulting segmented images were refined by examining the class of the eight neighboring pixels, computing the probability that a misclassification occurred during the segmentation operation, and reclassifying any pixel exhibiting a high probability (more than 87%) of having been misclassified. Panels A-F of Fig. 5 show the results of using this approach to process a digitized specimen.
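Two supporting steps of this pipeline, the preprocessing chain (frame averaging, flat-field correction, sharpening) and the neighborhood-based refinement, can be sketched together in NumPy. The unsharp mask and 4-neighbour smoothing kernel are simplified stand-ins for the convolution filters of ref. [14]; the `alpha` gain and the 7-of-8 neighbour threshold (7/8 = 87.5%, just above the paper's 87% cutoff) are our assumptions.

```python
import numpy as np

def preprocess(frames, flat_field, alpha=0.5):
    """Average digitized video frames, correct uneven illumination, then
    sharpen each color plane with an unsharp mask.

    frames     : (n_frames, H, W, 3) RGB frames of one microscopic field
    flat_field : (H, W, 3) image of a blank, evenly illuminated field"""
    avg = frames.mean(axis=0)                              # suppress shot noise
    corrected = avg * (flat_field.mean(axis=(0, 1)) / flat_field)
    blur = (np.roll(corrected, 1, 0) + np.roll(corrected, -1, 0) +
            np.roll(corrected, 1, 1) + np.roll(corrected, -1, 1)) / 4.0
    return corrected + alpha * (corrected - blur)          # accentuate edges

def refine_labels(labels, min_agree=7):
    """Reclassify a pixel when its 8-connected neighbours overwhelmingly
    disagree with it -- a proxy for the misclassification-probability test.

    labels : (H, W) integer array of chromatic class indices."""
    out = labels.copy()
    H, W = labels.shape
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            neigh = labels[y - 1:y + 2, x - 1:x + 2].ravel().tolist()
            neigh.pop(4)                    # drop the centre pixel itself
            winner = max(set(neigh), key=neigh.count)
            if winner != labels[y, x] and neigh.count(winner) >= min_agree:
                out[y, x] = winner
    return out
```

On a uniform field the preprocessing pipeline is the identity, and an isolated single-pixel label surrounded by a different class is reassigned by the refinement pass.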

During a software training session designed to 'teach' the computer to recognize biological shapes, the segmented image was displayed on the color monitor of the Sun 386i RoadRunner workstation while six objects were selected using a computer mouse. Objects representing each shape class were scaled to five different sizes, rotated through 360° in increments of 36°, and contoured. Representative contours are shown in panel F of Fig. 5. The (x,y) coordinates from all contoured objects were used to compute the elliptic Fourier descriptor for each. With the identity of the objects masked from the software, each structure was assigned to one of the three shape classes using Mahalanobis metrics as described in Eq. 22. Polymorphonuclear cells, monocytes, and denucleated red blood cells were recognized regardless of variations in translation, magnification, and orientation.

4. Status report

A system has been developed to automate the processes of segmenting chromatic images, contouring objects, and performing shape analysis and object recognition. The algorithms were shown to be effective for delineating computer-generated images under noisy conditions and for segmenting digitized stained cytological specimens. The shape analysis algorithms within the system have demonstrated the capacity to recognize nine geometric shapes and three biological shape classes regardless of fluctuations in the size, orientation, or translation of the objects throughout the images.

5. Lessons learned

The image interpretation approach presented in this paper can be described as an inductive method in which a general classification algorithm is generated from a set of specific examples. It has deep roots in image processing, discriminant analysis, and pattern recognition. The system has evolved from a series of design modifications that were incorporated into the algorithms to accommodate a range of biomedical applications.

One of the difficulties that we experienced in constructing the system was in selecting feature measurements which simultaneously provided flexibility, robustness, and reliability for automated image interpretation. The primary objective of feature extraction was to extract quantitative measurements from the image which described its essence. Since there is no means of making an a priori choice of the most significant features relevant to the general imaging problem, the selection was an empirical process and required a considerable amount of experimentation. The chromatic feature measurements which were shown to work best for this system were those modelled after the internal color features of human color perception [59]. Regarding the object recognition module of the system, shape feature measurements were selected to be appropriate irrespective of variations in magnification settings on the microscope and regardless of the orientation of the specimens. The success of the system is attributed to three primary features:

(1) The statistical algorithms maximize the difference in chromatic measurements among pixel classes prior to segmentation.

(2) The system distributes pixel misclassifications across the chromatic classes, thereby optimizing the segmentation process.

(3) The redundancy of chromatic and spatial measurements provides versatility and stability to the system, even under noisy conditions.


The system has been shown effective for 264 chromatic classes of pixels, each with several levels of digital noise on three image planes, and for recognizing shapes irrespective of scale, translation, and rotation, but the algorithms do have a theoretical limitation. The statistical algorithms are only appropriate for applications in which the within-class variation in feature measurements does not exceed the variation in feature measurements across the chromatic and shape classes.

6. Future plans

Although the system is not a substitute for human reasoning, it may provide objective information that can assist clinicians and researchers in the interpretation process. The system was shown to be a flexible, objective tool for image segmentation and may have potential as a general-purpose object recognition system provided that it is linked to comprehensive shape libraries.

With the assistance of surgical pathologists at RWJUH, we are expanding the scope and size of the biological shape database. In addition, we are modifying the software by exchanging an X-Window graphical user interface for the existing SunView graphical user interface. This modification should increase the portability of the software. Once the software modifications are completed and the shape database has been expanded, we hope to make the system available to other institutions via anonymous Internet connection.

Acknowledgments

The authors are grateful to surgical pathologists at the Department of Pathology, Robert Wood Johnson University Hospital, for supplying cytological samples used in the analysis and to researchers at the Department of Pathology, UMDNJ-Robert Wood Johnson Medical School for their advice and support.

References

[1] V. Capellini and R. Marconi, Advances in Image Processing and Pattern Recognition, pp. 3-7 (Elsevier Science Publishers, New York, 1986).

I21

I31

I41

I51

I61

[71

I81

I91

I101

Ill1

1121

1131

1141

I151

1161

1171

I181

1191

1201

V.11

WI

1231

A. Pugh, Robot Vision, pp. 133-177 (Springer-Verlag, Heidelberg, 1983). D. Marr. Vision, pp. 167-205 (W. H. Freeman & Com- pany, San Francisco, 1982). D. Ballard and C. Brown. Computer Vision (Prentice- Hall, Englewood Cliffs, 1982). M. Levine, Vision in Man and Machine, pp. 85-99 (McGraw-Hill, New York, 1985). S. Tanimoto and A. Klinger, Structured Computer Vision, pp. 73-87 (Academic Press, New York, 1980). R. Nevatia, Machine Perception, pp. 56-80 (Prentice- Hall, Englewood Cliffs, 1982). G. Gini and M. Gini, A software laboratory for visual in- spection and recognition. J. Pattern Recognition I8 (1985) 43-51. S. Umbaugh, R. Moss, W. Stoecker and G. Hance, Automatic color segmentation algorithms with applica- tion to skin tumor feature identification. Eng. Med. Biol. Mag. 12(3) (1993) 77-78. W. Doyle, Operations useful for similarity in avariant pattern recognition. J. Assoc. Comput. Mach. 9 (1962) 259-67. C. Chow and T. Kaneko, Boundary detection and volume determination of the left ventricle from ci- neangiograms. Comput. Biomed. Res. 3 (1973) 13-26. J. Prewitt and M. Mendelsohn, The analysis of cell im- ages. Ann. NY Acad. Sci. 128 (1966) 1035-1053. T. Robertson, P. Swain and K. Fu, Multispectral image partioning. TR-EE, 73 (1973) 26-52. S. Inoue, Video Microscopy, pp. 328-329 (Plenum Press, New York, 1986). M. Huekel, A local visual operator which recognizes edges and lines. J. ACM 20(4) (1973) 634-647. R. Nevatia, Evaluation of a simplified Huekel edge-line detector. Comput. Graphics Image Proc. 6 (1977) 582-588. R. Ohlander, K. Price and D. Reddy, Picture segmenta- tion using a recursive region splitting method. Comput. Vision Graphics Image Proc. 8 (1978) 313-333. K. Laws, The phoenix image segmentation system: de- scription and evaluation, SRI Int. Conf. 289 (1982)

289-315. A. Sarabi and J. Aggarwal, Segmentation of chromatic images. Pattern Recognition I3 (1981) 417-427. D.J. Foran, F. Cahn, and E. Eikenberry, Assessment of cell proliferation on porous microcarriers by image anal- ysis. Anal. Quantitative Cytol. Histol. I3 (1991) 215-222. P. Sahoo, S. Soltani, A. Wong and Y. Chen, A survey of thresholding techniques. Comput. Vision Graphics Image Proc. 41 (1988) 233-260. M. Pietikainen and D. Harwood, Edge information in color images based on histograms of differences, pp. 112-l I3 (CAR-TR-University of Maryland, Center for Automation Research, 1985). S.P. Smith and A.K. Jain, Chord distributions for shape matching. Comput. Graphics Image Proc. 20 (1982) 259-271.


[24] E. Bribiesca and A. Guzman, How to describe pure form and how to measure differences in shapes using shape numbers, in Proceedings of PRIP, pp. 427-436, August 1979.
[25] H. Freeman, Computer processing of line-drawing images. Comput. Surv. 6 (1974) 57-96.
[26] K. Aoki and K. Yoshino, Recognizer for Handwritten Script Words Using Syntactic Method, pp. 5-18 (World Scientific, New Jersey, 1989).
[27] M. Shridhar and A. Badreldin, High accuracy character recognition algorithms using Fourier and topological descriptors. Pattern Recognition 17 (1984) 515-524.
[28] D.J. Foran, C. Ruppert and R.A. Berg, Histological slides evaluated using statistical image processing. Proc. IEEE Int. Conf. Med. Biol. 12 (1990) 185-186.
[29] R. Tryon and D. Bailey, Cluster Analysis, pp. 4-15 (McGraw-Hill, New York, 1970).
[30] C. Giri, Multivariate Statistical Inference, pp. 240-247 (Academic Press, New York, 1977).
[31] D. Kleinbaum, L. Kupper and K. Muller, Applied Regression Analysis and Other Multivariate Methods, pp. 562-571 (PWS-Kent Publishing Company, Boston, 1988).
[32] S. Inoue, Video Microscopy, pp. 390-391 (Plenum Press, New York, 1986).
[33] M. Genesereth and N. Nilsson, Logical Foundations of Artificial Intelligence, pp. 19-45 (Morgan Kaufman, Palo Alto, CA, 1988).
[34] P. Devijver and J. Kittler, Pattern Recognition: A Statistical Approach, pp. 111-136 (Prentice-Hall, London, 1982).
[35] B. Schachter, L. Davis and A. Rosenfeld, Scene segmentation by cluster detection in color space. SIGART Newslett. 58 (1976) 16-17.
[36] M. Nagao, T. Matsuyama and Y. Ikeda, Region extraction and shape analysis of aerial photographs. SIGART Newslett. 58 (1976) 16-17.
[37] M. Friedhoff, The Second Computer Revolution: Visualization, pp. 18-19 (W.H. Freeman & Company, San Francisco, 1991).
[38] C. Perrot and L. Hamey, Object recognition, a survey of the literature. Macquarie Comput. Rep. (January 1991) 1-41.
[39] G.H. Granlund, Fourier preprocessing for hand print character recognition. IEEE Trans. Comput. C-21 (1972) 195-201.
[40] E. Persoon and K.-S. Fu, Shape discrimination using Fourier descriptors. IEEE Trans. Syst. Man Cybern. SMC-7(3) (1977) 170-179.
[41] C.T. Zahn and R.Z. Roskies, Fourier descriptors for plane closed curves. IEEE Trans. Comput. C-21 (1972) 269-281.
[42] F.P. Kuhl and C.R. Giardina, Elliptic Fourier features of a closed contour. Comput. Graphics Image Proc. 18 (1982) 236-258.
[43] P. Montgomery, Scanning techniques in biology and medicine. Ann. NY Acad. Sci. 97 (1962) 329-526.

[44] W. Tolles, Applications of counting and sizing in medicine and biology. Ann. NY Acad. Sci. 99 (1962) 231-334.
[45] C. Spencer and R. Bostrom, Performance of the cytoanalyzer in recent clinical trials. J. Natl. Cancer Inst. 29 (1962) 267-276.
[46] E. Nadel, Computer analysis of cytophotometric fields by CYDAC and its historical evolution from the cytoanalyzer. Acta Cytol. 9 (1965) 203-206.
[47] X. Castillo, A. Yorkgitis and K. Preston, A study of multidimensional multicolor images. IEEE Trans. Biomed. Eng. 29 (1982) 111-120.
[48] F. Rauch, A. Scheicher and K. Zilles, Recognition of the retrograde reaction in motor neurons using an image analysis system. J. Neurosci. Methods 30 (1989) 255-262.
[49] J. Bacus and J. Gose, Leukocyte pattern recognition. IEEE Trans. Syst. Man Cybern. 4 (1972) 513-526.
[50] H. Harms, H. Gunzer, A. Ruter, M. Haucke and V. Ter Muelen, Computer aided analysis of chromatin network and basophil color for differentiation of mononuclear peripheral blood smears. J. Histochem. Cytochem. 27(1) (1979) 204-209.
[51] R. Gonzalez and P. Wintz, Digital Image Processing, pp. 130-187 (Addison-Wesley, Reading, MA, 1977).
[52] M. Bergmann, H. Heyn, H. Harms and R. Muller-Hermelink, Computer aided cytometry in high grade malignant non-Hodgkin's lymphomas and tonsils. Virchows Arch. B Cell Pathol. 58 (1989) 153-163.
[53] H. Harms, H. Gunzer and H. Aus, Combined local color and texture analysis of stained cells. Comput. Vision Graphics Image Proc. 33 (1986) 364-376.
[54] A. Stansfield, J. Diebold, Y. Kapanci, K. Lennert, K.O. Mioduszewka, H. Noel, F. Rilke, C. Sunstrom and D. Wright, Updated Kiel classification of non-Hodgkin's lymphomas. Lancet (1988) 292-293.
[55] L. Breiman, J. Friedman, R. Olshen and C. Stone, Classification and Regression Trees, pp. 99-145 (Wadsworth International, Belmont, CA, 1984).
[56] D.J. Foran, F. Cahn and E. Eikenberry, Quantifying cell growth on collagen matrix by computer. Proc. East Coast Connective Soc. 1(1) (1989) 127-128.
[57] C. Ruppert, D.J. Foran, R. Scott and R.A. Berg, Recombinant protease nexin enhances dermal wound healing in guinea pigs. Unpublished.
[58] S. Park and K. Miller, Random number generators: good ones are hard to find. Commun. ACM 31(10) (1988) 1192-1201.
[59] C. Garbay, Modelisation de la couleur dans le cadre de l'analyse d'images et de son application a la cytologie automatique. Ph.D. Thesis, Institut National Polytechnique de Grenoble, 1979.
[60] T. Pavlidis, Algorithms for Graphics and Image Processing, p. 143 (Computer Science Press, Rockville, MD, 1982).
[61] G. Ellis, S. Inoue and T. Inoue, Optical Methods in Cell Physiology, pp. 15-30 (Wiley, New York, 1986).
Berg, Recom- binant protease nexin enhances dermal wound healing in guinea pigs. Unpublished. S. Park and K. Miller, Random noise generators: good ones are hard to find. Commun. ACM, 10 (1988) 1192-1201. C. Garbay, Modelisation de la coleur dans le cadre de I’analyse d’images et de son application a la cytologic automatique. Ph.D. Thesis, Institut National Poly- technique de Grenoble, 1979. T. Pavlidis, Algorithms for Graphics and Image Process- ing, pp. 143 (Computer Science Press Incorporated, Rockville, MD, 1982). G. Ellis, S. lnoue and T. Inoue, Optical Methods in Cell Physiology, pp. 15-30 (Wiley, New York, 1986).