A method for quantitative image assessment based on redundant feature measurements and statistical reasoning


    Computer Methods and Programs in Biomedicine 45 (1994) 291-305



    David J. Foran a,*, Richard A. Berg b

    a Department of Pathology, University of Medicine and Dentistry of New Jersey, 675 Hoes Lane, Piscataway, NJ 08854, USA
    b Collagen Corporation, 2500 Faber Place, Palo Alto, CA 94303, USA

    Received 27 January 1994; revision received 5 July 1994; accepted 28 July 1994

    Abstract

    Advances in computer graphics and electronics have contributed significantly to the increased utilization of digital imaging throughout the scientific community. Recently, as the volume of data being gathered for biomedical applications has begun to approach the human capacity for processing, emphasis has been placed on developing an automated approach to assist health scientists in assessing images. Methods that are currently used for analysis often lack sufficient sensitivity for discriminating among elements that exhibit subtle differences in feature measurements. In addition, most approaches are highly interactive. This paper presents an automated approach to segmentation and object recognition in which the spectral and spatial content of images is statistically exploited. Using this approach to assess noisy images resulted in correct classification of more than 97% of the pixels evaluated during segmentation and in recognition of geometric shapes irrespective of variations in size, orientation, and translation. The software was subsequently used to evaluate digitized stained blood smears.

    Keywords: Digital images; Shape descriptors; Chromaticity

    1. Introduction

    Digital imaging has become popular because it offers several advantages over conventional means of collecting and processing pictorial data. Once digitized, images can be analysed, compressed, processed, stored, or transmitted using several modes of communication.

    Image processing refers to the manipulation of digitally encoded images to aid in visual understanding by humans or to facilitate subsequent computer analysis. One class of computer-based algorithms used to assist in the automated interpretation of images is referred to as pattern recognition. The term pattern recognition is generally defined as the ability to perceive structure in sensor-derived data. It is a fundamental activity that is intimately intertwined with our concept of intelligence [1]. In the context of this paper it refers to the computer procedures that operate on pictorial data to deliver interpretations of the digitized scene which are analogous to what a human might deduce were she to view the image directly.

    * Corresponding author.

    0169-2607/94/$07.00 © 1994 Elsevier Science Ireland Ltd. All rights reserved. SSDI 0169-2607(94)01590-C


    Although image processing and pattern recognition have their own distinct application areas, fundamental principles from each field have merged, giving rise to machine vision. Many of the basic components associated with machine vision systems have evolved from attempts to mimic the human visual system. The early stages of computer image processing are analogous to the process that takes place in the human eye and optic nerve, and the pattern recognition mechanisms represent the activities which take place in the human brain [1-7]. Machine vision has been used successfully in many industrial areas including robotics, inspection, process control, material handling, navigation, and parts assembly [8], but advancements in the medical environment have proceeded at a slower rate due to the increased level of image complexity and due to the medical impact of an incorrect assessment [1].

    Segmentation is a crucial step for any machine-driven image assessment system since identification of any objects, structures, or shadows within a digitized field requires delineation into subregions [9]. Algorithms which are currently used to segment images range in complexity from those that utilize monochromatic intensity thresholding to those that rely on color algebra, color clustering, or spatial filtering [4,10-14]. Color-based approaches sometimes improve the accuracy of image subdivision when compared with the performance of monochromatic schemes, but the algorithms often lack sufficient discriminatory power for differentiating among structures exhibiting similar spectral characteristics [15-19]. Schemes that rely on spatial filtering can be computationally cumbersome when large numbers of specimens are evaluated and are often difficult to integrate into automated systems [4,20-22].

    Shape is a concept that has intuitive appeal, but attempts to quantify shape using a computer have had only limited success [23-26]. Algorithms used for shape analysis and object recognition typically rely upon the accuracy of segmentation operations and are especially difficult to codify when they are to be used in applications where objects may present a range of translations, rotations, and scales within the imaged scene [27].

    Our goal was to develop a system which could reliably and automatically segment chromatic images and recognize delineated objects regardless of variations in spatial parameters. The algorithms have evolved from quantitative methods which we had originally developed to evaluate stained cells cultured on porous microcarriers (PMCs) and stained histological sections excised from guinea pigs [20,28]. The approach integrates principles of color theory, image processing, multivariate discriminant analysis, and spatial pattern recognition.

    2. Background

    2.1. Classification

    The notion of establishing classification criteria for scientific gain is not a new concept; in fact, Aristotle was one of the first to apply the technique to biological taxonomy. Linnaeus later devised a classification scheme in which membership in one class rather than another depended on a single distinguishing attribute rather than on any measured value. The main change in classification since then has been an emphasis on measurement. Post-Darwinians searched for a reliable method of quantifying association in measures of biological activity, and it was not until Galton's discovery of the correlation coefficient in 1883 that this quest was brought to fruition. Tables of correlation began to appear in scientific journals and ultimately gave rise to multivariate statistical analysis, which utilizes inter-class and intra-class correlations to make evaluations [29]. At that time most calculations were made by hand, and it was not until the 1950s that multivariate analysis was executed by computer [30,31].

    Today there are many state-of-the-art imaging systems, developed for research or industrial applications, that are based on first- or higher-order predicate calculus extended to allow for inductive inferences [32,33]. Popular inductive methods include interference matching, maximal unifying generalizations, conceptual clustering, and constructive induction [9,33].

    Another general category of induction methods that has been the subject of much research is neural networks. These approaches have their foundations in statistical analysis, through the use of discriminant functions, and are extended first to the perceptron and then to more complex neural-net concepts. They are all based on the concept of finding the best set of coefficients, or weight vectors, that minimize error [9,34]. The main unifying concept for all of these induction methods is that they try to learn to classify input patterns into output patterns [9].
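
    As an illustration of the weight-vector idea only (not the classifier used in this work), the following minimal sketch trains a linear discriminant with a perceptron-style update. The function names, the learning rate, and the {-1, +1} label convention are assumptions made for the example.

```python
import numpy as np

def train_linear_discriminant(X, y, lr=0.1, epochs=100):
    """Perceptron-style search for a weight vector separating two classes.

    X : (n_samples, n_features) array of feature measurements
    y : labels in {-1, +1}
    Returns the learned weights, with a bias term appended.
    """
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(Xb, y):
            if yi * np.dot(w, xi) <= 0:             # misclassified sample
                w += lr * yi * xi                   # nudge the boundary toward it
                mistakes += 1
        if mistakes == 0:                           # linearly separable: done
            break
    return w

def classify(w, X):
    """Assign each row of X to class +1 or -1 using the learned weights."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return np.sign(Xb @ w)
```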

    2.2. Segmentation

    Segmentation of digital images is a crucial step for most automated machine vision systems since many of the subsequent interpretation steps depend on the reliability of this delineation process.

    One of the first thresholding methods used to segment images was the p-tile method [10]. In this method, an image is assumed to consist of dark objects on a light background. By assuming that the percentage of the object area is known, the threshold is defined as the highest grey level which maps at least (100 - p) percent of the pixels into the objects in the thresholded image. An obvious drawback to this method is that it is not applicable to images in which the object area is not known [21].
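
    A minimal sketch of the p-tile idea, assuming dark objects that occupy roughly p percent of the picture area; under that reading, the threshold is simply the grey level at which the cumulative histogram reaches p percent. The function name and the 8-bit grey-level range are assumptions made for the example.

```python
import numpy as np

def p_tile_threshold(image, p):
    """p-tile threshold: dark objects assumed to cover p percent of the image.

    Returns the grey level t such that roughly p percent of the pixels fall
    at or below t; pixels <= t are then labelled as object.
    """
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    cdf = np.cumsum(hist) / hist.sum()           # fraction of pixels <= grey level
    t = int(np.searchsorted(cdf, p / 100.0))     # first level reaching p percent
    return t

# usage: mask = image <= p_tile_threshold(image, p=30)
```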

    Much of the early work in image analysis focused on determining a reliable means of selecting the threshold value. One way to choose the threshold value is to search the histogram of grey levels, assuming that it is bimodal, and find the valley of the histogram. This technique is called the mode method [12]; however, this approach is not appropriate for images with extremely unequal peaks or for those with broad and flat valleys [21]. An elegant refinement of this method assumes that the histogram is the sum of two composite normal functions and determines the valley location from the normal parameters [11].
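
    The mode method can be sketched as follows: smooth the grey-level histogram, locate its two tallest peaks, and take the deepest point between them as the threshold. This is only an illustration of the idea; the smoothing width and peak test are assumptions, and the sketch fails exactly where the text says the method fails, i.e. on histograms that are not clearly bimodal.

```python
import numpy as np

def mode_method_threshold(image, smooth=5):
    """Mode method sketch: threshold at the valley of a bimodal grey-level histogram."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    kernel = np.ones(smooth) / smooth
    h = np.convolve(hist, kernel, mode="same")          # light smoothing
    # local maxima of the smoothed histogram
    peaks = [i for i in range(1, 255) if h[i] > h[i - 1] and h[i] >= h[i + 1]]
    if len(peaks) < 2:
        raise ValueError("histogram is not bimodal enough for the mode method")
    p1, p2 = sorted(sorted(peaks, key=lambda i: h[i])[-2:])   # two tallest peaks
    valley = p1 + int(np.argmin(h[p1:p2 + 1]))          # deepest point between them
    return valley
```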

    Single-threshold methods are useful in simple situations but may not be dependable in cases in which the regions of pixels to be delineated are not connected. Problems often arise when applying these techniques to images which present a background of varying grey level or regions which vary in grey level by more than the threshold. Two modifications of the threshold approach which ameliorate these difficulties are: to high-pass filter the image to de-emphasize the low-frequency background variation and then apply the original technique to the corrected image; or to use a spatially varying threshold method such as that of Chow and Kaneko [11].

    The Chow-Kaneko technique divides the image into rectangular subimages and computes a threshold for each subimage. If any subimage fails to have a threshold, it receives interpolated thresholds from neighboring subimages that are bimodal. The entire picture is then thresholded by using a separate threshold for each subimage.
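
    A simplified sketch in the spirit of the Chow-Kaneko technique, not a reproduction of it: the image is tiled into rectangular subimages, a threshold is computed only where a crude spread test stands in for the bimodality check, and the remaining tiles borrow thresholds from their neighbours. The block grid, the spread test, and the midpoint threshold are all assumptions made for the example.

```python
import numpy as np

def chow_kaneko_like(image, blocks=(4, 4), min_std=10.0):
    """Local thresholding sketch: per-subimage thresholds with neighbour fill-in."""
    h, w = image.shape
    by, bx = blocks
    thresh = np.full(blocks, np.nan)

    for i in range(by):
        for j in range(bx):
            sub = image[i * h // by:(i + 1) * h // by,
                        j * w // bx:(j + 1) * w // bx]
            if sub.std() >= min_std:                  # crude stand-in for a bimodality test
                thresh[i, j] = 0.5 * (sub.min() + sub.max())

    # fill missing thresholds from the mean of available neighbours
    # (assumes at least one subimage passed the test)
    global_fallback = np.nanmean(thresh)
    for i in range(by):
        for j in range(bx):
            if np.isnan(thresh[i, j]):
                neigh = thresh[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
                vals = neigh[~np.isnan(neigh)]
                thresh[i, j] = vals.mean() if vals.size else global_fallback

    # apply each subimage's own threshold
    mask = np.zeros_like(image, dtype=bool)
    for i in range(by):
        for j in range(bx):
            ys, ye = i * h // by, (i + 1) * h // by
            xs, xe = j * w // bx, (j + 1) * w // bx
            mask[ys:ye, xs:xe] = image[ys:ye, xs:xe] > thresh[i, j]
    return mask
```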

    A refined extension of this methodology is commonly referred to as region growing. Reduced to its simplest form, the region growing algorithm considers the entire image as a region and computes the histogram for each of the picture vector components. Next, a peak-finding test is applied to each histogram. If at least one component passes the test, the component with the most significant peak is chosen and two thresholds are determined, one on either side of the peak. Using these thresholds, the image is divided into subregions. These steps are repeated until no new subregions are created. In the multispectral LANDSAT imaging system, thresholding components of a picture vector is extended to recursive application of the technique to non-rectangular subregions [13].
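
    The recursive histogram-splitting loop described above can be sketched for a single-channel image as follows; extending it to picture vectors means running the peak test on each component's histogram and splitting on the most significant one. The bin count, the peak window, and the stopping rules are assumptions made for the example.

```python
import numpy as np

def split_by_histogram_peak(image, mask=None, min_pixels=100, depth=0, max_depth=5):
    """Recursive histogram-splitting segmentation (a sketch of the idea).

    For the pixels inside `mask`, find the most significant histogram peak,
    place thresholds on either side of it, split the region into 'inside the
    peak' and 'outside the peak', and recurse until no useful split remains.
    Returns a list of boolean masks, one per final subregion.
    """
    if mask is None:
        mask = np.ones(image.shape, dtype=bool)
    values = image[mask]
    if values.size < min_pixels or depth >= max_depth:
        return [mask]

    hist, edges = np.histogram(values, bins=64)
    peak = int(np.argmax(hist))                      # most significant peak
    lo = edges[max(peak - 1, 0)]                     # threshold below the peak
    hi = edges[min(peak + 2, len(edges) - 1)]        # threshold above the peak

    inside = mask & (image >= lo) & (image <= hi)
    outside = mask & ~inside
    if inside.sum() < min_pixels or outside.sum() < min_pixels:
        return [mask]                                # split separated nothing useful

    return (split_by_histogram_peak(image, inside, min_pixels, depth + 1, max_depth) +
            split_by_histogram_peak(image, outside, min_pixels, depth + 1, max_depth))
```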

    Other variations on the thresholding paradigm incorporate the use of color vector components. The vector may be further augmented with non-linear combinations of these components. For example, Nevatia extended and refined the Hueckel operator for color edge extraction [15,16] by using intensity and normalized colors as feature measurements. Other chromatic segmentation approaches include the histogram splitting method of Ohlander et al. [17] and others [18], the clustering methods of Schachter et al. [35] and Sarabi and Aggarwal [19], and a method based on edge-preserving smoothing by Nagao et al. [36].

    2.3. Shape analysis

    One of the most valued sets of criterion measures that can be extracted from an image is a description of the shape of the objects contained within the image. Shape is a primal intrinsic property for the visual system [37]. It can be defined as that quality of an object which depends on the relative position of all points composing its outline or external surface.

    While no precise definition of object recognition has been generally accepted, it is usually considered necessary to deduce an objective description of the objects contained in the two-dimensional image. This description may be in terms of geometrical models such as cylinders, prisms, and ellipsoids [38]. For many machine vision applications, the one-dimensional closed contour of a reliably segmented object or region can be considered an unambiguous representation of it [4].

    Many techniques have been used to furnish the computer with an objective description of shape. Aoki and Yoshino [26] used chain codes as feature measurements in their object recognition algorithms. The chain code, which was first described by Freeman [25], approximates a continuous contour by a sequence of piecewise linear fits that consist of eight standardized line segments. The code of a contour is then a chain V of length K,

    $$V = a_1 a_2 a_3 \ldots a_K$$

    where each link a_i is an integer between 0 and 7, oriented in the direction (π/4)a_i (measured counter-clockwise from the x-axis of an x-y coordinate system) and of length 1 or √2 depending, respectively, on whether a_i is even or odd. The vector representation of the link a_i, using phasor notation, is

    $$\left[\, 1 + \left(\frac{\sqrt{2} - 1}{2}\right)\left(1 - (-1)^{a_i}\right) \right] \angle \frac{\pi a_i}{4}$$

    V_1 is an example chain code:

    $$V_1 = 0005676644422123$$

    The Freeman chain in four directions is the curve obtained by walking clockwise on the grid around and outside the squares that are more than half contained by the region [24].
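
    A small sketch of the 8-direction chain code: each link a_i in 0..7 is mapped to its grid displacement (direction (π/4)a_i, length 1 or √2), which allows a contour to be encoded from its boundary points and decoded back again. The function names and grid conventions are assumptions made for the example.

```python
import numpy as np

# Each 8-connected link a in 0..7 points in direction (pi/4)*a measured
# counter-clockwise from the x-axis; even links have length 1, odd links sqrt(2).
LINK_VECTORS = {a: (int(np.rint(np.cos(np.pi * a / 4))),
                    int(np.rint(np.sin(np.pi * a / 4)))) for a in range(8)}
VECTOR_LINKS = {v: a for a, v in LINK_VECTORS.items()}

def chain_code(points):
    """Freeman chain code of a closed contour given as a list of 8-adjacent grid points."""
    code = []
    for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
        code.append(VECTOR_LINKS[(x1 - x0, y1 - y0)])
    return code

def contour_from_chain(start, code):
    """Rebuild the boundary points by walking the chain from a start point."""
    x, y = start
    pts = [(x, y)]
    for a in code:
        dx, dy = LINK_VECTORS[a]
        x, y = x + dx, y + dy
        pts.append((x, y))
    return pts

# example: a unit square traced counter-clockwise gives the code [0, 2, 4, 6]
# chain_code([(0, 0), (1, 0), (1, 1), (0, 1)]) -> [0, 2, 4, 6]
```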

    Shape numbers have also been used in certain recognition schemes [24]. The shape number is the derivative of the Freeman code, obtained by replacing, in clockwise order, each convex corner of the Freeman chain with a 1, each straight corner with a 2, and each concave corner with a 3.
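
    A sketch of the corner-replacement idea behind the shape number, not necessarily the exact derivation used in [24]: each turn along a 4-direction chain is labelled 1, 2, or 3 (convex, straight, concave), and the sequence is rotated to its smallest cyclic form so it does not depend on the starting point. The clockwise-walk convention assumed here determines which turns count as convex.

```python
def shape_number(chain4):
    """Corner code for a 4-direction Freeman chain (a sketch): 1 = convex corner,
    2 = straight, 3 = concave corner, rotated to its smallest cyclic form so the
    result does not depend on the starting point.

    Assumes the boundary is walked clockwise, so a right turn is treated as convex.
    """
    corners = []
    for a, b in zip(chain4, chain4[1:] + chain4[:1]):
        turn = (b - a) % 4                       # 0 straight, 1 left turn, 3 right turn
        corners.append(1 if turn == 3 else 2 if turn == 0 else 3)
    rotations = [corners[i:] + corners[:i] for i in range(len(corners))]
    return min(rotations)                        # starting-point independent form

# example: a clockwise unit square, chain [0, 3, 2, 1], is all convex corners
# shape_number([0, 3, 2, 1]) -> [1, 1, 1, 1]
```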

    Other techniques which have proved useful in optical character recognition research include Fourier analysis, boundary encoding, and polygonal approximation [39-42]. A particularly robust shape descriptor that has been reported is the elliptic Fourier descriptor, a variant of the more popular Fourier descriptor. These feature measurements have been shown to be effective in applications in which invariance to orientation, scale, and translation is required. The general form of the Fourier descriptors of a closed curve can be described and computed using Eqs. 1-4 [42]:

    $$a_n = \frac{T}{2n^2\pi^2} \sum_{i=1}^{K} \frac{\Delta x_i}{\Delta t_i} \left[ \cos\frac{2n\pi t_i}{T} - \cos\frac{2n\pi t_{i-1}}{T} \right] \qquad (1)$$

    $$b_n = \frac{T}{2n^2\pi^2} \sum_{i=1}^{K} \frac{\Delta x_i}{\Delta t_i} \left[ \sin\frac{2n\pi t_i}{T} - \sin\frac{2n\pi t_{i-1}}{T} \right] \qquad (2)$$

    $$c_n = \frac{T}{2n^2\pi^2} \sum_{i=1}^{K} \frac{\Delta y_i}{\Delta t_i} \left[ \cos\frac{2n\pi t_i}{T} - \cos\frac{2n\pi t_{i-1}}{T} \right] \qquad (3)$$

    $$d_n = \frac{T}{2n^2\pi^2} \sum_{i=1}^{K} \frac{\Delta y_i}{\Delta t_i} \left[ \sin\frac{2n\pi t_i}{T} - \sin\frac{2n\pi t_{i-1}}{T} \right] \qquad (4)$$

    where,

    $$\Delta x_i = x_i - x_{i-1}, \qquad \Delta y_i = y_i - y_{i-1} \qquad (5)$$

    $$\Delta t_i = \sqrt{(\Delta x_i)^2 + (\Delta y_i)^2} \qquad (6)$$


    $$t_i = \sum_{j=1}^{i} \Delta t_j \qquad (7)$$

    $$T = \sum_{i=1}^{K} \Delta t_i \qquad (8)$$
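
    The coefficients in Eqs. 1-8 translate almost directly into code. The sketch below assumes the contour is supplied as K distinct boundary points (so no chord length is zero) and computes a_n, b_n, c_n, d_n for the first few harmonics; the function name and the number of harmonics are choices made for the example, not part of the method.

```python
import numpy as np

def fourier_coefficients(points, n_harmonics=10):
    """Fourier descriptor coefficients a_n, b_n, c_n, d_n of a closed contour,
    following Eqs. 1-8 as a direct transcription (not optimized).

    points : (K, 2) array of boundary coordinates (x_i, y_i); closure is implicit.
    Returns four arrays of length n_harmonics, for n = 1..n_harmonics.
    """
    pts = np.asarray(points, dtype=float)
    diffs = np.diff(np.vstack([pts[-1:], pts]), axis=0)   # (x_i - x_{i-1}, y_i - y_{i-1})
    dx, dy = diffs[:, 0], diffs[:, 1]
    dt = np.hypot(dx, dy)                                 # Eq. 6: chord lengths
    t = np.cumsum(dt)                                     # Eq. 7: arc length at point i
    T = t[-1]                                             # Eq. 8: total perimeter
    t_prev = t - dt                                       # t_{i-1}

    a = np.zeros(n_harmonics); b = np.zeros(n_harmonics)
    c = np.zeros(n_harmonics); d = np.zeros(n_harmonics)
    for n in range(1, n_harmonics + 1):
        k = T / (2.0 * n**2 * np.pi**2)
        cos_term = np.cos(2 * n * np.pi * t / T) - np.cos(2 * n * np.pi * t_prev / T)
        sin_term = np.sin(2 * n * np.pi * t / T) - np.sin(2 * n * np.pi * t_prev / T)
        a[n - 1] = k * np.sum(dx / dt * cos_term)         # Eq. 1
        b[n - 1] = k * np.sum(dx / dt * sin_term)         # Eq. 2
        c[n - 1] = k * np.sum(dy / dt * cos_term)         # Eq. 3
        d[n - 1] = k * np.sum(dy / dt * sin_term)         # Eq. 4
    return a, b, c, d
```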

    One can define the phase shift from the first major axis by

    $$\theta_1 = \frac{1}{2} \arctan\!\left[ \frac{2(a_1 b_1 + c_1 d_1)}{a_1^2 - b_1^2 + c_1^2 - d_1^2} \right] \qquad (9)$$

    To make the Fourier descriptors independent of the starting point, θ_1 can be limited...
