
Source: Shodhganga, shodhganga.inflibnet.ac.in/bitstream/10603/24460/9/09_chapter4.pdf


CHAPTER 4

TEXTURE FEATURE EXTRACTION

This chapter deals with various feature extraction techniques based on spatial, transform, edge and boundary, colour, shape and texture features. A brief introduction to these features is given first, before describing the gray level co-occurrence matrix based feature extraction technique.

4.1 INTRODUCTION

Image analysis involves investigation of the image data for a

specific application. Normally, the raw data of a set of images is analyzed to

gain insight into what is happening with the images and how they can be used

to extract desired information. In image processing and pattern recognition,

feature extraction is an important step, which is a special form of

dimensionality reduction. When the input data is too large to be processed and is suspected to be redundant, it is transformed into a reduced set of

feature representations. The process of transforming the input data into a set

of features is called feature extraction. Features often contain information

relative to colour, shape, texture or context.

4.2 TYPES OF FEATURE EXTRACTION

Many techniques have been used to extract features from images.

Some of the commonly used methods are as follows:


Spatial features

Transform features

Edge and boundary features

Colour features

Shape features

Texture features

4.2.1 Spatial Features

Spatial features of an object are characterized by its gray level,

amplitude and spatial distribution. Amplitude is one of the simplest and most

important features of the object. In X-ray images, the amplitude represents the

absorption characteristics of the body masses and enables discrimination of

bones from tissues.

4.2.1.1 Histogram features

The histogram of an image refers to intensity values of pixels. The

histogram shows the number of pixels in an image at each intensity value.

Figure 4.1 shows the histogram of an image and it shows the distribution of

pixels among those grayscale values. An 8-bit grayscale image has 256 possible intensity values. A narrow histogram indicates a low-contrast region. Some of the common histogram features, discussed by Myint (2001), are mean, variance, energy, skewness, median and kurtosis.
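As a concrete illustration, the histogram count h(i) can be computed directly with NumPy; the tiny image below is hypothetical, chosen so its values cluster in a narrow band (a low-contrast region):

```python
import numpy as np

def gray_histogram(image, levels=256):
    """Count how many pixels take each intensity value i, i.e. h(i)."""
    return np.bincount(np.asarray(image).ravel(), minlength=levels)

# Hypothetical 8-bit image: values cluster around 100..102 -> narrow histogram.
img = np.array([[100, 101, 102],
                [101, 100, 101],
                [102, 101, 100]], dtype=np.uint8)

h = gray_histogram(img)
print(h[100], h[101], h[102])  # counts of the three occupied bins
```

Only three of the 256 bins are occupied, which is exactly the narrow-histogram signature of low contrast.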


Figure 4.1 Histogram of an image

4.2.2 Transform Features

Generally the transformation of an image provides the frequency

domain information of the data. The transform features of an image are

extracted using zonal filtering. This is also called a feature mask, the mask being a slit or an aperture. The high frequency components are

commonly used for boundary and edge detection. The angular slits can be

used for orientation detection. Transform feature extraction is also important when the input data originates in the transform domain.

4.2.3 Edge and Boundary Features

Asner and Heidebrecht (2002) discussed that edge detection is one of the most difficult tasks, and hence a fundamental problem, in image processing.

Edges in images are areas with strong intensity contrast and a jump in

intensity from one pixel to the next can create major variation in the picture

quality. Edge detection of an image significantly reduces the amount of data


and filters out unimportant information, while preserving the important

properties of an image. Edges are scale-dependent and an edge may contain

other edges, but at a certain scale, an edge still has no width. If the edges in an

image are identified accurately, all the objects are located and their basic

properties such as area, perimeter and shape can be measured easily.

Therefore edges are used for boundary estimation and segmentation in the

scene.

4.2.3.1 Sobel technique

The Sobel edge detection technique consists of a pair of 3×3 convolution

kernels. One kernel is simply the other rotated by 90° as shown in Figure 4.2.

These kernels are designed to respond maximally to edges running vertically

and horizontally relative to the pixel grid of the image, one kernel for each of

the two perpendicular orientations. The kernels can be applied separately to

the input image, to produce separate measurements of the gradient component

in each orientation. These can then be combined together to find the absolute

magnitude of the gradient at each point and the orientation of that gradient.

        -1  0  +1              +1  +2  +1
Gx =    -2  0  +2       Gy =    0   0   0
        -1  0  +1              -1  -2  -1

Figure 4.2 Masks used for Sobel operator
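As a sketch of how the two masks are applied, the small image and the plain "valid"-region filtering routine below are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

# Sobel kernels from Figure 4.2: Gx responds to vertical edges, Gy to horizontal ones.
GX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
GY = np.array([[ 1,  2,  1],
               [ 0,  0,  0],
               [-1, -2, -1]], dtype=float)

def filter2d_valid(image, kernel):
    """Slide the kernel over the image (cross-correlation, 'valid' region only)."""
    h, w = kernel.shape
    H, W = image.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

# Hypothetical image containing a vertical step edge.
img = np.array([[0, 0, 10, 10],
                [0, 0, 10, 10],
                [0, 0, 10, 10]], dtype=float)

gx = filter2d_valid(img, GX)
gy = filter2d_valid(img, GY)
magnitude = np.abs(gx) + np.abs(gy)   # common approximation of the gradient magnitude
print(magnitude)                       # strong response all along the vertical step
```

Gy is zero everywhere for this image, since there is no horizontal edge; the combined magnitude comes entirely from Gx.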


4.2.3.2 Roberts technique

The Roberts cross operator performs a simple, quick to compute, 2-D

spatial gradient measurement on an image. Pixel values at each point in the

output represent the estimated absolute magnitude of the spatial gradient of

the input image at that point. The operator consists of a pair of 2×2 convolution kernels as shown in Figure 4.3. One kernel is simply the other rotated by 90°. This is very similar to the Sobel operator.

Gx =   +1   0        Gy =    0  +1
        0  -1              -1   0

Figure 4.3 Masks used for Roberts operator

4.2.3.3 Prewitt technique

The Prewitt operator is similar to the Sobel operator and is used for

detecting vertical and horizontal edges in images.

        -1  0  +1              +1  +1  +1
Gx =    -1  0  +1       Gy =    0   0   0
        -1  0  +1              -1  -1  -1

Figure 4.4 Masks for the Prewitt gradient edge detector


The Prewitt operator measures two components. The vertical edge component is calculated with kernel Gx and the horizontal edge component is calculated with kernel Gy, as shown in Figure 4.4. |Gx| + |Gy| gives an indication of the intensity of the gradient at the current pixel.
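A minimal sketch of the |Gx| + |Gy| measure on a single 3×3 neighbourhood, using the standard Prewitt kernels; the patch values are hypothetical:

```python
import numpy as np

# Prewitt kernels: Gx responds to vertical edges, Gy to horizontal edges.
GX = np.array([[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]], dtype=float)
GY = np.array([[ 1,  1,  1],
               [ 0,  0,  0],
               [-1, -1, -1]], dtype=float)

def prewitt_response(window):
    """|Gx| + |Gy| for one 3x3 neighbourhood, the gradient intensity in the text."""
    return abs(np.sum(window * GX)) + abs(np.sum(window * GY))

# Hypothetical 3x3 patch with a vertical step edge (no horizontal component).
patch = np.array([[1, 1, 5],
                  [1, 1, 5],
                  [1, 1, 5]], dtype=float)
print(prewitt_response(patch))
```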

4.2.3.4 Canny technique

The Canny edge detection algorithm is known popularly as the

optimal edge detector. The Canny algorithm uses an optimal edge detector

based on a set of criteria which include finding the most edges by minimizing

the error rate, marking edges as closely as possible to the actual edges to

maximize localization, and marking edges only once when a single edge

exists for minimal response. According to Canny, the optimal filter that meets

all three criteria can be efficiently approximated using the first derivative

of a Gaussian function. The first stage involves smoothing the image by

convolving with a Gaussian filter. This is followed by finding the gradient of

the image by feeding the smoothed image through a convolution operation

with the derivative of the Gaussian in both the vertical and horizontal

directions. This process alleviates problems associated with edge

discontinuities by identifying strong edges, and preserving the relevant weak

edges, in addition to maintaining some level of noise suppression.
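The first two stages described above (Gaussian smoothing, then differentiation) can be sketched in one dimension; the scanline, the sigma and the kernel radius below are hypothetical choices, and hysteresis thresholding is omitted:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Samples of a 1-D Gaussian, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def smooth_then_differentiate(row, sigma=1.0):
    """First Canny stages on one scanline: Gaussian smoothing, then a derivative."""
    g = gaussian_kernel1d(sigma, radius=3)
    smoothed = np.convolve(row, g, mode="same")
    return np.diff(smoothed)          # forward difference approximates the derivative

# Hypothetical noisy scanline with a single true step edge between indices 4 and 5.
row = np.array([0, 0, 1, 0, 0, 10, 10, 9, 10, 10], dtype=float)
d = smooth_then_differentiate(row)
print(int(np.argmax(np.abs(d))))      # the derivative peaks at the true edge, index 4
```

The small noise bumps (values 1 and 9) are suppressed by the smoothing, so the derivative peak identifies the genuine step rather than the noise.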

Figure 4.5 Input landsat image


Figure 4.6 Output of the edge detection techniques

Finally, hysteresis is used as a means of eliminating streaking.

Streaking is the breaking up of an edge contour caused by the operator output

fluctuating above and below the threshold. Figure 4.6 shows the output of the different edge detection techniques for the input image shown in Figure 4.5.

4.2.4 Colour Features

Colour is a visual attribute of objects that results from the light emitted, transmitted or reflected by them. From a mathematical viewpoint, the

colour signal is an extension from scalar-signals to vector-signals. Colour

features can be derived from a histogram of the image. The weakness of the colour histogram is that the histograms of two different objects with the same colour distribution can be equal. Platt and Goetz (2004) discussed that colour features are still useful for many biomedical image processing applications such as


cell classification, cancer cell detection and content-based image retrieval

(CBIR) systems.

In CBIR, every image added to the collection is analyzed to

compute a colour histogram. At search time, the user can either specify the

desired proportion of each colour or submit an example image from which a

colour histogram is calculated. Either way, the matching process then

retrieves those images whose colour histograms match those of the query

most closely.

4.2.5 Shape Features

The shape of an object refers to its physical structure and profile.

Shape features are mostly used for finding and matching shapes, recognizing objects or making measurements of shapes. Moment, perimeter, area and

orientation are some of the characteristics used for shape feature extraction

technique. The shape of an object is determined by its external boundary

abstracting from other properties such as colour, content and material

composition, as well as from the object's other spatial properties.

4.2.6 Texture Features

Guiying Li (2012) defined texture as a repeated pattern of information or an arrangement of structure at regular intervals. In a general sense, texture refers to the surface characteristics and appearance of an object, given by the size, shape, density, arrangement and proportion of its elementary parts. The basic stage of collecting such features through the texture analysis process is called texture feature extraction. Due to the significance of texture

information, texture feature extraction is a key function in various image

processing applications like remote sensing, medical imaging and content-

based image retrieval.


There are four major application domains related to texture analysis

namely texture classification, segmentation, synthesis and shape from texture.

Texture classification produces a classified output of the input image where

each texture region is identified with the texture class to which it belongs.

Texture segmentation makes a partition of an image into a set of

disjoint regions based on texture properties, so that each region is

homogeneous with respect to certain texture characteristics.

Texture synthesis is a common technique to create large textures

from usually small texture samples, for the use of texture mapping in surface

or scene rendering applications.

The shape from texture reconstructs three dimensional surface

geometry from texture information. For all these techniques, texture

extraction is an inevitable stage. A typical process of texture analysis is

shown in Figure 4.7.

Input Image → Pre-processing → Feature extraction → Segmentation / Classification / Synthesis / Shape from texture → Post-processing

Figure 4.7 Various image analysis steps


4.3 TEXTURE FEATURE EXTRACTION

Neville et al (2003) discussed that texture features can be extracted using several methods, such as statistical, structural, model-based and transform approaches.

4.3.1 Structural based Feature Extraction

Structural approaches represent texture by well-defined primitives

and a hierarchy of spatial arrangements of those primitives. The description of

the texture needs the primitive definition. The advantage of the structural

method based feature extraction is that it provides a good symbolic

description of the image; however, this feature is more useful for image

synthesis than analysis tasks. This method is not appropriate for natural

textures because of the variability of micro-texture and macro-texture.

4.3.2 Statistical based Feature Extraction

Statistical methods characterize the texture indirectly according to

the non-deterministic properties that govern the relationships between the

gray levels of an image. Statistical methods are used to analyze the spatial

distribution of gray values by computing local features at each point in the

image and deriving a set of statistics from the distributions of the local

features. The statistical methods can be classified into first order (one pixel),

second order (pair of pixels) and higher order (three or more pixels) statistics.

The first order statistics estimate properties (e.g. average and variance) of

individual pixel values, ignoring the spatial interaction between image

pixels. The second order and higher order statistics estimate properties of two

or more pixel values occurring at specific locations relative to each other. The

most popular second order statistical features for texture analysis are derived


from the co-occurrence matrix. Statistical based texture features will be

discussed in section 4.4.

4.3.3 Model based Feature Extraction

Model based texture analysis methods, such as the fractal model and the Markov model, are based on the structure of an image that can be used for describing texture and synthesizing it. These methods describe an image as a probability model or as a linear combination of a set of basis functions. The fractal

model or as a linear combination of a set of basic functions. The Fractal

model is useful for modeling certain natural textures that have a statistical

quality of roughness at different scales and self similarity, and also for texture

analysis and discrimination.

There are different types of model based feature extraction techniques depending on the neighbourhood system and noise sources. The

different types are one-dimensional time-series models, Auto Regressive

(AR), Moving Average (MA) and Auto Regressive Moving Average

(ARMA). Random field models analyze spatial variations in two dimensions.

Global random field models treat the entire image as a realization of a random

field, and local random field models assume relationships of intensities in

small neighbourhoods. A widely used class of local random field models is the Markov model, where the conditional probability of the intensity of a given

pixel depends only on the intensities of the pixels in its neighbourhood.

4.3.4 Transform based Feature Extraction

Transform methods, such as Fourier, Gabor and wavelet transforms

represent an image in space whose co-ordinate system has an interpretation

that is closely related to the characteristics of a texture. Methods based on

Fourier transforms have a weakness in spatial localization, so they do not perform well. Gabor filters provide means for better spatial localization, but


their usefulness is limited in practice because there is usually no single filter

resolution where one can localize a spatial structure in natural textures. These

methods involve transforming original images by using filters and calculating

the energy of the transformed images. These methods are based on processing the whole image, which is not suitable for applications that concern only one part of the input image.

4.4 STATISTICAL BASED FEATURES

The three different types of statistical based features are first order

statistics, second order statistics and higher order statistics as shown in Figure 4.8.

Figure 4.8 Statistical based features

4.4.1 First Order Histogram based Features

First Order histogram provides different statistical properties such as

four statistical moments of the intensity histogram of an image. These depend

only on individual pixel values and not on the interaction or co-occurrence of

neighbouring pixel values. The four first order histogram statistics are mean,

variance, skewness and kurtosis.



A histogram h for a grayscale image I with intensity values in the range I(x, y) ∈ [0, K - 1] contains exactly K entries, where for a typical 8-bit grayscale image K = 2^8 = 256. Each individual histogram entry is defined as h(i) = the number of pixels in I with intensity value i, for all 0 ≤ i < K. Equation (4.1) defines the histogram as

h(i) = cardinality{ (x, y) | I(x, y) = i }          (4.1)

where cardinality denotes the number of elements in a set. The standard deviation and skewness of the intensity histogram are defined in Equations (4.2) and (4.3):

σ = sqrt( Σ (I(x, y) - m)^2 / N )          (4.2)

skewness = Σ (I(x, y) - m)^3 / (N σ^3)          (4.3)

where m is the mean intensity, N is the number of pixels, and the sums run over all pixels (x, y).
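Equations (4.2) and (4.3) translate directly into NumPy; the patch below is hypothetical, with a symmetric distribution so the skewness is zero:

```python
import numpy as np

def first_order_stats(image):
    """Mean, standard deviation (4.2) and skewness (4.3) of the intensity values."""
    v = np.asarray(image, dtype=float).ravel()
    n = v.size
    m = v.mean()
    sigma = np.sqrt(np.sum((v - m) ** 2) / n)
    skew = np.sum((v - m) ** 3) / (n * sigma ** 3)
    return m, sigma, skew

# Hypothetical patch; the values 1, 2, 3, 2 are symmetric about the mean 2.
patch = [[1, 2], [3, 2]]
m, sigma, skew = first_order_stats(patch)
print(m, round(sigma, 4), round(skew, 4))
```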

4.4.2 Second Order Gray Level Co-occurrence Matrix Features

Some previous research works compared texture analysis methods;

Dulyakarn et al (2000) compared texture images using GLCM and Fourier spectra in classification. Maillard (2003) performed a comparison between GLCM, the semi-variogram and Fourier spectra for the same purpose. Bharati et al (2004) studied a comparison of GLCM, wavelet texture analysis and multivariate statistical analysis based on PCA (Principal Component Analysis). In those works, GLCM is suggested as the most effective texture analysis scheme. Monika Sharma et al (2012) discussed that GLCM is applicable for different texture feature analyses.


The GLCM is a well-established statistical device for extracting

second order texture information from images. A GLCM is a matrix where

the number of rows and columns is equal to the number of distinct gray levels

or pixel values in the image of that surface. GLCM is a matrix that describes

the frequency of one gray level appearing in a specified spatial linear

relationship with another gray level within the area of investigation. Given an image composed of pixels, each with an intensity, the GLCM is a tabulation of how often different combinations of gray levels co-occur in an image or image section.

Texture feature calculations use the contents of the GLCM to give a

measure of the variation in intensity at the pixel of interest. Typically, the co-occurrence matrix is computed based on two parameters: the relative distance d between the pixel pair, measured in pixels, and their relative orientation θ. Normally, θ is quantized in four directions (e.g., 0°, 45°, 90° and 135°), even though various other combinations could be possible.

GLCM has fourteen features, but among them the most useful are angular second moment (ASM), contrast, correlation, inverse difference moment, sum entropy and the information measures of correlation. These features are quite promising.

4.4.3 Gray Level Run Length Matrix Features

Petrou et al (2006) defined the gray level run length matrix (GLRLM) as the number of runs with pixels of gray level i and run length j for a given direction. A GLRLM is generated for each sample image fragment. A set of consecutive pixels with the same gray level is called a gray level run, and the number of pixels in a run is the run length. In order to extract texture features, gray level run length matrices are computed. Each element (i, j) of the GLRLM represents the number of runs of gray level i having


length j. A GLRLM can be computed for any direction. Mostly, five features are derived from the GLRLM: Short Runs Emphasis (SRE), Long Runs Emphasis (LRE), Gray Level Non-Uniformity (GLNU), Run Length Non-Uniformity (RLNU) and Run Percentage (RPERC). These are quite effective in representing binary textures.
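A minimal sketch of building a horizontal (0°) GLRLM and one derived feature, Short Runs Emphasis; the two-level image is hypothetical, and SRE is taken as (1/K) Σ M(i, j)/j² with run length j:

```python
import numpy as np
from itertools import groupby

def glrlm_horizontal(image, levels):
    """GLRLM for the 0-degree direction: element (i, j) counts runs of
    gray level i with run length j + 1 (column index is length minus one)."""
    image = np.asarray(image)
    M = np.zeros((levels, image.shape[1]), dtype=int)
    for row in image:
        for value, run in groupby(row):
            M[value, len(list(run)) - 1] += 1
    return M

# Tiny hypothetical image with gray levels 0 and 1.
img = np.array([[0, 0, 1, 1],
                [0, 1, 1, 1]])
M = glrlm_horizontal(img, levels=2)

# Short Runs Emphasis: short runs are weighted most heavily (divide by length^2).
K = M.sum()                              # total number of runs
lengths = np.arange(M.shape[1]) + 1
sre = float((M / lengths[np.newaxis, :] ** 2).sum() / K)
print(M, round(sre, 4))
```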

4.4.4 Local Binary Pattern Features

The local binary pattern (LBP) operator was introduced as a complementary measure of local image contrast. Lahdenoja (2005) discussed that the LBP operator combines statistical and structural texture analysis. The LBP describes texture with its smallest primitives, called textons (or histograms of texture elements). For each pixel in an image, a binary code is produced by thresholding its neighbourhood with the value of the center pixel. A

histogram is then assembled to collect the occurrences of different binary

codes representing different types of curved edges, spots, flat areas, etc.

This histogram, arranged as a feature vector, is the result of applying the LBP operator. The LBP operator considers only the eight nearest neighbours of each pixel; it is rotation variant but invariant to monotonic changes in gray scale. The dimensionality of the LBP feature

LBP is one of the most used approaches in practical applications, as it has the

advantage of simple implementation and fast performance.
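A minimal sketch of the basic 8-neighbour LBP operator on one 3×3 patch; the neighbour ordering and the patch values are arbitrary choices for illustration:

```python
import numpy as np

def lbp_code(patch):
    """8-neighbour LBP code of a 3x3 patch: threshold neighbours at the center."""
    center = patch[1, 1]
    # Neighbours in a fixed clockwise order starting top-left (bit weights 1..128).
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(coords):
        if patch[r, c] >= center:      # neighbour at least as bright -> bit set
            code |= 1 << bit
    return code

# Hypothetical 3x3 neighbourhood.
patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [2, 6, 3]])
print(lbp_code(patch))
```

A histogram of these codes over all pixels of an image would form the LBP feature vector described above.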

Some related features are Scale-Invariant Feature Transform (SIFT)

descriptor (SIFT is a distinctive invariant feature set that is suitable for

describing local textures), LPQ (Local Phase Quantization) operator, Center-

Symmetric LBP (CS-LBP) and Volume-LBP.


4.4.5 Auto Correlation Features

An important characteristic of texture is the repetitive nature of the

position of texture elements in the image. An autocorrelation function can be evaluated that measures this coarseness. The autocorrelation feature is computed based on the observation that some textures, such as textiles, are repetitive in nature. The autocorrelation feature of an image is used to evaluate the fineness or roughness of the texture present in the image. This function is related to the size of the texture primitive, that is, the fineness of the texture. If the texture is rough or unsmooth, the autocorrelation function falls off slowly; otherwise it falls off very quickly. For regular textures, the autocorrelation function shows peaks and valleys. It is related to the power spectrum of the Fourier transform. It is also sensitive to noise

with power spectrum of the fourier transform. It is also responsive to noise

interference. The autocorrelation function of an image ),( yxI is defined in

Equation (4.4) as follows

N

u

N

v

N

u

N

v

vuI

yvxuIvuIyxP

0 0

2

0 0

),(

),(),(),( (4.4)
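Equation (4.4) can be evaluated for a single shift (x, y) as follows; the small periodic image is hypothetical and illustrates the peaks-and-valleys behaviour described above:

```python
import numpy as np

def autocorrelation(image, dx, dy):
    """Normalized autocorrelation P(x, y) of Equation (4.4) for one shift (dx, dy).
    Only the overlapping region of the shifted image is used."""
    I = np.asarray(image, dtype=float)
    H, W = I.shape
    num = np.sum(I[dy:, dx:] * I[:H - dy, :W - dx])
    return num / np.sum(I ** 2)

# Hypothetical periodic texture with period 2 along the rows.
img = np.array([[1, 0, 1, 0],
                [1, 0, 1, 0]])

# In-phase shifts give peaks, out-of-phase shifts give valleys.
print(autocorrelation(img, 0, 0), autocorrelation(img, 1, 0), autocorrelation(img, 2, 0))
```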

4.4.6 Co-occurrence Matrix – SGLD

Statistical methods use second order statistics to model the

relationships between pixels within the region by constructing Spatial Gray

Level Dependency (SGLD) matrices. A SGLD matrix is the joint probability

occurrence of gray levels i and j for two pixels with a defined spatial

relationship in an image. The spatial relationship is defined in terms of the distance d and the angle θ. If the texture is coarse and the distance d is small

compared to the size of the texture elements, the pairs of points at distance d

should have similar gray levels. Conversely, for a fine texture, if distance d is


comparable to the texture size, then the gray levels of points separated by

distance d should often be quite different, so that the values in the SGLD

matrix should be spread out relatively uniformly.

Hence, one way to analyze texture coarseness would be to compute, for various values of the distance d, some measure of the scatter of the SGLD matrix around the main diagonal. Similarly, if the texture has some direction, i.e., is coarser in one direction than another, then the degree of spread of the values about the main diagonal in the SGLD matrix should vary with the direction θ. Thus texture directionality can be analyzed by comparing spread measures of SGLD matrices constructed for various values of d. From SGLD matrices, a

variety of features may be extracted.

From each matrix, 14 statistical measures are extracted including:

angular second moment, contrast, correlation, variance, inverse difference

moment, sum average, sum variance, sum entropy, difference variance,

difference entropy, information measure of correlation I, information measure of correlation II and maximal correlation coefficient. The measurements

average the feature values in all four directions.

4.4.7 Edge Frequency based Texture Features

A number of edge detectors can be used to yield an edge image from

an original image. An edge dependent texture description function E can be

computed using Equation (4.5) as follows

E = |f(i, j) - f(i + d, j)| + |f(i, j) - f(i - d, j)| + |f(i, j) - f(i, j + d)| + |f(i, j) - f(i, j - d)|          (4.5)

This function is inversely related to the autocorrelation function.

Texture features can be evaluated by choosing specified distances d. Varying the distance parameter d from 1 to 70 gives a total of 70 features.
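A sketch of Equation (4.5), averaged over the valid interior pixels to give a single texture value per distance d; the averaging step and the checkerboard image are illustrative assumptions:

```python
import numpy as np

def edge_frequency(image, d):
    """Average of |f(i,j)-f(i±d,j)| + |f(i,j)-f(i,j±d)| over interior pixels."""
    f = np.asarray(image, dtype=float)
    inner = f[d:-d, d:-d]                       # pixels with all 4 neighbours valid
    e = (np.abs(inner - f[:-2 * d, d:-d]) +     # neighbour d rows above
         np.abs(inner - f[2 * d:, d:-d]) +      # neighbour d rows below
         np.abs(inner - f[d:-d, :-2 * d]) +     # neighbour d columns left
         np.abs(inner - f[d:-d, 2 * d:]))       # neighbour d columns right
    return e.mean()

# Hypothetical checkerboard: every neighbour differs by 1, so E(d=1) is maximal.
board = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]])
print(edge_frequency(board, 1))
```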


4.4.8 Primitive Length Texture Features

Coarse textures are represented by a large number of neighbouring

pixels with the same gray level, whereas a small number represents fine

texture. A primitive is a maximal continuous set of pixels in the same direction that have the same gray level. Each primitive is defined by its

gray level, length and direction. Let B(a, r) denote the number of primitives of all directions having length r and gray level a. Let M, N be the image dimensions, L the number of gray levels, N_r the maximum primitive length in the image and K the total number of runs. K is given by Equation (4.6) as

K = Σ_{a=1}^{L} Σ_{r=1}^{N_r} B(a, r)          (4.6)

Then, Equations (4.7) - (4.11) define the five features of image texture.

Short primitive emphasis = (1/K) Σ_{a=1}^{L} Σ_{r=1}^{N_r} B(a, r) / r^2          (4.7)

Long primitive emphasis = (1/K) Σ_{a=1}^{L} Σ_{r=1}^{N_r} B(a, r) r^2          (4.8)

Gray level uniformity = (1/K) Σ_{a=1}^{L} ( Σ_{r=1}^{N_r} B(a, r) )^2          (4.9)

Primitive length uniformity = (1/K) Σ_{r=1}^{N_r} ( Σ_{a=1}^{L} B(a, r) )^2          (4.10)

Primitive percentage = K / Σ_{a=1}^{L} Σ_{r=1}^{N_r} r B(a, r) = K / (M N)          (4.11)


4.4.9 Laws' Texture Features

Laws observed that certain gradient operators, such as the Laplacian and Sobel operators, accentuate the underlying microstructure of texture within an image. This was the basis for a feature extraction scheme based on a series of pixel impulse response arrays obtained from combinations of the 1-D vectors shown in Figure 4.9. Each 1-D array is associated with an underlying microstructure and labeled with an acronym accordingly. The arrays are convolved with the other arrays in a combinatorial manner to generate a total of 25 masks, each typically labeled (e.g. L5E5) after the two arrays whose convolution produced it.

Level   L5 = [ 1  4  6  4  1 ]
Edge    E5 = [-1 -2  0  2  1 ]
Spot    S5 = [-1  0  2  0 -1 ]
Wave    W5 = [-1  2  0 -2  1 ]
Ripple  R5 = [ 1 -4  6 -4  1 ]

Figure 4.9 Five 1-D arrays identified by Laws

These masks are subsequently convolved with a texture field to

accentuate its microstructure giving an image from which the energy of the

microstructure arrays is measured together with other statistics. The

commonly used features are mean, standard deviation, skewness, kurtosis and

energy measurements. Since there are 25 different convolution masks and five statistics, altogether a total of 125 features is obtained.
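The 25 masks can be generated as outer products of the five 1-D vectors, which is equivalent to convolving a column vector with a row vector:

```python
import numpy as np

# The five 1-D Laws vectors from Figure 4.9.
VECTORS = {
    "L5": np.array([ 1,  4, 6,  4,  1]),   # Level
    "E5": np.array([-1, -2, 0,  2,  1]),   # Edge
    "S5": np.array([-1,  0, 2,  0, -1]),   # Spot
    "W5": np.array([-1,  2, 0, -2,  1]),   # Wave
    "R5": np.array([ 1, -4, 6, -4,  1]),   # Ripple
}

# Each 5x5 mask is the outer product of two vectors, e.g. L5E5 = L5^T E5.
masks = {a + b: np.outer(VECTORS[a], VECTORS[b]) for a in VECTORS for b in VECTORS}
print(len(masks))          # 25 masks in total
print(masks["L5E5"][0])    # first row of L5E5 is 1 * E5
```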

For all feature extraction methods, the most appropriate features are

selected for classification using a linear stepwise discriminant analysis.

Among the above mentioned techniques, researchers have suggested that the GLCM is one of the best feature extraction techniques. From the GLCM,


many useful textural properties can be calculated to expose details about the

image. However, the calculation of GLCM is very computationally intensive

and time consuming.

4.5 GRAY LEVEL CO-OCCURRENCE MATRIX

In 1973, Haralick introduced the co-occurrence matrix and texture

features which are the most popular second order statistical features today.

Haralick proposed two steps for texture feature extraction. The first step is computing the co-occurrence matrix, and the second step is calculating texture features based on the co-occurrence matrix. This technique is useful in a wide range of image analysis applications, from biomedical imaging to remote sensing.

4.5.1 Working of GLCM

The basic GLCM texture measure considers the relation between two neighbouring pixels at one offset as the second order texture. The gray value relationships in a target are transformed into the co-occurrence matrix space by a given kernel mask such as 3×3, 5×5, 7×7 and so forth. In the transformation from the image space into the co-occurrence matrix space, the neighbouring pixels in one or some of the eight defined directions can be used; normally, the four directions 0°, 45°, 90° and 135° are initially regarded, and their reverse (negative) directions can also be taken into account. The matrix contains information about the positions of the pixels having similar gray level values.

Each element (i, j) in the GLCM specifies the number of times that a

pixel with value i occurred horizontally adjacent to a pixel with value j. In

Figure 4.10, the computation is made in the following manner: element (1, 1) in

the GLCM contains the value 1 because there is only one instance in the

image where two horizontally adjacent pixels have the values 1 and 1.

Element (1, 2) in the GLCM contains the value 2 because there are two

instances in the image where two horizontally adjacent pixels have the values

1 and 2.

Figure 4.10 Creation of GLCM from image matrix

The GLCM has been extracted for the input dataset imagery. Once the

GLCM is computed, the texture features of the image are extracted

successively.
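The construction described above can be sketched in Python. The example image below is hypothetical (Figure 4.10 itself is not reproduced here), but it is chosen so that, as stated in the text, element (1, 1) of the horizontal GLCM equals 1 and element (1, 2) equals 2:

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Count co-occurrences of gray levels (i, j) separated by `offset`.
    offset=(0, 1) pairs each pixel with its right-hand neighbour."""
    m = np.zeros((levels, levels), dtype=int)
    dr, dc = offset
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                # Gray levels start at 1, matrix indices at 0.
                m[image[r, c] - 1, image[r2, c2] - 1] += 1
    return m

# Hypothetical 4x5 image with gray levels 1..8.
img = np.array([[1, 1, 5, 6, 8],
                [2, 3, 5, 7, 1],
                [4, 5, 7, 1, 2],
                [8, 5, 1, 2, 5]])
g = glcm(img, levels=8)
print(g[0, 0], g[0, 1])  # 1 2
```

Other offsets, e.g. `(1, 0)` for 90° or `(1, 1)` for 135°, implement the directional variants mentioned above.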

4.6 HARALICK TEXTURE FEATURES

Haralick extracted fourteen texture features from the GLCM of an

image. The important texture features for classifying the image into water

body and non-water body regions are Energy (E), Entropy (Ent), Contrast (Con),

Inverse Difference Moment (IDM) and Directional Moment (DM).


Andrea Baraldi and Flavio Parmiggiani (1995) discussed the five

statistical parameters energy, entropy, contrast, IDM and DM, which are

considered the most relevant among the 14 texture features originally

proposed by Haralick et al. (1973). Using these texture features also reduces

the complexity of the algorithm.

Let i and j be the indices of the co-occurrence matrix, M(i, j) be

the element in the co-occurrence matrix at the coordinates (i, j), and N be

the dimension of the co-occurrence matrix.

4.6.1 Energy

Energy (E) can be defined as a measure of the extent of pixel-pair

repetitions; it measures the uniformity of an image. When the pixels are very

similar, the energy value will be large. It is defined in Equation (4.12) as

E = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} M(i, j)^2    (4.12)

4.6.2 Entropy

This concept comes from thermodynamics. Entropy (Ent) is a

measure of randomness that is used to characterize the texture of the input

image. Its value will be maximum when all the elements of the co-occurrence

matrix are the same. It is defined in Equation (4.13) as

Ent = -\sum_{i=0}^{N-1} \sum_{j=0}^{N-1} M(i, j) \ln(M(i, j))    (4.13)

4.6.3 Contrast

The contrast (Con), defined in Equation (4.14), is a measure of the

intensity variation between a pixel and its neighbour over the image. In the visual perception


of the real world, contrast is determined by the difference in the colour and

brightness of an object and other objects within the same field of view.

Con = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} (i - j)^2 M(i, j)    (4.14)

4.6.4 Inverse Difference Moment

Inverse Difference Moment (IDM) is a measure of image texture,

defined in Equation (4.15). IDM, usually called homogeneity, measures the

local homogeneity of an image. The IDM feature measures the closeness of

the distribution of the GLCM elements to the GLCM diagonal. IDM has a range

of values that can be used to determine whether the image is textured or non-textured.

IDM = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} \frac{M(i, j)}{1 + (i - j)^2}    (4.15)

4.6.5 Directional Moment

Directional moment (DM), as the name signifies, is a textural

property of the image computed by considering the alignment of the image as

a measure in terms of the angle. It is defined in Equation (4.16) as

DM = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} M(i, j) \, |i - j|    (4.16)
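A minimal sketch of these five features in NumPy, following Equations (4.12) through (4.16), is shown below. One assumption is made that is not stated in the text: the matrix is normalised by its total count before the features are computed, so that the entries behave like joint probabilities (this keeps the entropy well scaled; the equations themselves are written for the matrix M):

```python
import numpy as np

def haralick_features(M):
    """Energy, entropy, contrast, IDM and DM of a co-occurrence matrix M.
    The matrix is normalised to sum to 1 (an assumption, see text)."""
    M = M / M.sum()
    N = M.shape[0]
    i, j = np.indices((N, N))
    nz = M > 0                                 # skip zero entries to avoid ln(0)
    energy   = np.sum(M ** 2)                  # Equation (4.12)
    entropy  = -np.sum(M[nz] * np.log(M[nz]))  # Equation (4.13)
    contrast = np.sum((i - j) ** 2 * M)        # Equation (4.14)
    idm      = np.sum(M / (1 + (i - j) ** 2))  # Equation (4.15)
    dm       = np.sum(M * np.abs(i - j))       # Equation (4.16)
    return energy, entropy, contrast, idm, dm

# Example on a small hypothetical co-occurrence matrix.
g = np.array([[1., 2.],
              [2., 3.]])
print(haralick_features(g))
```

As a sanity check, a purely diagonal matrix gives zero contrast and DM and an IDM of 1, matching the interpretation of IDM as closeness to the GLCM diagonal.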

Table 4.1 shows some of the texture features extracted using the

GLCM to classify an image into water-body and non-water-body regions.


Table 4.1 Texture features extracted using GLCM

Energy   Entropy   Contrast   IDM        DM
0.2398   6.8042    0.124      6.15E+04   237.2482
0.1949   7.0086    0.1904     6.01E+04   324.5662
0.3168   6.4868    0.2488     5.99E+04   389.9733
0.1524   6.7707    0.2025     5.95E+04   334.9738
0.7568   5.5702    0.12       6.28E+04   272.9614
0.1655   7.3033    0.1999     5.93E+04   286.0215
0.313    6.6852    0.1464     6.12E+04   270.9697
0.2236   6.9529    0.1739     6.10E+04   289.1159
0.5483   5.8905    0.1019     6.26E+04   229.464
0.5583   5.9409    0.1524     6.26E+04   253.8671
0.5143   6.1439    0.0794     6.31E+04   197.9385
0.2486   6.6115    0.1654     6.06E+04   261.9178
0.1608   6.88      0.1993     6.05E+04   302.5752
0.4855   5.9474    0.0953     6.28E+04   211.5594
0.1613   6.9496    0.1639     6.07E+04   283.433
0.2853   6.4627    0.2106     6.01E+04   339.5505
0.1477   7.0368    0.3293     5.73E+04   444.9242
0.316    5.9372    0.1803     6.03E+04   281.7755
0.3046   6.4706    0.1998     6.03E+04   339.1179
0.2796   6.4406    0.2019     6.02E+04   322.4984
0.573    6.0185    0.1416     6.21E+04   288.6223
0.1729   7.2134    0.1497     6.05E+04   234.1717
0.3145   6.804     0.1592     6.09E+04   274.1024
0.7637   5.3457    0.0753     6.35E+04   184.061
0.6113   5.8042    0.138      6.19E+04   264.5534
0.7586   5.3523    0.0594     6.37E+04   164.2614
0.3124   6.2919    0.1397     6.15E+04   281.9666
0.5817   6.0175    0.1585     6.14E+04   290.4872
0.1226   7.1201    0.2642     5.73E+04   337.0597
0.1993   7.2553    0.2249     5.89E+04   365.6291
0.7293   5.5209    0.1787     6.16E+04   279.578
0.5257   6.4206    0.091      6.30E+04   211.6174
0.3006   6.4985    0.1749     6.10E+04   318.4937
0.1576   6.8883    0.1738     6.04E+04   291.3021
0.1929   7.1205    0.2127     5.86E+04   294.5526
0.1727   7.1763    0.2284     5.80E+04   277.6838
0.8759   5.0943    0.0371     6.46E+04   137.2638
0.285    6.7064    0.1587     6.09E+04   300.2948
0.1382   7.4154    0.19       5.94E+04   287.7632
0.3316   6.7746    0.1325     6.16E+04   268.2331


4.7 APPLICATION OF TEXTURE

Texture analysis methods have been utilized in a variety of

application domains such as automated inspection, medical image processing,

document processing, remote sensing and content-based image retrieval.

4.7.1 Remote Sensing

Texture analysis has been extensively used to classify remotely

sensed images. An important application is land-use classification, where

homogeneous regions with different types of terrain (such as wheat fields,

bodies of water and urban regions) need to be identified.

4.7.2 Medical Image Analysis

Image analysis techniques have played an important role in several

medical applications. In general, these applications involve the automatic

extraction of features from the image, which are then used for a variety of

classification tasks, such as distinguishing normal tissue from abnormal tissue.

Depending upon the particular classification task, the extracted features

capture morphological properties, colour properties, or certain textural

properties of the image.

4.8 SUMMARY

This chapter detailed the gray level co-occurrence matrix based

feature extraction to obtain energy, entropy, contrast, inverse difference

moment and directional moment. These texture features serve as the

input for classifying the image accurately. Effective use of multiple image

features and the selection of a suitable classification method are especially

significant for improving classification accuracy. Chapter 5 discusses

classification techniques for improving accuracy along with their applications.