
GLCM-BASED METRIC FOR IMAGE FUSION ASSESSMENT

Zaid Omar and Tania Stathaki

Communications and Signal Processing Group, Imperial College London,

South Kensington, SW7 2AZ, United Kingdom

ABSTRACT

This paper introduces a novel metric for image fusion evaluation that is based on texture. Among the applications of image fusion are surveillance and remote sensing, where relevant information from multiple image sources is combined and preserved in a fused output. In these applications, the conservation of background textural details is especially important, as they help to define the image structure. This concept motivates our work, which aims to evaluate the performance of image fusion algorithms by their ability to retain textural details through the fusion process. We utilise the GLCM model to extract second-order statistical features for the derivation of an image textural measure, which is then used to replace the edge-based calculations in the Petrovic metric. Performance evaluation on established fusion methods verifies that the proposed metric is accurate, especially for multimodal scenarios.

Index Terms— Image fusion, texture measure, fusion metric.

1. INTRODUCTION

Since its advent in the nineties, image fusion for critical applications such as remote sensing, medical imaging and military surveillance has given rise to the development of several fusion metrics to assess performance [1]. Image fusion is the process of merging salient information from a multitude of source images to produce a fused output of higher quality, which can then be used for further processing or higher-level decision making. There are many advantages of image fusion: preservation of important content whilst reducing the amount of data, ensuring a higher quality output, and enabling users to visualise simultaneously the different sourced data at hand. In remote sensing, for instance, rather than designing an expensive image acquisition system, a viable fused image can be produced by signal processing techniques [2]. A host of image fusion algorithms have been introduced in the literature, most notably those based on multi-resolution analysis (MRA) such as the Wavelet transform (WT) [1], other signal decomposition methods like Empirical Mode Decomposition (EMD) [1], and blind processing such as Independent Component Analysis (ICA) [3].

Early fusion systems relied on visual inspection and perceptual evaluation to assess their performance [4]. In other words, a good image is ultimately determined by the user. Whilst humans still remain the best adjudicators of visual quality, their method of assessment is unfeasible for real-time systems that require fast processing of data. As such, signal processing methods are used to develop a fusion quality metric that processes images in real time and correlates well with human evaluation. In recent times there has been considerable interest in fusion metric schemes based on various measures. A notable lack of ‘ground truth’ images in fusion applications means that reference-based techniques such as signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR) and mean square error (MSE) are rendered unsuitable. These have instead been supplanted by methods that emphasise the relationship and influence of each input image towards the fused output. Qu et al. [5] demonstrated a metric that utilises mutual information and entropy. Petrovic and Xydeas [6] exploited edge information for fusion evaluation purposes. Further, Piella [7] developed a novel metric based on the Universal Index (UI) proposed by Wang and Bovik [8].

This research was made possible by funding from Universiti Teknologi Malaysia and the UDRC.

There has not yet been a standard approach to formulating a capable metric for image fusion. As it stands, a given metric is not necessarily competent for all image scenarios, because image fusion is a very application-specific field. A multimodal scene, for instance, may generate vastly different results from a multifocal scene despite using the same fusion algorithm. The task therefore is to accommodate the fusion metric to each fusion scenario. In general, a good fusion metric should satisfy the following:

(a) Has no prerequisites for ground truth. This is highly relevant as most image fusion applications, such as night vision and hyperspectral fusion, are performed without knowledge of the ground truth.

(b) Has good correlation with the human visual system (HVS) and perceptual evaluation. As the HVS is extremely sensitive to edges, a good metric should also reflect this sensitivity.

(c) Is quantitative and based on a pre-determined, bounded score. This helps to standardise the assessment of a fusion algorithm when applied to a wide range of image types.

(d) Has the ability to reward good results and penalise bad results, with a strong separation between the two.

(e) Focuses on a standardised score rather than flexibility or optimisation. As such, it requires minimal parametric input: the input values should consist only of the source images and the fused image.

(f) Is easy to calculate and fast to process.

It has been noted that in critical surveillance, military and remote sensing applications the detection and preservation of natural imagery is vital. This includes grass, rocks, bushes, pebbles, earth and other entities which make up the background scenery. While their textural structure may be less significant than the primary objects of interest, such as human presence, it is important to preserve these details to ensure quality for further processing. The presence of texture adds depth and structure, which allows us to form a better description of an image or scenario. An example is given in Figure 1.

Fig. 1: Example of texture: (a) visual image and (b) NIR image

The two images depict the same scene captured with different image acquisition methods. The visual camera features very rich texture, in addition to edges and other salient features. The near-infrared (NIR) image has almost no texture, in the sense that the main objects are displayed as plain and monotonous (intra-object intensity variation is almost zero), though edges are intact and, crucially, human presence is detected. The objective therefore is to fuse the two images so that the respective object textures are preserved but the human is also detected. A theoretically ideal fusion would extract only the human from the NIR image and ‘paste’ it onto the detailed background of the visual camera so as to preserve the image texture.

In this paper we introduce a novel method for assessing image fusion quality. The new metric is based on textural representations, which we believe possess an advantage over current metrics [9, 10]. It rewards fusion schemes that best preserve textural content, which may be crucial in the surveillance of natural terrains.

It is important here to distinguish texture from edges. Texture is defined as the variation of data at scales smaller than those of interest [11]. A popular portrayal is that texture is typically associated with ‘patterns’ while edges and boundaries are not (they are defined using local constraints and seldom by global re-occurrences). Edges and boundaries are typically formed by high-frequency content, while texture can be associated with both the higher and lower ends of the spectrum. This is true in most cases; however, in the surveillance and sensor applications which form the focus of our paper, the notion of texture differs somewhat. We are less concerned with texture classification based on shape or pattern than with determining the presence of texture as distinct from edges, i.e. texture segmentation rather than classification or recognition. This is appropriate especially as edges denote a boundary between two different layered objects, whereas texture records shade variation or ‘coarseness’ within the same object [12].

The remainder of the paper is organised as follows. Section 2 discusses incumbent fusion metrics, their strengths and limitations. Section 3 reviews the gray-level co-occurrence matrix and its textural features. Section 4 derives the proposed texture measure, and Section 5 incorporates it into a fusion metric. Section 6 evaluates the performance of our method in comparison with current fusion metrics on various image examples, and conclusions are given in Section 7.

2. IMAGE FUSION METRIC

The need for a fusion metric has been widely acknowledged. It enables an efficient and impartial comparison between different fusion methods. Moreover, it allows users to optimise fusion algorithms for the best performance according to their particular needs. Theoretically, evaluating the performance of fusion is akin to measuring the degree of image degradation or enhancement, which is referred to as a degradation model [13]. A modified image, considered an output, is directly measured with a specified metric against the original image or ‘ground truth’. As such, common image quality assessment (IQA) methods such as [8, 27] can be used. The obvious difference is that fusion involves comparing an output image with multiple input images, rather than the one-to-one comparison normally used in IQA schemes. An essential issue of fusion assessment therefore is to measure the amount of contribution (or activity level measurement [14]) from each input. In such circumstances, measures like SNR and standard deviation are considered impractical. A good fusion metric should be able to correctly estimate the respective contributions regardless of scenery, modality or type.

In [7] Piella extended the UI to fused images by including a weighted average of the input images to produce a fusion quality index Q(a, b, f). Weights are calculated based on saliency, which may be defined as contrast, sharpness or entropy. Cvejic et al. [15] further defined saliency as the covariance between input and output images. The choice of covariance as a saliency measure is suitable as it concurs with the fourth objective, (d), above.

A measure for image fusion was proposed in [5] in which mutual information (MI) is utilised as the basis. Ranjith and Ramesh [16] expanded on the concept of MI by introducing the Fusion Factor (FF) and Fusion Symmetry (FS) to quantify image quality. MI measures the statistical dependence of two random variables, defined by the Kullback-Leibler distance. However, the measure makes no assumptions regarding the relation between the input modalities [5]. Further, MI treats an image as a global entity and attributes to it a single score, without taking into account individual pixel intensities and regional structures. Another drawback of MI is its unbounded score, which may contribute to MI's inconsistent performance in recent tests [17].

Another popular fusion measure is the Petrovic metric [6], which employs the Sobel operator to measure the edge details transferred between input and output images. Compared to MI, the Petrovic metric has a normalised score and processes each pixel individually. The exact weights and constants used in this method can be adaptively modified so as to increase its robustness. In [18] the metric was further developed to incorporate segmentation to identify important regions for fusion.

Other metrics found in the fusion literature include [19], based on Quantitative Correlation Analysis (QCA), which can be applied to scenarios involving a large number of source images, such as hyperspectral fusion. An improved version was proposed in [20] that analyses the general (including non-linear) relationship between source and output images.

We note from the above schemes a lack of depth in the study of texture features as the basis for a fusion metric. The closest is possibly the Petrovic metric, which calculates the edge information transferred between input and output images. This is insufficient, however, as a specific measurement of ‘texturedness’, rather than simply edges, is desired.

Hence in this paper we exploit the textural content within images to develop a novel metric for image fusion assessment. The metric combines textural quality (thus rewarding source images with rich texture) with the information degradation model, which measures the perceptual loss of information from the input images to the fused output. The approach is based on the GLCM, whereby grayscale transitions of neighbouring pixels and edge details are stored. Overall we believe the metric would be feasible for use in surveillance operations that involve a multitude of sensors and require the fusion process.

Fig. 2: GLCM

3. GRAY-LEVEL CO-OCCURRENCE MATRIX

The gray-level co-occurrence matrix (GLCM) has proven to be a very effective tool for texture analysis [21, 22]. It records the frequency with which a pixel intensity appears in a specified spatial linear relationship with another pixel intensity within a region. It is generally based on two parameters: distance d and orientation φ. d refers to the spatial distance between a reference pixel and its neighbour, whereas φ is quantised in four directions, namely 0° (horizontal), 45° (diagonal), 90° (vertical) and 135° (inverted diagonal), with their respective symmetrical counterparts. In practice the average of all four orientations is normally used.

For an image I of size [X, Y], let m represent the intensity value of pixel (x, y) and n the intensity of pixel (x ± dφ1, y ± dφ2). L is the total number of gray levels in I, with 0 ≤ m, n ≤ L − 1, 0 ≤ x ≤ X − 1 and 0 ≤ y ≤ Y − 1. The GLCM C_{m,n} is defined as

C_{m,n} = \sum_{x=0}^{X-1} \sum_{y=0}^{Y-1} P\{ I(x, y) = m \;\&\; I(x \pm d\varphi_1,\, y \pm d\varphi_2) = n \} \qquad (1)

where P{·} = 1 if the argument is true and 0 otherwise. In other words, Equation (1) counts the total number of occurrences of the pixel pair (m, n) over the relative distance d and in the specified direction φ. For instance, consider the pixel block and its GLCM equivalent in Figure 2, with d = 1, φ = 0°, image size [4, 4] and L = 4. C_{m,n}(1, 2) = 1, which means the pixel pair 0-1 occurs once in I. Due to the symmetry property, C_{m,n}(2, 1) = 1 also.
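For concreteness, the following minimal Python sketch (ours, not part of the original paper) builds a symmetric GLCM for a single distance and orientation in the sense of Equation (1); the function name and the use of NumPy are illustrative choices.

```python
import numpy as np

def glcm(image, d=1, phi_deg=0, levels=4):
    """Count co-occurrences of gray-level pairs (m, n) at distance d and
    orientation phi_deg, as in Equation (1); the symmetric counterpart of
    each pair is also counted. `image` must hold integers in [0, levels)."""
    # Row/column offsets for the four quantised orientations.
    offsets = {0: (0, d), 45: (-d, d), 90: (-d, 0), 135: (-d, -d)}
    dy, dx = offsets[phi_deg]
    C = np.zeros((levels, levels), dtype=np.int64)
    Y, X = image.shape
    for y in range(Y):
        for x in range(X):
            yn, xn = y + dy, x + dx
            if 0 <= yn < Y and 0 <= xn < X:
                m, n = image[y, x], image[yn, xn]
                C[m, n] += 1
                C[n, m] += 1  # enforce symmetry, as in the Figure 2 example
    return C

# The 4x4, 4-level example of Figure 2 would correspond to
# glcm(block, d=1, phi_deg=0, levels=4), where `block` is that pixel block.
```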

Fig. 3: Texture definition in GLCM

The concept of the GLCM was first proposed by Haralick et al. in [23]. That paper describes fourteen features related to texture, of which six are the most relevant: energy, entropy, contrast, variance, correlation and inverse difference moment. Each feature analyses the textural representation of the image; for example, contrast measures the intensity contrast and rewards high variations (strong edges) between neighbouring pixels. A detailed description of the main features can be found in [24]. The advantage of the GLCM lies not only in the multitude of available textural features but also in its flexibility in allowing new features to be derived, as was done in [25].
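As a brief illustration of how such features are read off the matrix, the sketch below (our own, following the standard Haralick definitions rather than any code from [23] or [24]) computes two of the listed features from a GLCM:

```python
import numpy as np

def haralick_contrast_energy(C):
    """Contrast and energy of a GLCM: contrast rewards large gray-level
    differences between neighbouring pixels, energy rewards uniform texture."""
    P = C / C.sum()                      # normalise counts to probabilities
    m, n = np.indices(P.shape)
    contrast = np.sum(((m - n) ** 2) * P)
    energy = np.sum(P ** 2)
    return contrast, energy
```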

4. TEXTURE ENHANCEMENT FEATURE FOR GLCM

As the GLCM is symmetrical along its diagonal, we will only consider the upper triangle. The main diagonal of the GLCM corresponds to pixel pairs of the same value (i.e. no variation). The adjacent diagonal consists of pixel pairs differing by one intensity level (e.g. 127 and 128), the following diagonal of pairs differing by two levels, and so on until the last diagonal, where the difference is maximal (e.g. 0 and 255). The first diagonal is attributed to regions of uniform texture; the edge magnitude increases gradually from one diagonal to the next up to the top-right element, which corresponds to very strong edges.

Our premise is based on studies of IQA using a three-component image model [26], which segments the different levels of edge strength into strong edges, texture and weak edges, and plain regions. The thresholds for each category are given as T1 = 0.06Gmax and T2 = 0.12Gmax, where Gmax is the maximum gradient or edge strength [27]. As such,

1. Plain region: 0 ≤ G ≤ 0.06Gmax

2. Textural region: 0.06Gmax ≤ G ≤ 0.12Gmax

3. Edge region: 0.12Gmax ≤ G

This classification is incorporated into the GLCM as shown in Figure 3. We are only interested in information contained within the scope of the textural region, denoted in gray. With this in mind, a texture-based image quality measure is derived. A vector S contains the sum of the elements of each diagonal. Assuming a pixel range of p = [0, 255], our texture measure T is defined as

T = \frac{\sum_{m=0}^{255-15} \sum_{n=m+15}^{255} C(m,n) \;-\; \sum_{m=0}^{255-31} \sum_{n=m+31}^{255} C(m,n)}{\sum_{m=0}^{255} \sum_{n=m}^{255} C(m,n)} \qquad (2)

T expresses the ratio of an image's textural content to its overall saliency: the triangular region of the GLCM bounded by p = 15 minus the triangular region bounded by p = 31, normalised by the whole upper triangle. For 8-bit images these offsets correspond to the thresholds above, since 0.06 × 255 ≈ 15 and 0.12 × 255 ≈ 31.
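A direct transcription of Equation (2) into code, assuming a 256-level GLCM and that only the upper triangle is of interest; the function and its defaults are our own sketch:

```python
import numpy as np

def texture_measure(C, t1=15, t2=31):
    """Texture measure T of Equation (2): GLCM mass lying in the band
    between the diagonals offset by t1 and t2, divided by the mass of
    the whole upper triangle (main diagonal included)."""
    upper = np.triu(C, k=0).sum()       # all pairs with n >= m
    beyond_t1 = np.triu(C, k=t1).sum()  # pairs with gray-level difference >= t1
    beyond_t2 = np.triu(C, k=t2).sum()  # pairs with gray-level difference >= t2
    return (beyond_t1 - beyond_t2) / upper
```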


5. TEXTURAL-BASED FUSION METRIC

We aim to develop a performance assessment of image fusion methods that is based on the textural measure. The Petrovic metric [6] is used as the template, with its gradient strength g and orientation σ replaced by T.

The relative textural strength of an input image A with respect to the fused image F is obtained as

T^{AF} = \begin{cases} T_A / T_F, & \text{if } T_A \le T_F \\ T_F / T_A, & \text{otherwise} \end{cases} \qquad (3)

This relative measure is used to generate the image's texture preservation value, which relates explicitly to perceptual information in humans [6],

Q^{AF} = \frac{\Gamma}{1 + e^{-\kappa (T^{AF} - \sigma)}} \qquad (4)

where Γ, κ and σ are constants that determine the exact shape of the sigmoid function used to form Q^{AF}. The texture preservation value models the information transfer from A to F and the perceptual loss sustained; it represents how well the textural details are preserved in the fused image. Hence the weighted fusion metric Q^{AB/F} can be written as

Q^{AB/F} = \frac{Q^{AF} w_A + Q^{BF} w_B}{w_A + w_B} \qquad (5)

The weights w_A and w_B enable us to prioritise and reward the input image with stronger textural features. Theoretically, the preservation values associated with high textural content should influence the metric more than those with relatively low textural content. We define w = T^λ, where λ is a constant.
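Putting Equations (3)-(5) together, a minimal sketch of the proposed metric, assuming the texture measures T_A, T_B and T_F have already been computed (for instance with the earlier GLCM sketches); the function names and keyword arguments are ours, and the default constants anticipate the values reported in Section 6:

```python
import numpy as np

def texture_preservation(t_in, t_f, gamma=0.9994, kappa=15.0, sigma=0.5):
    """Equations (3) and (4): relative texture strength passed through a
    sigmoid to give the texture preservation value Q^{AF}."""
    t_rel = t_in / t_f if t_in <= t_f else t_f / t_in        # Equation (3)
    return gamma / (1.0 + np.exp(-kappa * (t_rel - sigma)))  # Equation (4)

def fusion_metric(t_a, t_b, t_f, lam=1.5, **sigmoid_params):
    """Equation (5): preservation values weighted by the textural content
    of each source image, with weights w = T^lambda."""
    q_af = texture_preservation(t_a, t_f, **sigmoid_params)
    q_bf = texture_preservation(t_b, t_f, **sigmoid_params)
    w_a, w_b = t_a ** lam, t_b ** lam
    return (q_af * w_a + q_bf * w_b) / (w_a + w_b)
```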

6. EXPERIMENT AND RESULTS

We tested the algorithm alongside other state-of-the-art fusion metrics on a number of image fusion examples, all of which were designed to mimic real-world applications of surveillance, remote sensing and photography. As with the UN Camp example, we aim to preserve textural details through the fusion process; the metric therefore yields a high score for fusion methods that incur minimal loss of textural information.

Two fusion methods were compared for their performance: independent component analysis (ICA) [3] and Chebyshev polynomials (CP) [28]. ICA is a popular signal decomposition technique that maximises the independence or non-Gaussianity of multiple mixed variables. It performs well in image fusion due to its intrinsic ability to differentiate signal components and correctly fuse them. CP, on the other hand, was developed mainly to address denoising in fusion applications. Its smoothing property and low-pass filtering enable enhanced performance under noisy conditions, though at the cost of lower signal accuracy. This drawback tends to be more evident in non-noisy scenarios, where edges are indiscriminately smoothed out.

The GLCM representations of the input and output images were derived using d = 3, the average over φ and L = 255. The constants take the following values [6]: Γ = 0.9994, κ = 15, σ = 0.5 and λ = 1.5 for the proposed and Petrovic metrics. The Piella and Qu metrics do not require external parameters. The results are shown in Table 1.
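As a purely illustrative end-to-end example of this setting, assuming the earlier sketch functions are in scope, using 256 gray levels for 8-bit data (the paper writes L = 255) and omitting orientation averaging for brevity:

```python
import numpy as np

# Illustrative only: random 8-bit test images stand in for the real sources.
rng = np.random.default_rng(0)
image_a, image_b, fused = (rng.integers(0, 256, size=(64, 64)) for _ in range(3))

# d = 3 as in the experiments; a single orientation is used here.
t_a, t_b, t_f = (texture_measure(glcm(im, d=3, phi_deg=0, levels=256))
                 for im in (image_a, image_b, fused))
score = fusion_metric(t_a, t_b, t_f, lam=1.5,
                      gamma=0.9994, kappa=15.0, sigma=0.5)
print(score)
```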

The table displays the scores for the ICA and CP images according to the different metrics. Overall, ICA clearly displays superior performance over CP in terms of texture quality retention, and a close observation of Figure 4 confirms this.

Table 1: Texture metric score for ICA and CP results

Method            Image     ICA      CP
Proposed metric   UN Camp   0.9874   0.4866
                  Gun       0.8117   0.2962
                  Clock     0.9273   0.9647
                  Tank      0.9878   0.8609
Petrovic          UN Camp   0.4991   0.3240
                  Gun       0.5548   0.1172
                  Clock     0.6599   0.4024
                  Tank      0.6284   0.4040
Piella            UN Camp   0.7477   0.4418
                  Gun       0.7650   0.4129
                  Clock     0.9225   0.6879
                  Tank      0.9447   0.8530
Qu                UN Camp   1.1087   1.4965
                  Gun       0.8814   1.7132
                  Clock     2.0762   2.1071
                  Tank      1.6703   1.3840

For the UN Camp images, ICA was able to retain almost all texture and scene densities, whereas CP loses most of its details and is visually poor. This gap in quality is well represented by the scores of our proposed scheme. While also favouring ICA's result, the other metrics do not show as strong a separation between the two results. A similar observation can be made for the Gun images. These two scenarios are examples of surveillance systems that combine features from multiple modalities or sensor cameras. In such cases, localised detection of the target object whilst preserving the natural features and texture is desired of a fusion scheme, and our method has shown itself to be favourable towards that end.

The multifocal examples of the Clock and Tank images are also considered. These consist of near-identical input images, which are fused to generate a more widely focused image and overcome blurring effects. ICA was shown to be very effective in this regard, whereas the CP results are still blurry due to its filtering property. ICA's quality was appropriately reflected in our proposed metric, which awarded it a higher score than the other schemes did. However, the proposed metric also conveyed relatively high scores for CP. This may be due to the lack of natural scenery, unlike the surveillance examples, with plain or monotonous regions forming the majority of the image space. Because little texture is present, the advantage of the textural metric becomes slightly redundant.

7. CONCLUSION AND FUTURE WORK

The formulation of a novel image fusion assessment via textural preservation has been addressed in this paper. The GLCM, a second-order statistical method, was used to extract inter-pixel relationships and to measure the degree of edges and texture within an image. From this, a measure of ‘texturedness’ was developed, which in turn facilitated the derivation of a new fusion metric based on the Petrovic measure. The proposed metric was assessed on several images in comparison with other state-of-the-art fusion metrics. The results confirm that our proposed method is viable and meaningful.

It is worth noting that the majority of image fusion algorithms are primarily aimed at transferring edge details from the input images into the fused output. This is matched by the numerous fusion assessment metrics that are heavily biased towards measuring edge features. A possible direction of our research may therefore be to develop a textural-based image fusion algorithm to match the proposed metric.

Fig. 4: Source images and fusion results: a) UN Visual, b) UN NIR, c) UN Camp ICA, d) UN Camp CP, e) Gun Visual, f) Gun MMW, g) Gun ICA, h) Gun CP, i) Clock background focus, j) Clock foreground focus, k) Clock ICA, l) Clock CP, m) Tank field focus, n) Tank focus, o) Tank ICA and p) Tank CP

8. REFERENCES

[1] T. Stathaki, “Image Fusion: Algorithms and Applications”, Academic Press, 2008.

[2] Y. Zhang, “Understanding image fusion”, Photogrammetric Engineering & Remote Sensing, pp. 657-661, 2004.

[3] N. Mitianoudis and T. Stathaki, “Pixel-based and region-based image fusion schemes using ICA bases”, Information Fusion, Vol. 8, No. 2, pp. 131-142, 2007.

[4] A. Toet, N. Schoumans and J.K. IJspeert, “Perceptual evaluation of different nighttime imaging modalities”, Proceedings of the 3rd International Conference on Information Fusion, Vol. 3, pp. TuD3.17-TuD3.23, 2000.

[5] G. Qu, D. Zhang and P. Yan, “Information measure for performance of image fusion”, Electronics Letters, Vol. 38, No. 7, pp. 313-315, 2002.

[6] C.S. Xydeas and V. Petrovic, “Objective image fusion performance measure”, Electronics Letters, Vol. 36, No. 4, pp. 308-309, 2000.

[7] G. Piella and H. Heijmans, “A new quality metric for image fusion”, Proceedings of the International Conference on Image Processing, Vol. 3, pp. 173-176, 2003.

[8] Z. Wang and A.C. Bovik, “A universal image quality index”, IEEE Signal Processing Letters, Vol. 9, No. 3, pp. 81-84, March 2002.

[9] T. Scheermesser and O. Bryngdahl, “Texture metric of halftone images”, J. Opt. Soc. Am., Vol. 13, No. 1, pp. 18-24, 1996.

[10] Y. Rubner and C. Tomasi, “Texture metrics”, Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, pp. 4601-4607, 1998.

[11] M. Petrou and P.D. Sevilla, “Image Processing: Dealing with Texture”, Wiley, 2006.

[12] M. Tuceryan and A.K. Jain, “Texture analysis”, The Handbook of Pattern Recognition and Computer Vision (2nd Edition), World Scientific Publishing Co., pp. 207-248, 1998.

[13] N. Damera-Venkata, T.D. Kite, W.S. Geisler, B.L. Evans and A.C. Bovik, “Image quality assessment based on a degradation model”, IEEE Transactions on Image Processing, Vol. 9, No. 4, pp. 636-650, 2000.

[14] R.S. Blum, Z. Xue and Z. Zhang, “An overview of image fusion”, in R.S. Blum and Z. Liu (Eds.), Multi-Sensor Image Fusion and Its Applications, CRC Press, pp. 1-35, 2006.

[15] N. Cvejic, A. Loza, D. Bull and N. Canagarajah, “A novel metric for performance evaluation of image fusion algorithms”, Proceedings of World Academy of Science, Engineering and Technology, Vol. 7, pp. 80-85, 2005.

[16] C. Ramesh and T. Ranjith, “Fusion performance measures and a lifting wavelet transform based algorithm for image fusion”, International Conference on Information Fusion, pp. 317-320, 2002.

[17] Y. Chen and R.S. Blum, “A new automated quality assessment algorithm for night vision image fusion”, Conference on Information Sciences and Systems, pp. 518-523, 2007.

[18] N. Cvejic, D.R. Bull and N. Canagarajah, “Metric for multimodal image sensor fusion”, Electronics Letters, Vol. 43, No. 2, 2007.

[19] Q. Wang, Y. Shen, Y. Zhang and J.Q. Zhang, “A quantitative method for evaluating the performances of hyperspectral image fusion”, IEEE Transactions on Instrumentation and Measurement, Vol. 52, No. 4, pp. 1041-1047, 2003.

[20] Q. Wang and Y. Shen, “Performances evaluation of image fusion techniques based on nonlinear correlation measurement”, in Proc. of the IEEE Instrumentation and Measurement Technology Conference, Vol. 1, pp. 472-475, 2004.

[21] D. Gadkari, “Image quality analysis using GLCM”, MSc Thesis, University of Central Florida, 2004.

[22] Y. Hu, C.X. Zhao and H.N. Wang, “Directional analysis of texture images using gray level co-occurrence matrix”, IEEE Pacific-Asia Workshop on Computational Intelligence and Industrial Application, pp. 277-281, 2008.

[23] R.M. Haralick, K. Shanmugam and I. Dinstein, “Textural features for image classification”, IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-3, No. 6, pp. 610-621, 1973.

[24] M. Hall-Beyer, http://www.fp.ucalgary.ca/mhallbey/the_glcm.htm, Accessed on 9/12/11.

[25] M.M. Mokji and S.A.R. Abu Bakar, “Adaptive thresholding based on co-occurrence matrix edge information”, Proceedings of the First Asia International Conference on Modelling & Simulation, 2007.

[26] X. Ran and N. Farvardin, “A perceptually motivated three-component image model - Part I: Description of the model”, IEEE Transactions on Image Processing, Vol. 4, No. 4, pp. 401-414, 1995.

[27] C. Li and A.C. Bovik, “Content-partitioned structural similarity index for image quality assessment”, Signal Processing: Image Communication, Vol. 25, pp. 517-526, 2010.

[28] Z. Omar, N. Mitianoudis and T. Stathaki, “Two-dimensional Chebyshev polynomials for image fusion”, 28th Picture Coding Symposium, Nagoya, December 2010.
