
Contents lists available at ScienceDirect

Signal Processing: Image Communication 29 (2014) 400–409

journal homepage: www.elsevier.com/locate/image

MRI and CT image indexing and retrieval using local mesh peak valley edge patterns

Subrahmanyam Murala, Q.M. Jonathan Wu *

Department of Electrical and Computer Engineering, University of Windsor, Windsor, ON, Canada

Article info

Article history:
Received 17 June 2013
Received in revised form 9 September 2013
Accepted 7 December 2013
Available online 18 December 2013

Keywords: Medical imaging; Image retrieval; Patterns; Texture; Local binary patterns (LBP)

0923-5965/$ - see front matter © 2014 Elsevier B.V. All rights reserved.
http://dx.doi.org/10.1016/j.image.2013.12.002

* Corresponding author. Tel.: +1 519 253 3000x2580.
E-mail addresses: [email protected], [email protected] (S. Murala), jwu@uwindsor.ca (Q.M. Jonathan Wu).

Abstract

In this paper, a new pattern based feature, the local mesh peak valley edge pattern (LMePVEP), is proposed for biomedical image indexing and retrieval. The standard LBP extracts the gray-scale relationship between the center pixel and its surrounding neighbors in an image, whereas the proposed method extracts the gray-scale relationship among the neighbors for a given center pixel. The relations among the neighbors are peak/valley edges, obtained by performing the first-order derivative. The performance of the proposed method (LMePVEP) is tested by conducting two experiments on two benchmark biomedical databases: the OASIS magnetic resonance imaging (MRI) database, and the VIA/I-ELCAP-CT database of region-of-interest computed tomography (CT) images. The results show a significant improvement in terms of average retrieval precision (ARP) and average retrieval rate (ARR) as compared to LBP and LBP-variant features.

© 2014 Elsevier B.V. All rights reserved.

1. Introduction

The importance of medical images for patient diagnosis in hospitals is increasing day by day. This image data exists in different formats such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasound (US), and X-ray. However, one cannot make use of this data unless it is organized to allow efficient access, search and retrieval. To address this problem, content based biomedical image retrieval came into existence. Content based image retrieval (CBIR) utilizes visual contents of an image such as color, texture, shape, faces, and spatial layout to represent and index the image database. Previously available image retrieval systems are presented in [1–6].

Feature extraction in CBIR is a prominent step whose effectiveness depends upon the method adopted for extracting features from given images. Visual content descriptors are either global or local. A global descriptor represents the visual features of the whole image, whereas a local descriptor represents the visual features of regions or objects to describe the image. These are arranged as multi-dimensional feature vectors and construct the feature database. For similarity distance measurement many methods have been developed, such as Euclidean distance, L1 distance, etc. The selection of feature descriptors and similarity distance measures affects the retrieval performance of an image retrieval system significantly.

In the early years, intensity histogram based features were used for biomedical image retrieval [7]. However, their retrieval performance is usually limited, especially on large databases, due to the lack of discrimination power of such descriptors. To address this problem, texture based features were proposed for biomedical image retrieval. Cai et al. [8] used a physiological kinetic feature which reduces the image storage memory for positron emission tomography (PET) image retrieval. Scott and Shyu designed a biomedical media retrieval system [9], where they utilize


the entropy balanced statistical (EBS) k-d tree for feature extraction. The index utilizes statistical properties inherent in large-scale biomedical media databases for efficient and accurate searches. Rahman et al. [10] designed a relevance feedback based biomedical image retrieval system. They proposed a query-specific adaptive linear combination of similarity matching approach, relying on image classification and feedback information from users. Nakayama et al. [11] investigated four objective similarity measures as an image retrieval tool for selecting lesions similar to unknown lesions on mammograms. Classification of benign and malignant breast masses based on shape and texture features in sonography images is proposed in [12]. The mass regions were extracted from the region of interest (ROI) sub-image by implementing a hybrid segmentation approach based on level set algorithms. In [13] a boosting framework for visuality-preserving distance metric learning is proposed for medical image retrieval. Mammographic images and a dataset from ImageCLEF are used for performance evaluation. Quellec et al. [14] proposed an optimized wavelet transform for medical image retrieval by adapting the wavelet basis within the lifting scheme framework for wavelet decomposition. Weights are assigned between wavelet sub-bands. They used diabetic retinopathy and mammographic databases for medical image retrieval. Wavelet transform based brain image retrieval is presented in [15]. Co-occurrence matrix based retrieval of medical CT and MRI images of different tissues can be seen in [16]. Further, image retrieval of different body parts is proposed in [17], which employs color quantization and the wavelet transform.

However, features such as the k-d tree [9], co-occurrence matrix [16], etc., are computationally expensive. To address this computational complexity, the local binary pattern (LBP) [18] was proposed. The LBP operator was introduced by Ojala et al. [18] for texture classification. Success in terms of speed (no need to tune any parameters) and performance has been reported in many research areas such as texture classification

Fig. 1. Calculation of LBP and LTP operators.

[18–22], face recognition [23–25], object tracking [26,27], image retrieval [28–36] and interest point detection [37]. Peng et al. proposed texture feature extraction based on a uniformity estimation method for brightness and structure in chest CT images [32]. They used the extended rotational invariant LBP and the gradient orientation difference to represent brightness and structure in the image. Unay et al. proposed local structure-based region-of-interest retrieval in brain MR images [33]. Quantitative analysis of pulmonary emphysema using LBP is presented in [34]. They improved the quantitative measures of emphysema in CT images of the lungs using joint LBP and intensity histograms. Li and Meng proposed automatic tumor recognition for wireless capsule endoscopy (WCE) images [38]. A candidate color texture feature that integrates uniform LBP and wavelets is proposed to characterize WCE images. Further, the detection of bleeding regions in capsule endoscopy images using LBP is presented in [39]. A facial paralysis video retrieval system using LBP is proposed in [40]. The symmetry of facial movements is measured by the resistor-average distance (RAD) between LBP features extracted from the two sides of the face. A support vector machine is applied to provide quantitative evaluation of facial paralysis.

The main contributions of the proposed method are as follows. The proposed method collects the relationship among the neighbors for a given center pixel, whereas the LBP extracts the relationship between the center pixel and its neighbors in an image. The collected relations among the neighbors are peak or valley edges, which are obtained by first-order derivatives among the neighbors. The performance of the proposed method is tested by conducting two experiments on two different biomedical databases.

The organization of the paper is as follows: In Section 1, a brief review of biomedical image retrieval and related work is given. Section 2 presents a concise review of local patterns. Section 3 presents the proposed system framework and the query matching. Experimental results and discussions are presented in Section 4 and lastly, in Section 5, we conclude with a summary of the work.


Fig. 2. The LBP and the first three LMeP calculations for a given (P, R).


2. Review of local patterns

2.1. Local binary patterns (LBPs)

The LBP operator was introduced by Ojala et al. [18] for texture classification. Given a center pixel in the image, the LBP value is computed by comparing its gray value with those of its neighbors, as shown in Fig. 1, based on the following equation:

$$LBP_{P,R} = \sum_{i=1}^{P} 2^{(i-1)} \times f_1\left(I(g_i) - I(g_c)\right) \qquad (1)$$

$$f_1(x) = \begin{cases} 1 & x \ge 0 \\ 0 & \text{else} \end{cases} \qquad (2)$$

where $I(g_c)$ is the gray value of the center pixel, $I(g_i)$ is the gray value of its neighbors, P is the number of neighbors and R is the radius of the neighborhood.
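As an illustration, the LBP computation of Eqs. (1)–(2) can be sketched in Python. This is a minimal sketch assuming P = 8, R = 1; the function name, the neighbor ordering (the paper does not fix a start angle) and the sample array are ours, not the paper's:

```python
import numpy as np

def lbp_8_1(img, r, c):
    """LBP code of the pixel at (r, c) for P = 8, R = 1 (Eqs. (1)-(2)).

    Bit i is set when f1(I(g_i) - I(g_c)) = 1, i.e. the neighbor is not
    darker than the center; the start angle/ordering is illustrative.
    """
    gc = int(img[r, c])
    # the 8 neighbors at radius 1, counter-clockwise from the right
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
               (0, -1), (1, -1), (1, 0), (1, 1)]
    code = 0
    for i, (dr, dc) in enumerate(offsets):
        if int(img[r + dr, c + dc]) - gc >= 0:   # f1(x) = 1 when x >= 0
            code |= 1 << i                       # weight 2^(i-1) in 1-based terms
    return code

img = np.array([[5, 9, 1],
                [4, 6, 7],
                [2, 8, 3]], dtype=np.uint8)
print(lbp_8_1(img, 1, 1))   # one 8-bit code in [0, 255]
```

Collecting such codes over every interior pixel and histogramming them yields the LBP feature of the whole image.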

2.2. Local ternary patterns (LTPs)

Tan and Triggs [25] extended the LBP to a three-valued code called LTP, in which gray values in the zone of width ±th around $I(g_c)$ are quantized to zero, those above $(I(g_c)+th)$ are quantized to +1 and those below $(I(g_c)-th)$ are quantized to −1; i.e., the indicator $f_1(x)$ is replaced with a three-valued function and the binary LBP code is replaced by a ternary LTP code as shown in Fig. 1:

$$\tilde{f}_1(x, I(g_c), th) = \begin{cases} 1 & x \ge I(g_c) + th \\ 0 & |x - I(g_c)| < th \\ -1 & x \le I(g_c) - th \end{cases}, \quad x = I(g_p) \qquad (3)$$

More details about LTP can be found in [25].
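A minimal Python sketch of the three-valued coding of Eq. (3), together with the standard split of the ternary code into "upper" (+1) and "lower" (−1) binary patterns from [25]; the helper names and sample values here are ours:

```python
def ltp_code(x, gc, th):
    """Three-valued indicator of Eq. (3): +1 above gc+th, -1 below gc-th, else 0."""
    if x >= gc + th:
        return 1
    if x <= gc - th:
        return -1
    return 0

def ltp_patterns(neighbors, gc, th):
    """Split the ternary codes of the neighbors into two binary LTP patterns."""
    codes = [ltp_code(x, gc, th) for x in neighbors]
    upper = sum((c == 1) << i for i, c in enumerate(codes))   # +1 positions
    lower = sum((c == -1) << i for i, c in enumerate(codes))  # -1 positions
    return upper, lower

print(ltp_patterns([10, 5, 7, 12], gc=7, th=2))
```

The two binary codes are then histogrammed separately, which is why the LTP feature vector in Table 5 is twice the length of LBP's.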

2.3. Local derivative patterns (LDPs)

Zhang et al. proposed local derivative patterns (LDP) for face recognition [24]. They considered LBP as the non-directional first-order local pattern operator and extended it to higher orders (nth order), called LDP. LDP contains more detailed discriminative features as compared to LBP.

To calculate the nth-order LDP, the (n−1)th-order derivatives are calculated along the 0°, 45°, 90° and 135° directions, denoted as $I_{\alpha}^{(n-1)}(g_c)\big|_{\alpha = 0^{\circ}, 45^{\circ}, 90^{\circ}, 135^{\circ}}$. Finally, the nth-order LDP is calculated as follows:

$$LDP_{\alpha}^{n}(g_c) = \sum_{p=1}^{P} 2^{(p-1)} \times f_2\left(I_{\alpha}^{(n-1)}(g_c),\; I_{\alpha}^{(n-1)}(g_p)\right)\Big|_{P=8} \qquad (4)$$

$$f_2(x, y) = \begin{cases} 1 & \text{if } x \cdot y \le 0 \\ 0 & \text{else} \end{cases} \qquad (5)$$

The detailed discussion about LDP is available in [24].
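The second-order LDP of Eqs. (4)–(5) along one direction can be sketched as follows; the concrete 0° first-order derivative I'(g) = I(g) − I(g to the right), the neighbor ordering and the sample array are illustrative choices of ours, not prescribed by the paper:

```python
import numpy as np

def ldp2_0deg(img, r, c):
    """Second-order LDP at (r, c) along the 0-degree direction (Eqs. (4)-(5)).

    d holds the first-order derivative I'(g) = I(g) - I(g_right); bit p is set
    when f2 = 1, i.e. the derivative signs at center and neighbor differ.
    """
    d = img[:, :-1].astype(int) - img[:, 1:].astype(int)
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
               (0, -1), (1, -1), (1, 0), (1, 1)]
    code = 0
    for p, (dr, dc) in enumerate(offsets):
        if d[r, c] * d[r + dr, c + dc] <= 0:   # f2(x, y) = 1 when x*y <= 0
            code |= 1 << p
    return code

img = np.array([[1, 2, 3, 4],
                [4, 3, 2, 1],
                [1, 3, 2, 4],
                [2, 2, 2, 2]])
print(ldp2_0deg(img, 1, 1))
```

Repeating this for the 45°, 90° and 135° derivatives gives the four directional codes, which explains the 4×256 feature length reported for LDP in Table 5.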

2.4. Local mesh peak valley edge patterns (LMePVEPs)

The ideas of the LBP [18] and peak valley edge patterns (PVEP) [36] have motivated us to propose the LMePVEP for biomedical image retrieval. The mesh patterns (LMeP) are computed based on the relationship among the surrounding neighbors for a given center pixel in an image. Fig. 2 illustrates the possible mesh patterns with P neighbors at distance R (radius) from a given center pixel.

The local pattern is coded as a peak pattern when the two directions approach the center and as a valley pattern when the two directions leave the center, as shown in Fig. 3.

For a given center pixel in an image, the LMePVEP value is computed based on the relationship among the surrounding neighbors using forward and backward first-order derivatives (see Fig. 4). The forward and backward first-order derivatives among the neighbors for a given center pixel are calculated as follows.

The forward first-order derivative among P neighbors for a given center pixel $I(g_c)$:

$$\overrightarrow{I}^{\,j}_{P,R}(g_c, g_i) = I(g_{\alpha_1}) - I(g_i), \quad i = 1, 2, \ldots, P$$

$$\alpha_1 = 1 + \mathrm{mod}\left((i + P + j - 1),\, P\right),$$

Fig. 3. The bit calculation of peak and valley patterns for j = 1.

Fig. 4. Example for LMePVEP calculation for given center pixel in an image.

Fig. 5. Feature maps of LBP, LMePEP and LMeVEP on sample image from OASIS-MRI database.


Fig. 6. Proposed retrieval system framework.

Fig. 7. (a) Sample image with white background, (b) image with salt and pepper noise of density 0.01, (c) image with salt and pepper noise of density 0.5 and (d) image with salt and pepper noise of density 1.

Table 1. MRI data acquisition details [41].

Sequence: MP-RAGE
TR (ms): 9.7
TE (ms): 4.0
Flip angle (deg.): 10
TI (ms): 20
TD (ms): 200
Orientation: Sagittal
Thickness, gap (mm): 1.25, 0
Resolution (pixels): 176×208


$$\forall\, j = 1, 2, \ldots, (P/2) \qquad (6)$$

where j is the distance for the first-order derivative.

The backward first-order derivative among P neighbors for a given center pixel $I(g_c)$:

$$\overleftarrow{I}^{\,j}_{P,R}(g_c, g_i) = \begin{cases} I(g_{(P+i-j)}) - I(g_i) & \text{if } j \ge i \\ I(g_{(i-j)}) - I(g_i) & \text{else} \end{cases} \qquad (7)$$

The LMePVEP is defined as follows:

$$LMePVEP^{\,j}_{P,R} = \begin{bmatrix} f_3\left(\overrightarrow{I}^{\,j}_{P,R}(g_c, g_1),\; \overleftarrow{I}^{\,j}_{P,R}(g_c, g_1)\right); \\ f_3\left(\overrightarrow{I}^{\,j}_{P,R}(g_c, g_2),\; \overleftarrow{I}^{\,j}_{P,R}(g_c, g_2)\right); \\ \vdots \\ f_3\left(\overrightarrow{I}^{\,j}_{P,R}(g_c, g_P),\; \overleftarrow{I}^{\,j}_{P,R}(g_c, g_P)\right) \end{bmatrix} \qquad (8)$$

$$f_3(x, y) = \begin{cases} 1 & \text{if } x > 0 \text{ and } y > 0 \\ 2 & \text{if } x < 0 \text{ and } y < 0 \\ 0 & \text{else} \end{cases} \qquad (9)$$

LMePVEP is a ternary pattern (0, 1, 2) which is further converted into two binary patterns, i.e., the local mesh peak edge pattern (LMePEP) and the local mesh valley edge pattern (LMeVEP). The detailed representation of these two patterns is shown in Fig. 4.

Eventually, the given image is converted to LMePEP and LMeVEP images having values ranging from 0 to 255.
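Putting Eqs. (6)–(9) together, the ternary LMePVEP for one center pixel and its split into LMePEP/LMeVEP can be sketched in Python. This is a minimal sketch under our own assumptions: the function name and sample values are illustrative, and the P neighbor gray values are taken as already sampled on the circle of radius R:

```python
def lmepvep(neighbors, j=1):
    """Ternary LMePVEP of Eqs. (6)-(9) for one center pixel, returned as the
    two binary codes LMePEP (f3 = 1, peaks) and LMeVEP (f3 = 2, valleys).

    `neighbors` holds I(g_1)..I(g_P); the center gray value itself does not
    enter the code -- only the mesh relations among the neighbors do.
    """
    P = len(neighbors)
    peak = valley = 0
    for i in range(1, P + 1):                        # 1-based, as in the paper
        a1 = 1 + (i + P + j - 1) % P                 # forward index, Eq. (6)
        fwd = neighbors[a1 - 1] - neighbors[i - 1]
        b = (P + i - j) if j >= i else (i - j)       # backward index, Eq. (7)
        bwd = neighbors[b - 1] - neighbors[i - 1]
        if fwd > 0 and bwd > 0:                      # f3 = 1: peak edge
            peak |= 1 << (i - 1)
        elif fwd < 0 and bwd < 0:                    # f3 = 2: valley edge
            valley |= 1 << (i - 1)
    return peak, valley

print(lmepvep([3, 9, 2, 7, 1, 8, 4, 6]))   # (LMePEP, LMeVEP), each in [0, 255]
```

Histogramming the two codes over the whole image, per Eq. (10), and concatenating the histograms then yields the feature vector.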

After calculation of the patterns, PTN (LBP, LTP, LDP, LMePEP and LMeVEP), the whole image is represented by building a histogram using the following equation:

$$H_{hist}(l) = \sum_{i=1}^{N_1} \sum_{k=1}^{N_2} f_4\left(PTN(i, k),\, l\right); \quad l \in [0, 255] \qquad (10)$$

$$f_4(x, y) = \begin{cases} 1 & x = y \\ 0 & \text{else} \end{cases} \qquad (11)$$

where $N_1 \times N_2$ is the size of the image. Fig. 4 illustrates an example of the LMePVEP calculation and its segregation into LMePEP and LMeVEP for a given center pixel in an image.

Fig. 5 illustrates the evaluation of different feature maps on a sample image selected from the OASIS-MRI database. The sample image is chosen as it provides results that are visibly comprehensible for differentiating the effectiveness of these approaches. From Fig. 5, it is observed that the LMePVEP yields more edge information as compared to LBP. The experimental results demonstrate that the proposed LMePVEP shows better performance as compared to LBP, indicating that it can capture more edge information than LBP for texture extraction.

3. Feature extraction

3.1. Proposed system framework

Fig. 6 shows the flow chart of the proposed image retrieval system; the algorithm for the same is given below:

Algorithm. Input: Image; Output: Retrieval result

1. Load the image.
2. Perform the forward and backward first-order derivatives among the neighbors for a given center pixel.
3. Calculate the ternary LMePVEP.
4. Separate the ternary LMePVEP into binary LMePEP and LMeVEP.
5. Construct the histogram.
6. Construct the feature vector by concatenating histograms.
7. Compare the query image with the images in the database using Eq. (15).
8. Retrieve the images based on the best matches.

Fig. 9. (a) Comparison of proposed method with other existing methods as a function of number of top matches and (b) group-wise comparison of proposed method with other existing methods on OASIS-MRI database.

Table 2. Performance of the LMePVEP with various distance metrics in terms of ARP (%) on OASIS-MRI database.

Method    L1     Euclidean  Canberra  d1
LMePVEP   48.92  47.26      49.20     50.83

3.2. Query matching

The feature vector for a query image Q is represented as $f_Q = (f_{Q_1}, f_{Q_2}, \ldots, f_{Q_{Lg}})$, obtained after feature extraction. Similarly, each image in the database is represented with a feature vector $f_{DB_i} = (f_{DB_{i1}}, f_{DB_{i2}}, \ldots, f_{DB_{iLg}}),\ \forall\, i = 1, 2, \ldots, |DB|$. The goal is to select the n best images that resemble the query image. This involves selecting the n top-matched images by measuring the distance between the query image and the images in the database |DB|.

In this paper, four types of similarity distance metricsare used and these are shown below.

L1 or Manhattan distance measure:

$$D(Q, DB) = \sum_{i=1}^{Lg} \left| f_{DB_{ji}} - f_{Q,i} \right| \qquad (12)$$

Euclidean distance measure:

$$D(Q, DB) = \left( \sum_{i=1}^{Lg} \left( f_{DB_{ji}} - f_{Q,i} \right)^2 \right)^{1/2} \qquad (13)$$

Fig. 8. Sample images from OASIS database (one image per category).

Canberra distance measure:

$$D(Q, DB) = \sum_{i=1}^{Lg} \frac{\left| f_{DB_{ji}} - f_{Q,i} \right|}{\left| f_{DB_{ji}} \right| + \left| f_{Q,i} \right|} \qquad (14)$$


Fig. 10. Query results of LMePVEP on OASIS-MRI database.

Table 3. Data acquisition details of the VIA/I-ELCAP-CT lung image database.

Data   No. of slices  Resolution  In-plane resolution  Slice thickness (mm)  Tube voltage (kV)
W1-10  100            512×512     0.76×0.76            1.25                  120


d1 distance measure:

$$D(Q, DB) = \sum_{i=1}^{Lg} \left| \frac{f_{DB_{ji}} - f_{Q,i}}{1 + f_{DB_{ji}} + f_{Q,i}} \right| \qquad (15)$$

where $f_{DB_{ji}}$ is the ith feature of the jth image in the database |DB|.
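The four distance measures of Eqs. (12)–(15) translate directly into Python. A minimal sketch with toy feature vectors of ours; a guard is added for Canberra's zero-denominator case, which the paper's formula leaves implicit:

```python
def dist_l1(q, f):
    """L1 / Manhattan distance, Eq. (12)."""
    return sum(abs(a - b) for a, b in zip(f, q))

def dist_euclidean(q, f):
    """Euclidean distance, Eq. (13)."""
    return sum((a - b) ** 2 for a, b in zip(f, q)) ** 0.5

def dist_canberra(q, f):
    """Canberra distance, Eq. (14); terms with a zero denominator are skipped."""
    return sum(abs(a - b) / (abs(a) + abs(b))
               for a, b in zip(f, q) if abs(a) + abs(b) > 0)

def dist_d1(q, f):
    """d1 distance, Eq. (15)."""
    return sum(abs((a - b) / (1 + a + b)) for a, b in zip(f, q))

q, f = [1, 2, 3], [2, 4, 6]
print(dist_l1(q, f), dist_canberra(q, f))
```

For histogram features the entries are non-negative, so the d1 denominator $1 + f_{DB_{ji}} + f_{Q,i}$ never vanishes; Tables 2 and 4 report d1 as the best performer of the four.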

Fig. 7 illustrates sample images with different levels of salt and pepper noise. We analyze the performance of the proposed method and LBP on Fig. 7(b)–(d). For the analysis, we calculated the absolute mean of differences and the variance of differences. The absolute mean of differences and variance of differences with LBP (LMePVEP) in Fig. 7(b)–(d) are 0.0115 (0.0093), 0.3822 (0.3181) and 0.5020 (0.4188), respectively. From these observations, it is clear that the proposed method is more robust to noise conditions as compared to the LBP.

3.3. Advantages of the LMePVEP over other patterns

The advantages of the LMePVEP over the LBP, LTP and LTCoP are as follows:

(1) The existing LBP, LTP and LTCoP collect the relationship between the center pixel and its surrounding neighbors, whereas LMePVEP collects the relationship among the neighbors for a given center pixel. Hence, the proposed LMePVEP is more robust to noise conditions (see Fig. 7).

(2) The PVEP encodes the spatial relation between each pair of neighbors in a local region of P neighbors for a given center pixel, while LBP extracts the relation between the center pixel and its neighbors. Therefore, PVEP captures more spatial information as compared to LBP, as already proved in [36].

4. Experimental results and discussions

In order to analyze the performance of our algorithm for biomedical image retrieval, two experiments were conducted on two different medical databases. The results obtained are discussed in the following subsections.

In all experiments, each image in the database is usedas the query image. For each query, the system collects n

Fig. 12. Comparison of the LMePVEP with other existing methods in terms of: (a) ARP and (b) ARR on VIA/I-ELCAP-CT database.


database images $X = (x_1, x_2, \ldots, x_n)$ with the shortest image matching distances, given by Eq. (12). If $x_i$, $i = 1, 2, \ldots, n$ belong to the same category as the query image, we say the system has correctly matched the desired images.

The average retrieval precision (ARP) and average retrieval rate (ARR) judge the performance of the proposed method; they are calculated by Eqs. (16)–(19).

For the query image $I_q$, the precision (P) and recall (R) are defined as follows:

$$\text{precision: } P(I_q) = \frac{\text{number of relevant images retrieved}}{\text{total number of images retrieved}} \qquad (16)$$

$$ARP = \frac{1}{|DB|} \sum_{i=1}^{|DB|} P(I_i) \Bigg|_{n \le 10} \qquad (17)$$

$$\text{recall: } R(I_q) = \frac{\text{number of relevant images retrieved}}{\text{total number of relevant images in the database}} \qquad (18)$$

$$ARR = \frac{1}{|DB|} \sum_{i=1}^{|DB|} R(I_i) \Bigg|_{n \ge 10} \qquad (19)$$

4.1. Experiment #1

The Open Access Series of Imaging Studies (OASIS) [41] is a series of magnetic resonance imaging (MRI) datasets that is publicly available for study and analysis. This dataset consists of a cross-sectional collection of 421 subjects aged 18 to 96 years. The MRI acquisition details are given in Table 1.

For image retrieval purposes we grouped these 421 images into four categories (124, 102, 89, and 106 images) based on the shape of the ventricles in the images. Fig. 8 depicts sample images of the OASIS database (one image from each category).

Fig. 9(a) shows graphs depicting the retrieval performance of the proposed method and other existing methods as a function of the number of top matches. Fig. 9(b) illustrates the category-wise performance of various methods in terms of ARP on the OASIS-MRI image database. From Fig. 9, it is evident that the proposed method outperforms the other existing methods in terms of ARP on the OASIS-MRI database. Table 2 illustrates the performance of the proposed method with various distance metrics in terms of ARP on the OASIS-MRI database. From Table 2, it is clear that the d1 distance metric shows better performance as compared to the other distance metrics in terms of ARP on the OASIS-MRI database. Fig. 10 illustrates query results of the proposed method considering the ten top matches.

Fig. 11. Sample images from VIA/I-ELCAP-CT image database.

4.2. Experiment #2

The vision and image analysis (VIA) group and the international early lung cancer action program (I-ELCAP) created a


Fig. 13. Group-wise performance of LMePVEP and other existing methods in terms of: (a) ARP and (b) ARR on VIA/I-ELCAP-CT database.

Table 4. Performance of the LMePVEP with various distance metrics in terms of ARP and ARR on VIA/I-ELCAP-CT database.

Method    Performance  L1     Euclidean  Canberra  d1
LMePVEP   ARP (%)      85.25  81.25      86.29     87.67
          ARR (%)      55.36  54.26      56.12     57.00

Table 5. Feature vector length of the query image using various methods.

Method    Feature vector length
LBP       256
LTP       2×256
LDP       4×256
LTCoP     2×256
PVEP      4×512
LMePVEP   4×256


computed tomography (CT) dataset [42] for the performance evaluation of different computer-aided detection systems. These images are in DICOM (digital imaging and communications in medicine) format. The CT scans were obtained in a single breath hold with a 1.25 mm slice thickness. The locations of nodules detected by the radiologist are also provided. The CT scan data acquisition details are given in Table 3. For the experiments we selected 10 scans. Each scan has 100 images with resolution 512×512. Further, ROIs were annotated manually to construct the ROI CT image database. Fig. 11 depicts sample images of the VIA/I-ELCAP database (one image from each category).

Fig. 12 illustrates the retrieval performance of the proposed method (LMePVEP) and other existing methods (LBP, LTP, LDP, PVEP and LTCoP) in terms of ARP and ARR. Fig. 13 illustrates the individual group performances of the LMePVEP and other existing methods in terms of ARP and ARR. From Figs. 12 and 13, it is clear that the LMePVEP outperforms other existing methods in terms of ARP and ARR on the VIA/I-ELCAP-CT database. Table 4 illustrates the performance of the proposed method with various distance metrics in terms of ARP and ARR on the VIA/I-ELCAP-CT database. From Table 4, it is clear that the d1 distance metric shows better performance as compared to the other distance metrics in terms of ARP and ARR on the VIA/I-ELCAP-CT database.

4.3. Feature vector length vs. performance

Table 5 shows the feature vector length for a given query image using LBP, LTP, LDP, LTCoP, PVEP and LMePVEP. The experimentation is carried out in MATLAB 7.6. From Table 5, it is clear that the feature vector length of LMePVEP is four times that of LBP, yet it outperforms the LBP and other existing methods in terms of ARP and ARR on two different biomedical databases.

5. Conclusions

A novel pattern based image indexing and retrieval algorithm, local mesh peak valley edge patterns (LMePVEP), is proposed in this paper. The LMePVEP extracts the relationship among the neighbors for a given center pixel in an image using forward and backward first-order derivatives. The effectiveness of the proposed algorithm is tested on two benchmark databases. The results show a significant improvement in terms of ARP and ARR as compared to LBP and LBP-variant features on the OASIS-MRI and VIA/I-ELCAP-CT databases.

Acknowledgment

This work was supported by the Canada Research Chair program and the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant.

References

[1] Giuliano Pasqualotto, Pietro Zanuttigh, Guido M. Cortelazzo, Combining color and shape descriptors for 3D model retrieval, Signal Process. Image Commun. 28 (2013) 608–623.

[2] Michalis Lazaridis, Apostolos Axenopoulos, Dimitrios Rafailidis, Petros Daras, Multimedia search and retrieval using multimodal annotation propagation and indexing techniques, Signal Process. Image Commun. 28 (2013) 351–367.

[3] An Vo, Soontorn Oraintara, A study of relative phase in complex wavelet domain: property, statistics and applications in texture image retrieval and segmentation, Signal Process. Image Commun. 25 (2010) 28–46.

[4] Gianluca Francini, Skjalg Lepsøy, Massimo Balestri, Selection of local features for visual search, Signal Process. Image Commun. 28 (2013) 311–322.


[5] Sihyoung Lee, Wesley De Neve, Yong Man Ro, Tag refinement in animage folksonomy using visual similarity and tag co-occurrencestatistics, Signal Process. Image Commun. 25 (2010) 761–773.

[6] Yue Gao, Qionghai Dai, Meng Wang, Naiyao Zhang, 3D modelretrieval using weighted bipartite graph matching, Signal Process.Image Commun. 26 (2011) 39–47.

[7] K.N. Manjunath, A. Renuka, U.C. Niranjan, Linear models of cumu-lative distribution function for content-based medical image retrie-val, J. Med. Syst. 31 (2007) 433–443.

[8] Weidong Cai, David Dagan Feng, Roger Fulton, Content-basedretrieval of dynamic PET functional images, IEEE Trans. Inf. Technol.Biomed. 4 (2) (2000) 152–158.

[9] Grant Scott Chi-Ren Shyu, Knowledge-driven multidimensionalindexing structure for biomedical media database retrieval, IEEETrans. Inf. Technol. Biomed. 11 (3) (2007) 320–331.

[10] Md Mahmudur Rahman, Sameer K. Antani, George R. Thoma, Alearning-based similarity fusion and filtering approach for biome-dical image retrieval using SVM classification and relevance feed-back, IEEE Trans. Inf. Technol. Biomed. 15 (4) (2011) 640–646.

[11] R. Nakayama, H. Abe, J. Shiraishi, K. Doil, Evaluation of objectivesimilarity measures for selecting similar images of mammographiclesions, J. Digital Imaging 24 (1) (2011) 75–85.

[12] Fahimeh Sadat Zakeri, Hamid Behnam, Nasrin Ahmadinejad, Classi-fication of benign and malignant breast masses based on shape andtexture features in sonography images, J. Med. Syst. 36 (3) (2012)1621–1627.

[13] Liu Yang, Jin Rong, Lily Mummert, Rahul Sukthankar, Adam Goode,Bin Zheng, Steven C.H. Hoi, Mahadev Satyanarayanan, A boostingframework for visuality-preserving distance metric learning and itsapplication to medical image retrieval, IEEE Trans. Pattern Anal.Mach. Intell. 32 (1) (2010) 33–44.

[14] G. Quellec, M. Lamard, G. Cazuguel, B. Cochener, C. Roux, Waveletoptimization for content-based image retrieval in medical data-bases, J. Med. Image Anal. 14 (2010) 227–241.

[15] A. Traina, C. Castanon, C. Traina Jr., Multiwavemed: a system formedical image retrieval through wavelets transformations, in:Proceedings of the 16th IEEE Symposium on Computer-BasedMedical Systems, New York, USA, 2003, pp. 150–155.

[16] J.C. Felipe, A.J.M. Traina, C. Traina Jr., Retrieval by content of medicalimages using texture for tissue identification, in: Proceedings of the16th IEEE Symposium on Computer-Based Medical Systems, NewYork, USA, 2003, pp. 175–180.

[17] H. Muller, A. Rosset, J.-P. Vall´et, A. Geisbuhler, Comparing featuresets for content-based image retrieval in a medical case database, in:Proceedings of the SPIE Medical Imaging, PACS Image Information,San Diego, USA, 2004, pp. 99–109.

[18] T. Ojala, M. Pietikainen, D. Harwood, A comparative study of texture measures with classification based on feature distributions, Pattern Recognition 29 (1) (1996) 51–59.

[19] T. Ojala, M. Pietikainen, T. Maenpaa, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Trans. Pattern Anal. Mach. Intell. 24 (7) (2002) 971–987.

[20] Z. Guo, L. Zhang, D. Zhang, Rotation invariant texture classification using LBP variance with global matching, Pattern Recognition 43 (2010) 706–716.

[21] S. Liao, Max W.K. Law, Albert C.S. Chung, Dominant local binary patterns for texture classification, IEEE Trans. Image Process. 18 (5) (2009) 1107–1118.

[22] Zhenhua Guo, Lei Zhang, David Zhang, A completed modeling of local binary pattern operator for texture classification, IEEE Trans. Image Process. 19 (6) (2010) 1657–1663.

[23] T. Ahonen, A. Hadid, M. Pietikainen, Face description with local binary patterns: applications to face recognition, IEEE Trans. Pattern Anal. Mach. Intell. 28 (12) (2006) 2037–2041.

[24] B. Zhang, Y. Gao, S. Zhao, J. Liu, Local derivative pattern versus local binary pattern: face recognition with higher-order local pattern descriptor, IEEE Trans. Image Process. 19 (2) (2010) 533–544.

[25] X. Tan, B. Triggs, Enhanced local texture feature sets for face recognition under difficult lighting conditions, IEEE Trans. Image Process. 19 (6) (2010) 1635–1650.

[26] J. Ning, L. Zhang, D. Zhang, W. Chengke, Robust object tracking using joint color-texture histogram, Int. J. Pattern Recognition Artif. Intell. 23 (7) (2009) 1245–1263.

[27] Murala Subrahmanyam, R.P. Maheshwari, R. Balasubramanian, Local maximum edge binary patterns: a new descriptor for image retrieval and object tracking, Signal Process. 92 (2012) 1467–1479.

[28] Subrahmanyam Murala, R.P. Maheshwari, R. Balasubramanian, Directional local extrema patterns: a new descriptor for content-based image retrieval, Int. J. Multimedia Inf. Retr. 1 (3) (2012) 191–203.

[29] Subrahmanyam Murala, R.P. Maheshwari, R. Balasubramanian, Directional binary wavelet patterns for biomedical image indexing and retrieval, J. Med. Syst. 36 (5) (2012) 2865–2879.

[30] Valtteri Takala, Timo Ahonen, Matti Pietikainen, Block-based methods for image retrieval using local binary patterns, in: Proceedings of SCIA 2005, LNCS, vol. 3450, 2005, pp. 882–891.

[31] Cheng-Hao Yao, Shu-Yuan Chen, Retrieval of translated, rotated and scaled color textures, Pattern Recognition 36 (2003) 913–929.

[32] S. Peng, D. Kim, S. Lee, M. Lim, Texture feature extraction on uniformity estimation for local brightness and structure in chest CT images, Comput. Biol. Med. 40 (2010) 931–942.

[33] Devrim Unay, Ahmet Ekin, Radu S. Jasinschi, Local structure-based region-of-interest retrieval in brain MR images, IEEE Trans. Inf. Technol. Biomed. 14 (4) (2010) 897–903.

[34] Lauge Sørensen, Saher B. Shaker, Marleen de Bruijne, Quantitative analysis of pulmonary emphysema using local binary patterns, IEEE Trans. Med. Imaging 29 (2) (2010) 559–569.

[35] Subrahmanyam Murala, Q.M. Jonathan Wu, Local ternary co-occurrence patterns: a new feature descriptor for MRI and CT image retrieval, Neurocomputing, 2013, http://dx.doi.org/10.1016/j.neucom.2013.03.018.

[36] Subrahmanyam Murala, Q.M. Jonathan Wu, Peak valley edge patterns: a new descriptor for biomedical image indexing and retrieval, in: Big Data Computer Vision 2013, Portland, Oregon, USA, 2013.

[37] Marko Heikkila, Matti Pietikainen, Cordelia Schmid, Description of interest regions with local binary patterns, Pattern Recognition 42 (2009) 425–436.

[38] Baopu Li, Max Q.-H. Meng, Tumor recognition in wireless capsule endoscopy images using textural features and SVM-based feature selection, IEEE Trans. Inf. Technol. Biomed. 16 (3) (2012) 323–329.

[39] Baopu Li, Max Q.-H. Meng, Computer-aided detection of bleeding regions for capsule endoscopy images, IEEE Trans. Biomed. Eng. 56 (4) (2009) 1032–1039.

[40] Shu He, John J. Soraghan, Brian F. O'Reilly, Dongshan Xing, Quantitative analysis of facial paralysis using local binary patterns in biomedical videos, IEEE Trans. Biomed. Eng. 56 (7) (2009) 1864–1870.

[41] D.S. Marcus, T.H. Wang, J. Parker, J.G. Csernansky, J.C. Morris, R.L. Buckner, Open access series of imaging studies (OASIS): cross-sectional MRI data in young, middle aged, nondemented, and demented older adults, J. Cognit. Neurosci. 19 (9) (2007) 1498–1507.

[42] VIA/I-ELCAP CT Lung Image Dataset, Available [online] ⟨http://www.via.cornell.edu/databases/lungdb.html⟩.