
Optics and Lasers in Engineering 50 (2012) 655–667

Contents lists available at SciVerse ScienceDirect

Optics and Lasers in Engineering

0143-8166/$ - see front matter © 2011 Elsevier Ltd. All rights reserved.

doi:10.1016/j.optlaseng.2011.11.012

* Corresponding author. Tel.: +91 175 3046320, mobile: +91 9872043209; fax: +91 175 3046313.

E-mail addresses: [email protected] (C. Singh), [email protected] (Pooja).

1 Mobile: +91 9888947360.

journal homepage: www.elsevier.com/locate/optlaseng

Local and global features based image retrieval system using orthogonal radial moments

Chandan Singh*, Pooja 1

Department of Computer Science, Punjabi University, Patiala 147002, Punjab, India

Article info

Article history:

Received 25 August 2011

Received in revised form 19 October 2011

Accepted 18 November 2011

Available online 12 December 2011

Keywords:

Zernike moments

Pseudo Zernike moments

Orthogonal Fourier Mellin moments

Precision and recall


Abstract

Orthogonal radial moments such as Zernike moments (ZMs), pseudo Zernike moments (PZMs), and orthogonal Fourier Mellin moments (OFMMs) have been studied extensively in the literature. In conventional methods of moment computation, the entire image intensity function is projected upon orthogonal moment polynomials to compute the global features. In this paper, we provide a novel methodology for the computation of these moments which formulates them as effective local descriptors. The moment functions of edge images are computed to determine the local change in images, which provides the local aspects of the image. The obtained local and global moment features are then combined, and their performance is evaluated on an image retrieval system. ZMs, PZMs, and OFMMs are compared in terms of their image retrieval effectiveness. The experiments are performed on various databases to examine the performance of the system in diverse circumstances, such as images affected by noise, partial occlusion, distortion, and complex structure. The experimental results reveal that the proposed system outperforms recent existing approaches to moment computation for image retrieval.

© 2011 Elsevier Ltd. All rights reserved.

1. Introduction

Images have always been considered an effective medium for presenting visual data in many applications of industry and academia. With developments in technology, a large number of images is being generated every day. Therefore, managing and indexing images becomes essential in order to retrieve similar images effectively. In conventional systems, images are generally indexed with textual annotations. However, as the database grows larger, the use of keyword based methods to retrieve a particular image becomes inefficient. Besides, skilled manual labor is required to annotate every single image with appropriate keywords, which becomes a time-consuming and tedious task. Moreover, the images retrieved by a keyword based system often do not meet human perception, since keywords are not capable of representing image contents. To overcome these limitations, various content based image retrieval (CBIR) systems have been proposed in the literature [1–4]. In this paper, we deal with CBIR to analyze techniques that retrieve images based on their visual attributes, such as color [5], texture [6], and shape [7,8]. Among them, the low-level shape attribute provides a persuasive notion to


object individuality, which meets human perception. Nevertheless, image retrieval based on the shape attribute remains a difficult task. Therefore, the requirement arises for a robust feature extractor, which must be capable of representing an object as a point in a finite dimensional space, i.e., a feature space. By feature space, we mean that different views (rotation, scale, and translation) of the object shape correspond to the same point. The representation of an image as a feature space has several advantages. First, if the feature space is chosen cautiously, we can obtain features which are insensitive to some image degradations, such as noise and occlusion. Second, we obtain a reduction of dimensionality without losing the original salient information of the object under consideration.

There are several methods to obtain a feature space from the shape of an object. Shape can be represented by two types of descriptors: contour based descriptors and region based descriptors. Contour based descriptors are associated with the outline or boundary of the shape, and represent local characteristics of an image. These include Fourier descriptors [9], curvature scale space [10], histograms of centroid distances [11], chain codes [12], elastic matching [13], wavelet-Fourier descriptors [14], etc. However, contour based descriptors extract features only from the boundary of the shape and neglect essential information contained in the interior region of the shape. On the other side, region based descriptors represent the global aspects of the shape and provide features extracted from the whole image. Region based descriptors include feature matching using moment functions. Describing images with


moments instead of other, more commonly used image features means that global properties of the image are used rather than local properties. The first significant work considering moments for pattern recognition was performed by Hu [15], who derived a set of seven moment invariants using non-linear combinations of geometric moments. These invariants remain the same under image translation, rotation, and scaling. The main problem with moment invariants is that only a few invariants can be derived from the lower order moments, which are insufficient to represent an image accurately. The lack of orthogonality in geometric moments corresponds to high correlation among moments, which leads to redundant information about the image.

To address these issues, other moment functions came into existence, which are complex in nature. The magnitudes of complex moments are rotation invariant, whereas their phase coefficients change with image rotation, and they satisfy the orthogonality principle. By orthogonality we mean the decomposition of an object into uncorrelated components to simplify its analysis. In image description, various orthogonal radial moments have been proposed for representing and describing images: Zernike moments (ZMs) [16], pseudo Zernike moments (PZMs) [17], orthogonal Fourier Mellin moments (OFMMs) [18], radial harmonic Fourier moments [19], and Chebyshev–Fourier moments [20]. The set of orthogonal ZMs was introduced by Teague [16]; they are less sensitive to noise and have superior image representation capability. ZMs possess the rotation invariance property in the continuous domain. They are widely used in pattern recognition applications [21], image reconstruction [22], image segmentation [23], edge detection [24], watermarking [25], face recognition [26], content based image retrieval [27], palm print verification [28], etc. PZMs, introduced by Bhatia and Wolf [17], are another class of circularly orthogonal moments, which possess properties similar to ZMs. Teh and Chin [29] observed that PZMs are more robust to noise; however, they are more computation intensive. OFMMs, introduced by Sheng and Shen [18], are another set of rotation invariant moments. OFMMs perform better than ZMs and PZMs in terms of noise sensitivity for small images. Other radial moments, such as radial harmonic Fourier moments [19] and Chebyshev–Fourier moments [20], have received relatively less attention due to their weaker image representation capability as compared to ZMs, PZMs, and OFMMs. Among these moments, ZMs, PZMs, and OFMMs are utilized most frequently in the literature for image shape analysis. Therefore, in our approach, we consider these three orthogonal radial moments and compare them in terms of image retrieval. However, in all these approaches, the magnitudes of moments are used by considering the image as a whole, which makes them global shape descriptors. In some recent methods, such as complex Zernike moments (CZM) [30], the adjacent phase based descriptor [31], and retrieval based on optimal similarity [32], the phase coefficients along with the magnitudes of ZMs are also considered for similarity matching between query and database images. However, the phase coefficients of ZMs, PZMs, and OFMMs are not rotation invariant, so additional, computation intensive effort is required to make them invariant to rotation. Moreover, the finer details describing the local change in an image, which are captured by the contour based descriptors, are not extracted.

In this paper, we propose a novel approach that makes radial moments capable of extracting global as well as local magnitude features from images. Since global and local features are complementary to each other, the main concept is to combine both, extracted using moments, to improve the effectiveness of the image retrieval system. The local features are extracted by considering the moments of the edge image, thereby altering the computation of moments from region point mapping to contour point mapping, as described in a later section of this paper. Hence, in the proposed system, we combine both local and global moment feature spaces to improve the effectiveness of the system. The superiority of the proposed system is analyzed by considering various large databases representing partially occluded, rotated, scaled, noise affected, and subject change images, etc. The results reveal that the proposed approach outperforms the traditional approach of orthogonal radial moments. While comparing ZMs, PZMs, and OFMMs, it is observed that the performance of ZMs is superior to that of PZMs and OFMMs. Therefore, we also compare the proposed ZMs with other state of the art ZMs based approaches to image retrieval [30–32]. The major contributions of this paper include the following:

1. To propose a novel shape descriptor, which represents local features of images, extracted using ZMs, PZMs, and OFMMs.

2. To combine the global and local features to improve the retrieval rate of the system, which makes ZMs, PZMs, and OFMMs local as well as global shape descriptors.

3. To evaluate the system performance on various sorts of databases for its robustness and scalability.

4. To compare the performance of ZMs, PZMs, and OFMMs in terms of image retrieval accuracy based on the proposed approach.

The rest of the paper is organized as follows: Section 2 provides the description of the orthogonal radial moments ZMs, PZMs, and OFMMs. In Section 3, the computational framework for the computation of moments is given. Section 4 introduces the proposed local descriptor based on moments. In Section 5, the similarity matching classifier is described. Section 6 elaborates the experimental results and performance evaluation, and Section 7 provides discussion and conclusion.

2. Description of orthogonal radial moments ZMs, PZMs, and OFMMs

2.1. Zernike moments

This set of orthogonal functions was introduced by Zernike [33] as a basic tool for the representation of a wavefront function for optical systems with circular pupils. Since then, the radial polynomials have been found important in applications ranging from pattern recognition, shape analysis, optical engineering, and medical imaging to eye diagnostics. Teague [16] presented ZMs in image analysis as a set of complex orthogonal functions whose magnitude coefficients exhibit the rotation invariance property. ZMs satisfy the orthogonality property, by virtue of which the contribution of each moment coefficient to the image is unique, and no redundancy occurs between moment features. Due to these characteristics, ZMs are used to describe the essential features of images. The set of orthogonal ZMs of order p and repetition q for an image intensity function $f(r,\theta)$ is defined over the continuous unit disk $0 \le r \le 1$, $0 \le \theta < 2\pi$ as [16]:

$Z_{pq} = \frac{p+1}{\pi} \int_0^{2\pi}\!\int_0^1 f(r,\theta)\, V_{pq}^{*}(r,\theta)\, r\, dr\, d\theta \qquad (1)$

where $V_{pq}^{*}(r,\theta)$ is the complex conjugate of the Zernike polynomial $V_{pq}(r,\theta)$, defined as

$V_{pq}(r,\theta) = R_{pq}(r)\, e^{jq\theta} \qquad (2)$

where $p \ge 0$, $0 \le |q| \le p$, $p-|q|$ is even, $j=\sqrt{-1}$, and $\theta = \tan^{-1}(y/x)$. The radial polynomials $R_{pq}(r)$ are defined by

$R_{pq}(r) = \sum_{k=0}^{(p-|q|)/2} \frac{(-1)^k (p-k)!}{k!\left(\frac{p+|q|}{2}-k\right)!\left(\frac{p-|q|}{2}-k\right)!}\, r^{p-2k} \qquad (3)$

C. Singh, Pooja / Optics and Lasers in Engineering 50 (2012) 655–667 657

The radial polynomials satisfy the orthogonality relation

$\int_0^1 R_{pq}(r)\, R_{p'q}(r)\, r\, dr = \frac{1}{2(p+1)}\, \delta_{pp'} \qquad (4)$

where $\delta_{ij}$ is the Kronecker delta. The set of Zernike polynomials $V_{pq}(r,\theta)$ forms a complete orthogonal set within the unit disk:

$\int_0^{2\pi}\!\int_0^1 V_{pq}(r,\theta)\, V_{p'q'}^{*}(r,\theta)\, r\, dr\, d\theta = \frac{\pi}{p+1}\, \delta_{pp'}\, \delta_{qq'} \qquad (5)$

2.2. Pseudo Zernike moments

PZMs also belong to the class of circularly orthogonal moments defined over the unit disk [17]. These moments have characteristics similar to ZMs and are widely used and studied in the literature because of their minimal information redundancy and immunity to noise. The computation cost of PZMs is higher than that of ZMs for the same order, but PZMs are observed to be more robust to image noise than ZMs [29]. In addition, PZMs provide about twice the number of moments provided by ZMs for the same moment order: there are $(1+p_{\max})^2$ PZMs as compared to $(1+p_{\max})(2+p_{\max})/2$ ZMs for the same maximum order $p_{\max}$. Thus, for the same maximum order $p_{\max}$, PZMs have more low-order moments than ZMs and are therefore less sensitive to image noise. PZMs differ from ZMs in their real-valued radial polynomials, defined as [29]

$R_{pq}(r) = \sum_{s=0}^{p-|q|} \frac{(-1)^s (2p+1-s)!}{s!\,(p+|q|+1-s)!\,(p-|q|-s)!}\, r^{p-s} \qquad (6)$

where $p \ge 0$, $0 \le |q| \le p$. The orthogonality principles for PZMs are similar to those of ZMs given by Eqs. (4) and (5).
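The moment counts quoted above can be verified by direct enumeration of the admissible (p, q) pairs, counting repetitions q from −p to p. A small sketch with our own helper names:

```python
def count_zm(pmax: int) -> int:
    # ZMs: for each order p, q runs over -p..p with p - |q| even,
    # giving p + 1 admissible repetitions per order.
    return sum(
        1
        for p in range(pmax + 1)
        for q in range(-p, p + 1)
        if (p - abs(q)) % 2 == 0
    )

def count_pzm(pmax: int) -> int:
    # PZMs: every repetition with |q| <= p is allowed -> 2p + 1 per order.
    return sum(2 * p + 1 for p in range(pmax + 1))

# Check the closed forms stated in the text for a range of maximum orders.
for pmax in range(1, 20):
    assert count_zm(pmax) == (1 + pmax) * (2 + pmax) // 2
    assert count_pzm(pmax) == (1 + pmax) ** 2
```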

2.3. Orthogonal Fourier Mellin moments

These orthogonal radial polynomials have more zeros than the Zernike radial polynomials in the region of small radial distance [18]. The orthogonal Fourier–Mellin moments may be thought of as generalized Zernike moments and orthogonalized complex moments. For small images, the description by the orthogonal Fourier–Mellin moments is better than that of the Zernike moments in terms of image-reconstruction error and signal-to-noise ratio. The OFMM basis functions also form a set of complete orthogonal functions over the unit disk, and they differ from ZMs in their polynomials, defined as

$V_{pq}(r,\theta) = Q_p(r)\, e^{jq\theta} \qquad (7)$

with constraints $p \ge 0$, $|q| \ge 0$. The orthogonal radial polynomials are defined as

$Q_p(r) = \sum_{s=0}^{p} (-1)^{p+s}\, \frac{(p+s+1)!}{(p-s)!\, s!\, (s+1)!}\, r^{s} \qquad (8)$

$Q_p(r)$ is orthogonal over the interval $0 \le r \le 1$, i.e.,

$\int_0^1 Q_p(r)\, Q_k(r)\, r\, dr = \frac{1}{2(p+1)}\, \delta_{pk} \qquad (9)$

The orthogonality principles for the OFMM polynomials are similar to those of ZMs given by Eq. (5).
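Eq. (8) can likewise be sketched directly, with a numerical check of the orthogonality relation of Eq. (9). The helper names and the midpoint-rule quadrature are our own, for illustration:

```python
from math import factorial

def ofmm_radial(p: int, r: float) -> float:
    """OFMM radial polynomial Q_p(r) of Eq. (8)."""
    return sum(
        (-1) ** (p + s) * factorial(p + s + 1)
        / (factorial(p - s) * factorial(s) * factorial(s + 1))
        * r ** s
        for s in range(p + 1)
    )

def q_inner(p: int, k: int, n: int = 4000) -> float:
    """Midpoint-rule estimate of int_0^1 Q_p(r) Q_k(r) r dr of Eq. (9),
    which should be delta_pk / (2(p+1))."""
    h = 1.0 / n
    return h * sum(
        ofmm_radial(p, (i + 0.5) * h) * ofmm_radial(k, (i + 0.5) * h) * (i + 0.5) * h
        for i in range(n)
    )
```

For example, $Q_0(r)=1$ and $Q_1(r)=3r-2$; note that $Q_1(0)=-2$ is already nonzero at the origin, consistent with the remark that the OFMM polynomials vary more near small radial distances.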

2.4. Fast computation of moments

ZMs, PZMs, and OFMMs are computationally intensive, as their radial polynomials given by Eqs. (3), (6), and (8) contain factorial terms, and in the case of ZMs and PZMs a polynomial of order p and repetition q contains $(p-|q|)/2+1$ and $p-|q|+1$ terms, respectively. The computation of factorial terms, powers of r, and trigonometric functions contributes significantly to the slow speed. The time complexity of the moments turns out to be $O(N^2 p_{\max}^3)$ if we compute all moments up to order $p_{\max}$ over an image of size $N \times N$ pixels, which is very large when both $N$ and $p_{\max}$ are large. Various fast methods exist for reducing the time complexity [34–39]. To improve the computational efficiency of moments, we use the q-recursive method proposed in [34] and enhanced by Singh and Walia [35] for ZMs and OFMMs, in which recursive relations for $R_{pq}(r)$ and for the trigonometric functions reduce the time complexity of computing both $R_{pq}(r)$ and $e^{-jq\theta}$ from $O(N^2 p_{\max}^3)$ to $O(N^2 p_{\max}^2)$. For the computation of ZMs, PZMs, and OFMMs using recursive algorithms, one can refer to [35–37]. We therefore use the q-recursive methods for the computation of ZMs, PZMs, and OFMMs in the experiments of the proposed approach.

3. Computational framework for moments

The image function $f(r,\theta)$ used in Eq. (1) for the computation of moments is defined in the continuous domain. Let $f(x,y)$ be an image of size $N \times N$ pixels; then the zeroth-order approximation of Eq. (1) is given by

$M_{pq} = \lambda \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} f(x_i, y_j)\, V_{pq}^{*}(x_i, y_j)\, \Delta x_i\, \Delta y_j \qquad (10)$

where $\lambda = \frac{p+1}{\pi}$ and $x_i^2 + y_j^2 \le 1$. The coordinates $(x_i, y_j)$ in the unit disk are given by

$x_i = \frac{2i+1-N}{D}, \qquad y_j = \frac{2j+1-N}{D} \qquad (11)$

where $i,j = 0,1,2,\ldots,N-1$, and

$D = \begin{cases} N & \text{for the inner circular disk contained in the square image} \\ N\sqrt{2} & \text{for the outer circular disk containing the whole square image} \end{cases} \qquad (12)$

and

$\Delta x_i = \Delta y_j = \frac{2}{D} \qquad (13)$

It is observed that when $D = N\sqrt{2}$, i.e., when the outer circular disk is used, the geometric error is reduced [40]. Therefore, in all the experiments we take $D = N\sqrt{2}$.
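The zeroth-order approximation of Eqs. (10)–(13) can be sketched as a direct double sum. This is an illustrative implementation using the slow factorial formula for $R_{pq}$ rather than the q-recursive method the paper actually adopts; `img` is assumed to be a square list of rows of gray values:

```python
import cmath
from math import atan2, factorial, pi, sqrt

def zernike_radial(p: int, q: int, r: float) -> float:
    """Radial polynomial R_pq(r) of Eq. (3)."""
    q = abs(q)
    return sum(
        (-1) ** k * factorial(p - k)
        / (factorial(k) * factorial((p + q) // 2 - k) * factorial((p - q) // 2 - k))
        * r ** (p - 2 * k)
        for k in range((p - q) // 2 + 1)
    )

def zernike_moment(img, p: int, q: int) -> complex:
    """Zeroth-order approximation of Eq. (10) with the outer disk D = N*sqrt(2)
    of Eq. (12), so every pixel center maps inside the unit disk."""
    N = len(img)
    D = N * sqrt(2.0)
    dxdy = (2.0 / D) ** 2              # Delta x_i * Delta y_j, Eq. (13)
    lam = (p + 1) / pi                 # lambda of Eq. (10)
    Z = 0.0 + 0.0j
    for i in range(N):
        for j in range(N):
            x = (2 * i + 1 - N) / D    # Eq. (11)
            y = (2 * j + 1 - N) / D
            r = sqrt(x * x + y * y)
            if r > 1.0:                # only relevant for the inner-disk choice
                continue
            theta = atan2(y, x)
            # V*_pq(x, y) = R_pq(r) * exp(-j q theta)
            Z += img[i][j] * zernike_radial(p, q, r) * cmath.exp(-1j * q * theta)
    return lam * Z * dxdy
```

As a sanity check, for a constant image $f \equiv 1$ the sum in Eq. (10) with $D=N\sqrt{2}$ covers all $N^2$ pixels, so $Z_{00} = \frac{1}{\pi} \cdot N^2 \cdot \frac{4}{2N^2} = \frac{2}{\pi}$.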

4. Proposed local descriptor based on orthogonal radial moments

The orthogonal radial moments studied so far in the literature are applied to the entire image function and do not consider local change; they provide only the global aspect of the image shape. However, the contribution of local change in an image is likewise important for acquiring the finer details, which is achieved by performing the computation on the edge points of the image. Therefore, in the proposed approach, we develop a novel computational framework for computing moments to determine the local discontinuities in the image. The complete methodology is described as follows:

• Edges represent the gray-scale discontinuities in an image. Since edges are one of the most effective features, detection of these discontinuities is an essential step in object recognition. In general, an image contains noise. Therefore, to eliminate spurious edges, we use the Canny edge detector [41], due to its low error rate, well-localized edge points, and single edge-point response. It provides a binary edge map.

• The centroid represents the center of mass of an image, which is useful for image normalization and for making the moments translation invariant. Therefore, we compute the centroid of the image $(x_c, y_c)$, which is taken as the center of the circle. The maximum radius $R = \sqrt{(x_i - x_c)^2 + (y_i - y_c)^2}$ of the circumscribed circle, which encloses the entire edge map, is computed by considering the farthest edge point $(x_i, y_i)$ from the center $(x_c, y_c)$.

• Since the computation of radial moments is performed by mapping the actual coordinates onto the unit disk as given by Eq. (11), the mapping for the edge image is performed as follows:

$x_i = \frac{i + 0.5 - x_c}{R}, \qquad y_j = \frac{j + 0.5 - y_c}{R} \qquad (14)$

which is equivalent to

$x_i = \frac{2i + 1 - 2x_c}{2R}, \qquad y_j = \frac{2j + 1 - 2y_c}{2R} \qquad (15)$

By substituting $x_c = N/2$ and $R = N\sqrt{2}/2$ in Eq. (15) for an image enclosed in an outer circular disk, we obtain

$x_i = \frac{2i + 1 - N}{N\sqrt{2}}, \qquad y_j = \frac{2j + 1 - N}{N\sqrt{2}} \qquad (16)$

Therefore, Eq. (14) is consistent with Eq. (11) with $D = N\sqrt{2}$ for an image enclosed in the outer circular disk, and

$\lambda = \frac{p+1}{\pi R^2} \qquad (17)$

The rest of the procedure is the same as given by Eq. (10). Hence, using the mapping given by Eq. (14), we compute the moments of the edge image, which represent the local features.
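The edge-based computation above can be sketched as follows. This is an illustrative version: the paper obtains the binary edge map with the Canny detector [41], whereas this sketch accepts any binary map; the small +0.5 margin added to R (so that the farthest pixel center maps strictly inside the unit disk) is our assumption, not from the paper; and $\lambda$ of Eq. (17) is taken as the combined factor, i.e., it already absorbs $\Delta x_i \Delta y_j = 1/R^2$ from Eq. (13) with $D = 2R$.

```python
import cmath
from math import atan2, factorial, pi, sqrt

def zernike_radial(p: int, q: int, r: float) -> float:
    """Radial polynomial R_pq(r) of Eq. (3)."""
    q = abs(q)
    return sum(
        (-1) ** k * factorial(p - k)
        / (factorial(k) * factorial((p + q) // 2 - k) * factorial((p - q) // 2 - k))
        * r ** (p - 2 * k)
        for k in range((p - q) // 2 + 1)
    )

def local_zernike_moment(edge, p: int, q: int) -> complex:
    """Local ZM of a binary edge map via the centroid-based mapping of Eq. (15),
    with lambda = (p+1)/(pi R^2) as in Eq. (17)."""
    pts = [(i, j) for i, row in enumerate(edge) for j, v in enumerate(row) if v]
    xc = sum(i for i, _ in pts) / len(pts)      # centroid of the edge points
    yc = sum(j for _, j in pts) / len(pts)
    # Radius of the circumscribed circle (farthest edge point from the centroid),
    # plus a small margin so mapped pixel centers stay inside the unit disk.
    R = max(sqrt((i - xc) ** 2 + (j - yc) ** 2) for i, j in pts) + 0.5
    lam = (p + 1) / (pi * R * R)
    Z = 0.0 + 0.0j
    for i, j in pts:
        x = (2 * i + 1 - 2 * xc) / (2 * R)      # Eq. (15)
        y = (2 * j + 1 - 2 * yc) / (2 * R)
        Z += zernike_radial(p, q, sqrt(x * x + y * y)) * cmath.exp(-1j * q * atan2(y, x))
    return lam * Z
```

Because the mapping is centered on the edge centroid, the magnitudes |Z_pq| are unchanged when the edge map is rotated or reflected on the grid, which is the translation- and rotation-invariance the local descriptor relies on.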

Fig. 1. PR curves (precision vs. recall, both in %) for (a) ZMs with pmax = 8, 10, 12, 14, and 16; (b) PZMs with pmax = 6, 7, 8, 9, and 10; (c) OFMMs with pmax = 4, 5, 6, 7, and 8.

4.1. Feature dimensionality

The utilization of moments up to a higher order generally leads to better image representation power. For selecting the appropriate number of features, we perform experiments at various maximum orders of moments $p_{\max}$ for ZMs, PZMs, and OFMMs. For measuring the retrieval accuracy of the system, we use the precision and recall (PR) performance measure. The precision rate is defined as the percentage of retrieved images similar to the query image among the total number of retrieved images. The recall rate is defined as the percentage of retrieved images similar to the query image among the total number of images in the database similar to the query image. It can easily be seen that both precision and recall rates are functions of the total number of retrieved images. In order to have high retrieval accuracy, the system needs both high precision and high recall rates. For a query image Q, the precision and recall are computed in percentage as follows:

$P = \frac{n_Q}{T_Q} \times 100, \qquad R = \frac{n_Q}{D_Q} \times 100 \qquad (18)$

where $n_Q$ represents the number of similar images retrieved from the database, $T_Q$ represents the total number of images retrieved, and $D_Q$ represents the number of images in the database similar to the query image Q. The ZMs are evaluated at $p_{\max} = 8, 10, 12, 14$, and 16. The PR curves for the Kimia-99 database are given in Fig. 1(a), which demonstrates that only a slight improvement in retrieval accuracy is observed as the moment order increases from $p_{\max}=8$ through $p_{\max}=16$. The time taken for the computation of ZMs for $p_{\max} = 8, 10, 12, 14$, and 16 is 0.011 s, 0.014 s, 0.02 s, 0.025 s, and 0.029 s, respectively: 0.011 s for $p_{\max}=8$ versus 0.029 s for $p_{\max}=16$. The time difference is


C. Singh, Pooja / Optics and Lasers in Engineering 50 (2012) 655–667 659

approximately 62.06%, which is very large compared to the accuracy improvement of approximately 2.5%. Therefore, the selection of $p_{\max}=12$ is a tradeoff between computational complexity and retrieval accuracy. The next selection is made for PZMs by experimenting at moment orders $p_{\max} = 6, 7, 8, 9$, and 10, for which the CPU elapsed times are 0.012 s, 0.015 s, 0.018 s, 0.021 s, and 0.025 s, respectively. It can be seen from Fig. 1(b) that only minor progress in retrieval accuracy is perceived as $p_{\max}$ varies from 6 through 10, while the time difference is 52% over the same range. Therefore, we choose $p_{\max}=8$ to balance retrieval accuracy and computational complexity. Similar experiments are performed for OFMMs, as shown in Fig. 1(c). The CPU times for $p_{\max} = 4, 5, 6, 7$, and 8 are 0.007 s, 0.011 s, 0.014 s, 0.018 s, and 0.021 s, respectively. A similar trend is observed for retrieval accuracy, and the time difference is 66.66% between $p_{\max}=4$ and $p_{\max}=8$. Therefore, we select $p_{\max}=6$ as the most favorable maximum order to balance retrieval accuracy and computational complexity. Another reason for choosing $p_{\max}=12$, $p_{\max}=8$, and $p_{\max}=6$ for ZMs, PZMs, and OFMMs, respectively, is that the resulting feature sets possess almost the same number of features F: for ZMs F = 47, for PZMs F = 43, and for OFMMs F = 47, excluding the moments $M_{0,0}$ and $M_{1,1}$ from the feature set, since $M_{0,0}$ indicates the average gray value of the image and $M_{1,1}$ is a first-order moment, which is zero if the centroid of the image falls on the center of the disk. The

Table 1
Number of moments for pmax = 12 for ZMs (F is the cumulative number of features).

p     ZMs                                                   F
2     M2,0  M2,2                                            2
3     M3,1  M3,3                                            4
4     M4,0  M4,2  M4,4                                      7
5     M5,1  M5,3  M5,5                                     10
6     M6,0  M6,2  M6,4  M6,6                               14
7     M7,1  M7,3  M7,5  M7,7                               18
8     M8,0  M8,2  M8,4  M8,6  M8,8                         23
9     M9,1  M9,3  M9,5  M9,7  M9,9                         28
10    M10,0 M10,2 M10,4 M10,6 M10,8 M10,10                 34
11    M11,1 M11,3 M11,5 M11,7 M11,9 M11,11                 40
12    M12,0 M12,2 M12,4 M12,6 M12,8 M12,10 M12,12          47

Table 2
Number of moments for pmax = 8 for PZMs (F is the cumulative number of features).

p    PZMs                                              F
1    M1,0                                              1
2    M2,0 M2,1 M2,2                                    4
3    M3,0 M3,1 M3,2 M3,3                               8
4    M4,0 M4,1 M4,2 M4,3 M4,4                         13
5    M5,0 M5,1 M5,2 M5,3 M5,4 M5,5                    19
6    M6,0 M6,1 M6,2 M6,3 M6,4 M6,5 M6,6               26
7    M7,0 M7,1 M7,2 M7,3 M7,4 M7,5 M7,6 M7,7          34
8    M8,0 M8,1 M8,2 M8,3 M8,4 M8,5 M8,6 M8,7 M8,8     43

Table 3
Number of moments for pmax = 6 for OFMMs (F is the cumulative number of features).

p    OFMMs                                    F
0    M0,1 M0,2 M0,3 M0,4 M0,5 M0,6            6
1    M1,0 M1,2 M1,3 M1,4 M1,5 M1,6           12
2    M2,0 M2,1 M2,2 M2,3 M2,4 M2,5 M2,6      19
3    M3,0 M3,1 M3,2 M3,3 M3,4 M3,5 M3,6      26
4    M4,0 M4,1 M4,2 M4,3 M4,4 M4,5 M4,6      33
5    M5,0 M5,1 M5,2 M5,3 M5,4 M5,5 M5,6      40
6    M6,0 M6,1 M6,2 M6,3 M6,4 M6,5 M6,6      47

moments used in the proposed system are given in Tables 1, 2, and 3 for ZMs, PZMs, and OFMMs, respectively.
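The feature sets of Tables 1–3 can be generated programmatically; the following sketch (our own helper names) reproduces the cumulative counts F = 47, 43, and 47, with M0,0 and M1,1 excluded as described above:

```python
def zm_indices(pmax: int):
    # ZMs: q >= 0 with p - q even; starting at p = 2 skips M0,0 and M1,1 (Table 1).
    return [(p, q) for p in range(2, pmax + 1) for q in range(p % 2, p + 1, 2)]

def pzm_indices(pmax: int):
    # PZMs: all 0 <= q <= p; starting at p = 1 skips M0,0, and M1,1 is dropped (Table 2).
    return [(p, q) for p in range(1, pmax + 1) for q in range(p + 1) if (p, q) != (1, 1)]

def ofmm_indices(pmax: int):
    # OFMMs: p and q each range over 0..pmax, with M0,0 and M1,1 dropped (Table 3).
    return [
        (p, q)
        for p in range(pmax + 1)
        for q in range(pmax + 1)
        if (p, q) not in ((0, 0), (1, 1))
    ]
```

With the orders selected in Section 4.1, `len(zm_indices(12))`, `len(pzm_indices(8))`, and `len(ofmm_indices(6))` give 47, 43, and 47, matching the F columns of the three tables.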

5. Similarity matching

The similarity measure scheme is quite significant to the recognition and retrieval results. In the proposed approach, we use two kinds of features: the first set of features is extracted using moments of the region of the shape, i.e., the entire image; the other set is extracted using the moments of the contour image, as described in Section 4. For obtaining the similarity between a query image Q and a database image D, the Euclidean distance metric is used. Mathematically, the similarity metric for the proposed approach is given as follows:

$d_r^{ED}(Q,D) = \sqrt{\sum_{i=0}^{F} \left(M_i^{Q} - M_i^{D}\right)^2}, \qquad d_c^{ED}(Q,D) = \sqrt{\sum_{i=0}^{F} \left(M_i^{Q} - M_i^{D}\right)^2}$

$d^{ED}(Q,D) = d_r^{ED}(Q,D) + d_c^{ED}(Q,D) \qquad (19)$

where $d_r^{ED}(Q,D)$ and $d_c^{ED}(Q,D)$ represent the Euclidean distance similarity metric for the region based features and the contour based features, respectively, and $d^{ED}(Q,D)$ represents the overall similarity measure obtained by combining the region and contour based metric distances. The total number of features F, for comparison purposes of ZMs, PZMs, and OFMMs, is given in the previous section in Tables 1, 2, and 3, respectively.
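The combined measure of Eq. (19), together with the PR measures of Eq. (18), can be sketched as follows. The dict layout with precomputed 'region' and 'contour' magnitude feature vectors is our own illustration, not a structure specified in the paper:

```python
from math import sqrt

def euclidean(a, b) -> float:
    """Plain Euclidean distance between two equal-length feature vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def combined_distance(query, db_entry) -> float:
    """Eq. (19): sum of the region-based and contour-based Euclidean distances."""
    return (euclidean(query["region"], db_entry["region"])
            + euclidean(query["contour"], db_entry["contour"]))

def precision_recall(retrieved_classes, query_class, n_relevant):
    """Eq. (18): P = n_Q / T_Q * 100 and R = n_Q / D_Q * 100, where n_Q is the
    number of retrieved images from the query's class, T_Q the number retrieved,
    and D_Q (= n_relevant) the number of relevant images in the database."""
    n_q = sum(1 for c in retrieved_classes if c == query_class)
    return 100.0 * n_q / len(retrieved_classes), 100.0 * n_q / n_relevant
```

Ranking a database then amounts to sorting its entries by `combined_distance` to the query and evaluating `precision_recall` on the class labels of the top T_Q results.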

6. Experimental results and performance evaluation

We evaluate the system performance through an extensive set of experiments, carried out to compare the performance of the proposed approach with the traditional ones. The retrieval performances of ZMs, PZMs, and OFMMs are also compared with each other. Experiments are performed on an Intel Pentium Core 2 Duo 2.10 GHz processor with 3 GB RAM; the algorithms are implemented in VC++ 9.0. In order to evaluate the system performance in terms of robustness and scalability, we consider large databases with various sorts of images exhibiting rotation, scale change, partial occlusion, noise, different image types (binary and gray), and complex structure. We consider the following databases to analyze the system performance under various circumstances:

• MPEG-7 CE shape-1 part B: It consists of 1400 images containing 70 classes with 20 instances in each class. This database represents significant variations within the instances of a class.
• Columbia object image library (COIL-100): This database contains color images of 100 classes of objects with 72 samples in each class, taken with pose variation from 0° through 360° at intervals of 5°. In our experiments, we convert them to gray-scale and choose 10 samples per class with pose variation from 0° through 45°, thereby creating a sub-database of 1000 images in which the varying views have a significant effect on the shape, scale, and orientation of the objects.

• Kimia-99: It contains 9 classes of binary images with 11 instances in each class. The 9 classes include planes, fishes, hands, rabbits, etc. The variations include distortion, partial occlusion by other objects, and different poses and styles.

• Trademark: 20 trademark images with complex inner structure are collected from the Internet, and each is then resized to five different sizes (64×64, 80×80, 96×96, 112×112, and 128×128) and rotated to five different orientations (0°, 36°, 72°, 108°, and 144°). Thus, the database consists of 25 instances per class and a total of 500 images.

C. Singh, Pooja / Optics and Lasers in Engineering 50 (2012) 655–667

• Noise: 70 images are randomly chosen, one from each class of the MPEG-7 database, and Gaussian noise is added with mean μ = 0 and standard deviation σ varying between 0.0 and 0.19 with an increment of 0.01, creating a database of 1400 images, where the image with μ = 0.0, σ = 0.0 is the original image.

Some of the instances of database images are given in Fig. 2.
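The construction of the Noise database described above can be sketched as follows; the nested-list image representation and the scaling of σ by the 255 gray-level range are illustrative assumptions, not details from the paper:

```python
import random

def add_gaussian_noise(image, mean=0.0, sigma=0.0):
    # Add i.i.d. Gaussian noise to a gray-scale image (nested list of
    # intensities in [0, 255]); noisy values are clamped to that range.
    # Scaling sigma by 255 is an assumption about the intensity units.
    return [[min(255, max(0, px + random.gauss(mean, sigma) * 255))
             for px in row] for row in image]

def build_noise_database(image):
    # One noisy instance per sigma in 0.00, 0.01, ..., 0.19;
    # sigma = 0.0 reproduces the original image, as in the paper.
    return [add_gaussian_noise(image, 0.0, s / 100.0) for s in range(20)]
```

Applying this to one image per MPEG-7 class yields 70 × 20 = 1400 images, matching the database size stated above.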

6.1. Retrieval performance comparison and results

In the proposed solution, both the local and global feature spaces of images are extracted using ZMs, PZMs, and OFMMs. The local feature space is extracted using the method described in Section 4, and the global feature space is extracted using the methods described in Section 3. Both sets of features are then combined to obtain an improved retrieval rate compared with the traditional approaches, in which only global features are considered for evaluating the retrieval performance. The Euclidean distance metric is used as a classifier for computing the similarity among images. In this section, we compare the performance of the proposed approach (global + local) for ZMs, PZMs, and OFMMs with their respective global-only features and local-only features in two individual sets of PR curve plots. In order to review the system performance for each type of image, all the images in a database are used as query images. Since five different sorts of databases are considered for assessing the usefulness of the proposed approach, we perform five tests.

Fig. 2. Some instances of region and edge images from the five databases (MPEG-7, COIL-100, Kimia-99, Trademark, and Noise).

In the first test, the system performance is evaluated for the MPEG-7 database, which represents a large variation among the instances of a class. The retrieval results of ZMs, PZMs, and OFMMs using global features and using the proposed approach (global + local) are given in Fig. 3(a). It is observed that the proposed approach improves the retrieval accuracy for all three moments, with a particularly large improvement for OFMMs. Comparing the retrieval results of the proposed ZMs, PZMs, and OFMMs, ZMs possess the highest accuracy, followed by PZMs and OFMMs. The comparison of the local features of ZMs, PZMs, and OFMMs with the proposed method is given in Fig. 3(b), which clearly shows an improvement in retrieval rate for all three methods when both local and global features are merged rather than using local features only.

Fig. 3. Comparison of the PR performance of the proposed (global + local) ZMs, PZMs, and OFMMs over the MPEG-7 CE shape-1 part B database with (a) global features only, (b) local features only.
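Retrieval quality throughout this section is summarized by precision–recall (PR) curves. A minimal sketch of how precision and recall values at successive rank cutoffs are typically computed (the function name and list-based interface are ours, not the paper's):

```python
def precision_recall(ranked_labels, query_label, n_relevant):
    # For each cutoff k in the ranked retrieval list:
    #   precision = relevant images retrieved so far / k
    #   recall    = relevant images retrieved so far / total relevant
    points, hits = [], 0
    for k, label in enumerate(ranked_labels, start=1):
        if label == query_label:
            hits += 1
        points.append((hits / k, hits / n_relevant))
    return points
```

Averaging such (precision, recall) points over all query images produces a PR curve of the kind plotted in Figs. 3–7.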

The second test is performed for the COIL-100 database. The results are depicted in Fig. 4(a) and (b), comparing the proposed approach with global features and local features, respectively. An improvement in retrieval accuracy is observed for all three of ZMs, PZMs, and OFMMs compared with the traditional ones. Comparing the moments with each other, the proposed ZMs and PZMs have almost similar results, with a slight advantage for ZMs. Although the proposed OFMMs improve on their traditional counterpart, their performance is still poorer than that of ZMs and PZMs, as can be seen from Fig. 4(a). From Fig. 4(b), we observe that the performance of the local features of OFMMs is lower than that of PZMs and ZMs; however, the proposed method helps improve the retrieval rate.

Fig. 4. Comparison of the PR performance of the proposed (global + local) ZMs, PZMs, and OFMMs over the COIL-100 database with (a) global features only, (b) local features only.

The third test

is performed for the Kimia-99 database, which contains distorted and partially occluded shapes. The PR performance is depicted in Fig. 5(a) and (b). An improvement over both the global-only and local-only approaches of moment computation is observed for all three types of moments. However, ZMs exhibit higher retrieval accuracy than PZMs and OFMMs, while PZMs and OFMMs show similar retrieval rates and their PR curves coincide with each other, as shown in Fig. 5(a). From Fig. 5(b), we see that the PR curves of the improved PZMs and OFMMs coincide with the PR curve of ZMs (local).

Fig. 5. Comparison of the PR performance of the proposed (global + local) ZMs, PZMs, and OFMMs over the Kimia-99 database with (a) global features only, (b) local features only.

Fig. 6. Comparison of the PR performance of the proposed (global + local) ZMs, PZMs, and OFMMs over the Trademark database with (a) global features only, (b) local features only.

It apparently demonstrates the superior performance of ZMs over the other moments. The fourth test is performed for the Trademark database. The images in this database undergo geometric transformations such as rotation and scaling and have complex inner structure. The performance of the proposed system for this database is given in Fig. 6(a) and (b), which shows extremely high retrieval accuracy for all three radial moments, with a slightly lower retrieval rate for PZMs and OFMMs. It is worth mentioning here that by merging both local and global features, the retrieval rate of PZMs and OFMMs improves and their PR curves

Fig. 7. Comparison of the PR performance of the proposed (global + local) ZMs, PZMs, and OFMMs over the Noise database with (a) global features only, (b) local features only.

coincide with those of ZMs. The fifth test is performed for the Noise database, and the results are given in Fig. 7(a) and (b). The effectiveness of the proposed approach is apparent from the PR curves. It is also worth noting from the PR curves that PZMs and OFMMs perform better than ZMs on this database, with PZMs outperforming the other moments.

6.2. Correct retrieval performance

The top 20 retrieval results for the query image ‘‘pocket’’ taken from MPEG-7 using ZMs, PZMs, and OFMMs are given in Fig. 8. We can see that ZMs retrieve 19 images relevant to the query image. PZMs and OFMMs retrieve 17 and 13 relevant images, respectively; PZMs diverge from the query class at rank 16 and OFMMs at rank 13. The top 10 retrieval results for a query image taken from the COIL-100 database are presented in Fig. 9, which shows that ZMs retrieve all 10 relevant images, whereas PZMs and OFMMs retrieve 9 and 8 relevant images, respectively. The top 11 retrieval results for a query image taken from the Kimia-99 database are given in Fig. 10, where ZMs, PZMs, and OFMMs retrieve 10, 7, and 7 relevant images, respectively. The irrelevant images are shaded in gray in Figs. 8–10. In Fig. 11, we present the correct retrievals by the three moments for the Kimia-99 database. Since this database contains nine classes with eleven instances in each class, there are nine histograms in the figure, one per class, with the class name on the x-axis. The tenth histogram represents the average number of correct retrievals by ZMs, PZMs, and OFMMs; it shows that PZMs and OFMMs have similar retrieval performance, which is consistent with the PR curves given in Fig. 5. However, for the rabbit class, the retrieval performance of OFMMs is better than that of ZMs and PZMs. We also analyzed the correct retrieval performance for the MPEG-7 and COIL-100 databases and found it consistent with their PR curves. Nevertheless, we provide the results only for the Kimia-99 database, because the MPEG-7 and COIL-100 databases contain 70 and 100 classes of images, respectively, which is difficult to present in a figure for all three moments due to space constraints.
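The correct-retrieval counts reported in Figs. 8–11 amount to counting, within the top-k ranks, the retrieved images that share the query's class; a small sketch under that assumption (function names are ours):

```python
def correct_retrievals(ranked_labels, query_label, k):
    # Number of images in the top-k ranks that belong to the
    # query's class, as counted in Figs. 8-10.
    return sum(1 for label in ranked_labels[:k] if label == query_label)

def average_correct(per_query_counts):
    # Average number of correct retrievals over all queries of a class,
    # as shown in the tenth histogram of Fig. 11.
    return sum(per_query_counts) / len(per_query_counts)
```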

6.3. Comparison with other methods

From all of the above experiments, it is observed that the proposed ZMs perform better than PZMs and OFMMs. Therefore, we now compare the proposed ZMs with other ZMs-based techniques given in [30–32]. We compare the retrieval performance for the MPEG-7, Kimia-99, and COIL-100 databases; the respective results are given in Fig. 12(a) through (c). In all three figures, it is apparent that the proposed solution clearly outperforms the other methods. For all three databases, the order of retrieval accuracy is the proposed solution, followed by the optimal similarity, adjacent phase, and CZM methods.

7. Discussion and conclusion

In the above sections, ZMs, PZMs, and OFMMs are described in detail, and various experiments are performed to evaluate their performance in the image retrieval system. The comparative analysis of ZMs, PZMs, and OFMMs is carried out by experimenting on several databases representing various characteristics of images. Some points regarding their performance are as follows:

• ZMs, PZMs, and OFMMs are capable of representing an image, by virtue of which they are used for the task of image retrieval.

Fig. 8. Top 20 retrieval results using ZMs, PZMs, and OFMMs for the MPEG-7 database.

Fig. 9. Top 10 retrieval results using ZMs, PZMs, and OFMMs for the COIL-100 database.


From the PR curves for the MPEG-7 database, we observe that ZMs have the superior performance, followed by PZMs and OFMMs, and a similar trend is observed for the COIL-100 database. For the Kimia database, which exhibits partially occluded and distorted shapes, the performance of PZMs is similar to that of OFMMs; however, ZMs outperform both of them. In the case of the Trademark database, the performance of all three kinds of moments is analogous, which shows that all three moments are invariant to geometric transformations and are able to identify complex shapes. While considering the robustness of these moments to noise, we see that all the moments are highly robust to noise-affected images. However,

Fig. 10. Top 11 retrieval results using ZMs, PZMs, and OFMMs for the Kimia-99 database.

Fig. 11. Correct retrievals for Kimia-99 database using ZMs, PZMs, and OFMMs.


the performance of PZMs and OFMMs is slightly better than that of ZMs. This is because the lower-order moments are less sensitive to noise. Overall, the retrieval performance of ZMs, PZMs, and OFMMs reveals that ZMs have the greatest potential to retrieve relevant images from the databases, followed by PZMs and OFMMs.

• While comparing the performance of the proposed ZMs with other state-of-the-art approaches based on ZMs [30–32], we see that the proposed solution effectively supersedes them while involving much less effort.

Hence, in this paper, we provide a novel technique to enhance the retrieval accuracy of ZMs, PZMs, and OFMMs by considering the local changes in images. Edges are one of the essential features of images; therefore, we compute features by considering the relation of the edge points to the centroid of the image using moments. The computed local features are then combined with the global features, so that the orthogonal radial moments serve as both global and local shape descriptors. The experimental results show improved image retrieval accuracy compared with the traditional ZMs, PZMs, and OFMMs. While comparing the three moments with each other, we see

Fig. 12. Comparison of the PR performance of the proposed ZMs with the optimal similarity, adjacent phase, and CZM methods for (a) MPEG-7, (b) Kimia-99, and (c) COIL-100 databases.


that the performance of ZMs is superior to the other moments. The scalability and robustness of the system are also established using geometrically transformed, noise-affected, and large image databases.

Acknowledgments

The authors are thankful to the All India Council for Technical Education (AICTE), Govt. of India, New Delhi, India, for supporting the research work vide their file number 8023/RID/BOR/RPS-77/2005-06. The second author is thankful to the University Grants Commission (UGC), New Delhi, India, for providing a research fellowship for carrying out the research work leading to the Ph.D. degree.

References

[1] Faloutsos C, Barber R, Flickner M, et al. Efficient and effective querying by image content. J Intell Syst 1994;1:95–108.

[2] Flickner M, Sawhney H, Niblack W, et al. Query by image and video content: the QBIC system. IEEE Comput 1995;28(9):23–32.

[3] Pentland RP, Scalroff S. Photobooks: tools for content-based manipulation of image databases. In: Proceedings of the SPIE conference on storage and retrieval for image and video databases II; 1994. p. 33–47.

[4] Mether M, Kankanhall MS, Lee WF. Content-based image retrieval using a composite color-shape approach. Inf Process Manage 1998;34(1):109–20.

[5] Swain M, Ballard D. Color indexing. Int J Comput Vision 1991;7(1):11–32.

[6] Manjunath B, Ma W. Texture features for browsing and retrieval of image data. IEEE Trans Pattern Anal Mach Intell 1996;18:837–42.

[7] Gevers T, Smeulders AWM. The PicToSeek WWW image search system. In: Proceedings of the IEEE international conference on multimedia computing and systems, vol. 1; 1999. p. 264–9.

[8] Pentland A, Picard RW, Sclaroff S. Photobook: content-based manipulation of image databases. Int J Comput Vision 1996;18(3):233–54.

[9] Zhang D, Lu G. A comparative study of curvature scale space and Fourier descriptors for shape-based image retrieval. J Visual Commun Image Representation 2003;14:41–60.

[10] Mokhtarian F, Mackworth AK. A theory of multiscale, curvature based shape representation for planar curves. IEEE Trans Pattern Anal Mach Intell 1992;14:789–805.

[11] Zhang D, Lu G. A comparative study of Fourier descriptors for shape representation and retrieval. In: Proceedings of the fifth Asian conference on computer vision (ACCV02); 2002.

[12] Dubois SR, Glanz FH. An autoregressive model approach to two dimensional shape classification. IEEE Trans Pattern Anal Mach Intell 1986;8:55–65.

[13] Attalla E, Siy P. Robust shape similarity retrieval based on contour segmentation polygonal multiresolution and elastic matching. Pattern Recognition 2005;38:2229–41.

[14] Yadav RB, Nishcal NK, Gupta AK, Rastogi VK. Retrieval and classification of shape-based objects using Fourier, generic Fourier, and wavelet-Fourier descriptors technique: a comparative study. Opt Laser Eng 2007;45:695–708.

[15] Hu M-K. Visual pattern recognition by moment invariants. IRE Trans Inf Theory 1962;8(2):179–87.

[16] Teague MR. Image analysis via the general theory of moments. J Opt Soc Am 1980;70(8):920–30.

[17] Bhatia AB, Wolf E. On the circle polynomials of Zernike and related orthogonal sets. Proc Cambridge Philos Soc 1954;50:40–8.

[18] Sheng Y, Shen L. Orthogonal Fourier–Mellin moments for invariant pattern recognition. J Opt Soc Am A 1994;11(6):1748–57.

[19] Ren H, Liu A, Zou J, Bai D, Ping Z. Character reconstruction with radial harmonic Fourier moments. In: Proceedings of the fourth international conference on fuzzy systems and knowledge discovery (FSKD07), vol. 3; 2007. p. 307–10.

[20] Ping ZL, Sheng YL. Describing images with Chebyshev moments. Acta Opt Sin 2002;19(9):1748–54.

[21] Abu-Mostafa YS. Recognitive aspects of moment invariants. IEEE Trans Pattern Anal Mach Intell 1984;6(6):698–706.

[22] Pawlak M. On the reconstruction aspects of moment descriptors. IEEE Trans Inf Theory 1992;38(6):1698–708.

[23] Ghosal S, Mehrotra R. Segmentation of range images: an orthogonal moment-based integrated approach. IEEE Trans Robotics Autom 1993;9(4):385–99.

[24] Ghosal S, Mehrotra R. Edge detection using orthogonal moment based operators. In: Proceedings of the 11th IAPR international conference on pattern recognition (image, speech and signal analysis), vol. 3; 1992. p. 413–6.

[25] Xin Y, Liao S, Pawlak M. Geometrically robust image watermark via pseudo Zernike moments. In: Proceedings of the Canadian conference on electrical and computer engineering, vol. 2; 2004. p. 939–42.

[26] Haddadnia J, Ahmadi M, Raahemifar K. An effective feature extraction method for face recognition. In: Proceedings of the international conference on image processing, vol. 3; 2003. p. 917–20.

[27] Singh C, Pooja. Improving image retrieval using combined features of Hough transform and Zernike moments. Opt Laser Eng 2011;49(12):1384–96.

[28] Pang YH, David C-L, Andrew B-J, Hiew F-S. Palm print verification with moments. J WSCG (ISSN 1213-6972) 2003;12.

[29] Teh CH, Chin RT. On image analysis by the methods of moments. IEEE Trans Pattern Anal Mach Intell 1988;10(4):496–513.

[30] Li S, Lee M-C, Pun C-M. Complex Zernike moments features for shape-based image retrieval. IEEE Trans Syst Man Cybern Part A: Syst Hum 2009;39(1):227–37.

[31] Chen Z, Sun S-K. A Zernike moment phase-based descriptor for local image representation and matching. IEEE Trans Image Process 2010;19(1):205–19.

[32] Revaud J, Lavoue G, Baskurt A. Improving Zernike moments comparison for optimal similarity and rotation angle retrieval. IEEE Trans Pattern Anal Mach Intell 2009;31(4):627–36.

[33] Zernike F. Beugungstheorie des Schneidenverfahrens und seiner verbesserten Form, der Phasenkontrastmethode. Physica 1934;1:689–701.

[34] Chong C-W, Paramesran R, Mukundan R. A comparative analysis of algorithms for fast computation of Zernike moments. Pattern Recognition 2003;36:731–42.

[35] Singh C, Walia E. Algorithms for fast computation of Zernike moments and their numerical stability. Image Vision Comput 2011;29(4):251–9.

[36] Al-Rawi MS. Fast computation of pseudo Zernike moments. J Real Time Image Process 2010;5:3–10.

[37] Walia E, Singh C, Goyal A. On the fast computation of orthogonal Fourier–Mellin moments with improved numerical stability. J Real Time Image Process 2011, doi:10.1007/s11554-010-0172-7.

[38] Papakostas GA, Boutalis YS, Karras DA, Mertzios BG. Efficient computation of Zernike and pseudo-Zernike moments for pattern classification applications. Pattern Recognition Image Anal 2010;20(1):56–64.

[39] Hosny KM. Fast computation of accurate Zernike moments. J Real Time Image Process 2008;3:97–107.

[40] Wee CY, Paramesran R. On the computational aspects of Zernike moments. Image Vision Comput 2007;25:967–80.

[41] Canny J. A computational approach to edge detection. IEEE Trans Pattern Anal Mach Intell 1986;8(6):679–98.