
A novel kernelized fuzzy C-means algorithm with application in medical image segmentation

Dao-Qiang Zhang (a,b), Song-Can Chen (a,b,*)

(a) Department of Computer Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, PR China
(b) National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100080, PR China

    Received 31 March 2003; received in revised form 26 October 2003; accepted 17 January 2004

    1. Introduction

With the increasing size and number of medical images, the use of computers in facilitating their processing and analysis has become necessary [1]. In particular, as a task of delineating anatomical structures and other regions of interest, image segmentation algorithms play a vital role in numerous biomedical imaging applications such as the quantification of tissue volumes, diagnosis, study of anatomical structure, and computer-integrated surgery [1-3]. Classically, image segmentation is defined as the partitioning of an image into non-overlapping, constituent regions which are homogeneous with respect to some characteristic such as intensity or texture.

Because of the advantages of magnetic resonance imaging (MRI) over other diagnostic imaging modalities [2], the majority of research in medical image segmentation pertains to MR images, and many methods are available for MR image segmentation [2-6]. Among them, fuzzy segmentation methods are of considerable benefit, because they retain much more information from the original image than hard segmentation methods [3]. In particular, the fuzzy C-means (FCM) algorithm [7] assigns pixels to fuzzy clusters without labels. Unlike hard clustering methods, which force pixels to belong exclusively to one class, FCM allows pixels to belong to multiple clusters with varying

Artificial Intelligence in Medicine (2004) 32, 37-50

Keywords: Image segmentation; Fuzzy C-means; Kernel method; Kernel-induced distance; Magnetic resonance imaging

Summary: Image segmentation plays a crucial role in many medical imaging applications. In this paper, we present a novel algorithm for fuzzy segmentation of magnetic resonance imaging (MRI) data. The algorithm is realized by modifying the objective function in the conventional fuzzy C-means (FCM) algorithm using a kernel-induced distance metric and a spatial penalty on the membership functions. First, the original Euclidean distance in FCM is replaced by a kernel-induced distance; the resulting algorithm, called the kernelized fuzzy C-means (KFCM) algorithm, is shown to be more robust than FCM. Then a spatial penalty is added to the objective function in KFCM to compensate for the intensity inhomogeneities of MR images and to allow the labeling of a pixel to be influenced by its neighbors in the image. The penalty term acts as a regularizer and has a coefficient ranging from zero to one. Experimental results on both synthetic and real MR images show that the proposed algorithms perform better than the standard algorithms when noise and other artifacts are present.
© 2004 Elsevier B.V. All rights reserved.

*Corresponding author. Tel.: +86-25-489-2805. E-mail addresses: [email protected], [email protected] (S.-C. Chen).

0933-3657/$ - see front matter © 2004 Elsevier B.V. All rights reserved. doi:10.1016/j.artmed.2004.01.012

degrees of membership. Because of this additional flexibility, FCM has been widely used in MR image segmentation. However, because of the spatial intensity inhomogeneity induced by the radio-frequency coil in MR images, the conventional intensity-based FCM algorithm has proven to be problematic, even when advanced techniques such as non-parametric, multi-channel methods are used [2]. To deal with the inhomogeneity problem, many algorithms have been proposed that add correction steps before segmenting the image [4,5] or model the image as the product of the original image and a smoothly varying multiplier field [2,6]. Recently, many researchers have incorporated spatial information into the original FCM algorithm to better segment the images [6,8-10]. Tolias and Panas [8] proposed a fuzzy rule-based system to impose spatial continuity on FCM, and in another paper [9], they used a small positive constant to modify the membership of the center pixel in a 3 × 3 window. Pham et al. [6] modified the objective function in the FCM algorithm to include a multiplier field containing the first- and second-order information of the image. Similarly, Ahmed et al. [11] proposed an algorithm to compensate for the intensity inhomogeneity and to label a pixel by considering its immediate neighborhood. A more recent approach, proposed by Pham [12], penalizes the FCM objective function to constrain the behavior of the membership functions, similar to methods used in regularization and Markov random field (MRF) theory.

On the other hand, there is a trend in recent machine learning work to construct a nonlinear version of a linear algorithm using the 'kernel method', e.g. SVM [13-15], KPCA [16] and KFD [17], and this 'kernel method' has also been applied to unsupervised clustering [18-20]. However, a drawback of these kernel clustering algorithms is their use of a dual representation for the clustering prototypes (that is, each prototype is formulated as a linear sum of the mapped dataset elements, and hence the parameters to be optimized are no longer the original prototypes but linear-combination coefficients): the clustering prototypes lie in a high-dimensional feature space, so the clustering results lack the clear, intuitive descriptions available in the original space. In this paper, a novel kernelized fuzzy C-means (KFCM) algorithm is proposed to compensate for this lack and is then applied to MR image segmentation. It is realized by replacing the original Euclidean distance in the FCM algorithm with a kernel-induced distance and adding a novel spatial penalty. The penalty term acts as a regularizer, and the coefficient associated with it ranges from zero to one. It is shown

that the proposed algorithm yields better segmentation results on simulated and real MR images corrupted by noise and other artifacts than standard algorithms such as FCM.

The rest of this paper is organized as follows. In Section 2, some basic concepts of the 'kernel method' are briefly introduced. In Section 3, KFCM is derived from the original FCM based on the 'kernel method'. The KFCM with spatial constraints is presented in Section 4 to segment MR images. Experimental comparisons are presented in Section 5. Finally, Section 6 gives our conclusions and several issues for future work.

    2. The ‘kernel method’

In recent years, a number of powerful kernel-based learning machines, e.g. Support Vector Machines (SVM) [13-15], Kernel Fisher Discriminant (KFD) [17] and Kernel Principal Component Analysis (KPCA) [16], were proposed and have found successful applications in areas such as pattern recognition and function approximation. A common philosophy behind these algorithms is the kernel (substitution) trick: first, with an (implicit) nonlinear map $\Phi: X \to F$, $x \mapsto \Phi(x)$, a dataset $\{x_1, \ldots, x_n\} \subset X$ (an input data space of low dimension) is mapped into a potentially much higher-dimensional feature (inner product) space $F$. This aims at turning the original nonlinear problem in the input space into a potentially linear one in the high-dimensional feature space, so as to facilitate problem solving, as proved by Cover [21].

A kernel in the feature space can be represented as a function K below:

$$K(x, y) = \langle \Phi(x), \Phi(y) \rangle \qquad (1)$$

where $\langle \Phi(x), \Phi(y) \rangle$ denotes the inner product operation.
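As a concrete illustration of Eq. (1), the degree-2 polynomial kernel $K(x, y) = (1 + \langle x, y \rangle)^2$ on two-dimensional inputs admits an explicit six-dimensional feature map, and evaluating the kernel directly agrees with the inner product of the mapped vectors. The sketch below is for illustration only; the map `phi` is a standard textbook construction, not part of this paper:

```python
import numpy as np

def phi(x):
    """Explicit feature map for the 2-D polynomial kernel (1 + <x, y>)^2."""
    x1, x2 = x
    s = np.sqrt(2.0)
    return np.array([1.0, s * x1, s * x2, x1 ** 2, x2 ** 2, s * x1 * x2])

x, y = np.array([1.0, 2.0]), np.array([3.0, -1.0])
direct = (1.0 + x @ y) ** 2        # kernel evaluated in the input space
mapped = phi(x) @ phi(y)           # inner product in the feature space F
```

The two quantities coincide, which is exactly the content of Eq. (1) for this kernel.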

An interesting point about kernel functions is that the inner product can be implicitly computed in F, without explicitly using or even knowing the mapping $\Phi$. So, kernels allow computing inner products in spaces where one could otherwise not practically perform any computations. Three commonly used kernel functions in the literature are [22]:

(1) Gaussian radial basis function (GRBF) kernel

$$K(x, y) = \exp\left(-\frac{\|x - y\|^2}{\sigma^2}\right) \qquad (2)$$

(2) Polynomial kernel

$$K(x, y) = (1 + \langle x, y \rangle)^d \qquad (3)$$

    38 D.-Q. Zhang, S.-C. Chen

(3) Sigmoid kernel

$$K(x, y) = \tanh(a \langle x, y \rangle + b) \qquad (4)$$

where $\sigma$, d, a, b are the adjustable parameters of the above kernel functions. For the sigmoid kernel, only parameter settings satisfying Mercer's theorem can be used to define a kernel function.
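The three kernels can be transcribed directly from Eqs. (2)-(4); the following sketch makes the parameterization concrete (the default parameter values are illustrative choices, not values from the paper):

```python
import numpy as np

def gaussian_rbf(x, y, sigma=1.0):
    """Gaussian RBF kernel, Eq. (2): K(x, y) = exp(-||x - y||^2 / sigma^2)."""
    diff = np.asarray(x, float) - np.asarray(y, float)
    return np.exp(-np.sum(diff ** 2) / sigma ** 2)

def polynomial(x, y, d=2):
    """Polynomial kernel, Eq. (3): K(x, y) = (1 + <x, y>)^d."""
    return (1.0 + np.dot(x, y)) ** d

def sigmoid(x, y, a=1.0, b=0.0):
    """Sigmoid kernel, Eq. (4): K(x, y) = tanh(a <x, y> + b)."""
    return np.tanh(a * np.dot(x, y) + b)
```

Note that the GRBF kernel satisfies $K(x, x) = 1$ for any $\sigma$, a property exploited in Section 3.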

    3. Kernelized fuzzy C-means algorithm

The standard FCM objective function for partitioning a dataset $\{x_k\}_{k=1}^{N}$ into c clusters is given by

$$J_m = \sum_{i=1}^{c} \sum_{k=1}^{N} u_{ik}^m \|x_k - v_i\|^2 \qquad (5)$$

where $\{v_i\}_{i=1}^{c}$ are the centers or prototypes of the clusters and the array $\{u_{ik}\} (= U)$ represents a partition matrix satisfying

$$U \in \left\{ u_{ik} \in [0, 1] \;\Big|\; \sum_{i=1}^{c} u_{ik} = 1,\ \forall k \ \text{ and }\ 0 < \sum_{k=1}^{N} u_{ik} < N,\ \forall i \right\} \qquad (6)$$

The parameter m is a weighting exponent on each fuzzy membership and determines the amount of fuzziness of the resulting classification. In image clustering, the most commonly used feature is the gray-level value, or intensity, of an image pixel. Thus, the FCM objective function is minimized when high membership values are assigned to pixels whose intensities are close to the centroid of their particular class, and low membership values are assigned when a pixel is far from the centroid.

From the discussion in Section 2, we know that every algorithm using only inner products can implicitly be executed in the feature space F. This trick can also be used in clustering, as shown in support vector clustering [18] and kernel (fuzzy) c-means algorithms [19,20]. A common ground of these algorithms is to represent the clustering center as a linearly combined sum of all $\Phi(x_k)$, i.e. the clustering centers lie in the feature space. In this section, we construct a novel kernelized FCM algorithm with the objective function

$$J_m = \sum_{i=1}^{c} \sum_{k=1}^{N} u_{ik}^m \|\Phi(x_k) - \Phi(v_i)\|^2 \qquad (7)$$

where $\Phi$ is an implicit nonlinear map. Unlike [19,20], $\Phi(v_i)$ here is not expressed as a linearly combined sum of all $\Phi(x_k)$ (a so-called dual representation), but is still viewed as the mapped point (image) of $v_i$ in the original space. Then, with the kernel substitution trick, we have

$$\|\Phi(x_k) - \Phi(v_i)\|^2 = (\Phi(x_k) - \Phi(v_i))^T (\Phi(x_k) - \Phi(v_i)) = \Phi(x_k)^T \Phi(x_k) - \Phi(v_i)^T \Phi(x_k) - \Phi(x_k)^T \Phi(v_i) + \Phi(v_i)^T \Phi(v_i) = K(x_k, x_k) + K(v_i, v_i) - 2K(x_k, v_i) \qquad (8)$$

Below we confine ourselves to the Gaussian RBF kernel, so $K(x, x) = 1$. From Eq. (8), Eq. (7) can be simplified to

$$J_m = 2 \sum_{i=1}^{c} \sum_{k=1}^{N} u_{ik}^m \left(1 - K(x_k, v_i)\right) \qquad (9)$$

Formally, the above optimization problem takes the form

$$\min J_m, \quad \text{subject to Eq. (6)} \qquad (10)$$

In a similar way to the standard FCM algorithm, the objective function $J_m$ can be minimized under the constraint on U. Specifically, taking the first derivatives of $J_m$ with respect to $u_{ik}$ and $v_i$, and setting them to zero, yields two necessary (but not sufficient) conditions for $J_m$ to be at a local extremum:

$$u_{ik} = \frac{\left(1 - K(x_k, v_i)\right)^{-1/(m-1)}}{\sum_{j=1}^{c} \left(1 - K(x_k, v_j)\right)^{-1/(m-1)}} \qquad (11)$$

$$v_i = \frac{\sum_{k=1}^{N} u_{ik}^m K(x_k, v_i)\, x_k}{\sum_{k=1}^{N} u_{ik}^m K(x_k, v_i)} \qquad (12)$$


Here we use only the Gaussian RBF kernel for simplicity of the derivation of Eqs. (11) and (12); hence the algorithm in [23] is just a special case of ours. For other kernel functions, the corresponding equations are somewhat more complex, because their derivatives are not as simple as that of the Gaussian RBF kernel.

The proposed kernelized fuzzy C-means algorithm can be summarized in the following steps:

Step 1: Fix c, $t_{\max}$, m > 1 and a positive constant $\varepsilon > 0$.
Step 2: Initialize the memberships $u_{ik}^0$.
Step 3: For $t = 1, 2, \ldots, t_{\max}$, do:
(a) update all prototypes $v_i^t$ with Eq. (12);
(b) update all memberships $u_{ik}^t$ with Eq. (11);
(c) compute $E_t = \max_{i,k} |u_{ik}^t - u_{ik}^{t-1}|$; if $E_t \le \varepsilon$, stop.
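The steps above can be sketched as follows for the Gaussian RBF kernel. This is an illustrative implementation, not the authors' code: the vectorized update order, the optional user-supplied initial prototypes `V0`, and the small floor on the kernel-induced distance (guarding against division by zero in Eq. (11)) are our own choices.

```python
import numpy as np

def kfcm(X, c, m=2.0, sigma=150.0, max_iter=100, eps=1e-5, V0=None, seed=0):
    """Sketch of KFCM with a Gaussian RBF kernel.

    X: (N, d) data array. Returns memberships U of shape (c, N) and
    prototypes V of shape (c, d), which stay in the original data space.
    """
    X = np.asarray(X, dtype=float)
    N = X.shape[0]
    rng = np.random.default_rng(seed)
    # Initial prototypes: user-supplied, otherwise c random data points.
    V = np.array(V0, dtype=float) if V0 is not None else X[rng.choice(N, c, replace=False)]

    def memberships(V):
        # Eq. (11): kernel-induced distance is 1 - K(x_k, v_i)
        d2 = ((X[None, :, :] - V[:, None, :]) ** 2).sum(axis=2)
        dist = np.clip(1.0 - np.exp(-d2 / sigma ** 2), 1e-12, None)
        inv = dist ** (-1.0 / (m - 1.0))
        return inv / inv.sum(axis=0, keepdims=True)

    U = memberships(V)                      # Step 2: initialize u_ik
    for _ in range(max_iter):               # Step 3
        d2 = ((X[None, :, :] - V[:, None, :]) ** 2).sum(axis=2)
        K = np.exp(-d2 / sigma ** 2)
        W = (U ** m) * K                    # Eq. (12): kernel-weighted mean
        V = (W @ X) / W.sum(axis=1, keepdims=True)
        U_new = memberships(V)              # Eq. (11)
        if np.abs(U_new - U).max() <= eps:  # Step 3(c): E_t <= eps, stop
            U = U_new
            break
        U = U_new
    return U, V
```

On two well-separated one-dimensional clusters, the prototypes converge to the cluster means and the memberships become nearly crisp.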



According to Huber [24], a robust procedure should have the following properties: (1) it should have reasonably good accuracy at the assumed model; (2) small deviations from the model assumptions should impair the performance only by a small amount; (3) larger deviations from the model assumptions should not cause a catastrophe. It has been shown that FCM is not a robust estimator from the robust statistical point of view [25]. In the literature, there are many robust estimators [25,26]. In Appendix A, we show that the above-mentioned KFCM using the kernel in Eq. (2) is a robust estimator. Here we give an intuitive explanation for the robustness of KFCM. By Eq. (12), the data point $x_k$ is endowed with an additional weight $K(x_k, v_i)$, which measures the similarity between $x_k$ and $v_i$. When $x_k$ is an outlier, i.e. $x_k$ is far from the other data points, $K(x_k, v_i)$ will be very small, so the outlier's contribution to the weighted sum of data points is suppressed, making the estimate more robust. Note that when $\sigma$ in Eq. (2) tends to zero, $K(x_k, v_i)$ turns into an impulse function with the value 1 only at $x_k = v_i$ and 0 elsewhere. In this extreme case, each data point no longer has a neighborhood but becomes a 'soliton', and the distance between any two points in the feature space approaches a common value of 1, making it difficult to cluster them. On the other hand, when $\sigma$ tends to infinity, the distance between any two points in the feature space approaches zero, so all data cluster together, making it difficult to separate them. In short, we should avoid such extreme cases in practice and choose an appropriate value for $\sigma$, neither too large nor too small, by trial and error, experience or prior knowledge. In this paper, we choose $\sigma$ by trial and error.
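The down-weighting of outliers, and the two extremes of $\sigma$ discussed above, can be checked numerically with the weight $K(x_k, v_i)$ from Eq. (12). This is a toy one-dimensional illustration; the specific numeric values are arbitrary:

```python
import numpy as np

def rbf_weight(x, v, sigma):
    """Weight K(x, v) = exp(-(x - v)^2 / sigma^2) attached to x in Eq. (12)."""
    return np.exp(-(x - v) ** 2 / sigma ** 2)

v = 0.0                                        # a cluster prototype
w_inlier = rbf_weight(1.0, v, sigma=5.0)       # nearby point: weight near 1
w_outlier = rbf_weight(50.0, v, sigma=5.0)     # outlier: weight near 0

# Extreme sigma values degrade clustering, as discussed above:
w_tiny_sigma = rbf_weight(1.0, v, sigma=0.01)  # even inliers get weight ~ 0
w_huge_sigma = rbf_weight(50.0, v, sigma=1e6)  # even outliers get weight ~ 1
```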

4. Spatially constrained KFCM for image segmentation

Although KFCM can be directly applied to image segmentation like FCM, it is helpful to consider some spatial constraints on the objective function.

We propose a modification to Eq. (7) by introducing a penalty term containing spatial neighborhood information. As mentioned before, this penalty term acts as a regularizer and biases the solution toward piecewise-homogeneous labeling.

Such regularization is helpful in segmenting images corrupted by noise. The modified objective function is given by

$$J_m = \sum_{i=1}^{c} \sum_{k=1}^{N} u_{ik}^m \left(1 - K(x_k, v_i)\right) + \frac{\alpha}{N_R} \sum_{i=1}^{c} \sum_{k=1}^{N} u_{ik}^m \sum_{r \in N_k} (1 - u_{ir})^m \qquad (13)$$

where $N_k$ stands for the set of neighbors in a window around $x_k$ (not including $x_k$ itself) and $N_R$ is the cardinality of $N_k$. The parameter $\alpha$ controls the effect of the penalty term and lies between zero and one inclusive. The relative importance of the regularizing term is inversely proportional to the signal-to-noise ratio (SNR) of the MRI data: low SNR requires a higher value of $\alpha$, and vice versa. The new penalty term is minimized when the membership value of a pixel for a particular class is large and the membership values for that class are also large at neighboring pixels, and vice versa. In other words, a pixel's membership value is correlated with those of the pixels in its neighborhood. It is interesting to note that Eq. (13) is reminiscent of Dave's approach to robust fuzzy C-means [25]. However, there are at least two differences between Eq. (13) and Dave's robust fuzzy C-means (see Eq. (12) in [25]). First, the membership $u_{ik}$ in Eq. (13) is still constrained by Eq. (6), while in Dave's robust fuzzy C-means there are no constraints on the memberships other than the requirement that they lie in [0, 1]. Second, Eq. (13) emphasizes the effect of the neighbors on the memberships of a pixel, whereas in Dave's robust fuzzy C-means no such neighbors exist.

An iterative algorithm for minimizing Eq. (13) can be derived by evaluating the centroids and membership functions that satisfy a zero-gradient condition, as in KFCM. A necessary condition on $u_{ik}$ for Eq. (13) to be at a local minimum or a saddle point is

$$u_{ik} = \frac{\left( \left(1 - K(x_k, v_i)\right) + \frac{\alpha}{N_R} \sum_{r \in N_k} (1 - u_{ir})^m \right)^{-1/(m-1)}}{\sum_{j=1}^{c} \left( \left(1 - K(x_k, v_j)\right) + \frac{\alpha}{N_R} \sum_{r \in N_k} (1 - u_{jr})^m \right)^{-1/(m-1)}} \qquad (14)$$

Because the penalty term does not depend on $v_i$, the necessary condition under which Eq. (13) attains its minimum with respect to the prototypes is identical to that of KFCM, i.e. Eq. (12). Alternating iterations between the two necessary conditions leads the algorithm to converge to a minimum of Eq. (13). The resulting algorithm is almost identical to KFCM, except that in Step 3(b),


Eq. (14) is used instead of Eq. (11) to update the memberships.
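The membership update with the spatial penalty can be sketched as below for a 2-D image and the 3 × 3 neighborhood ($N_R = 8$) used in Section 5. This is an illustrative vectorized sketch, not the authors' code; replicating edge pixels to fill the window at the image border is our own choice.

```python
import numpy as np

def skfcm_membership(img, V, U, m=2.0, sigma=150.0, alpha=0.7):
    """One SKFCM membership update, Eq. (14), on a 2-D image.

    img: (H, W) intensities; V: (c,) scalar prototypes; U: (c, H, W)
    current memberships. Uses the 8-neighborhood, edge pixels replicated.
    """
    # Kernel-induced distance 1 - K(x_k, v_i), shape (c, H, W)
    dist = 1.0 - np.exp(-(img[None] - np.asarray(V)[:, None, None]) ** 2
                        / sigma ** 2)

    # Spatial penalty: (alpha / N_R) * sum over neighbors of (1 - u_ir)^m
    q = (1.0 - U) ** m
    pad = np.pad(q, ((0, 0), (1, 1), (1, 1)), mode='edge')
    nbr_sum = sum(pad[:, 1 + di:pad.shape[1] - 1 + di,
                      1 + dj:pad.shape[2] - 1 + dj]
                  for di in (-1, 0, 1) for dj in (-1, 0, 1)
                  if (di, dj) != (0, 0))
    penalty = alpha * nbr_sum / 8.0

    # Eq. (14): normalize the (-1/(m-1))-powered totals over the classes
    inv = np.clip(dist + penalty, 1e-12, None) ** (-1.0 / (m - 1.0))
    return inv / inv.sum(axis=0, keepdims=True)
```

Iterating this update together with the prototype update of Eq. (12) gives the SKFCM algorithm.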

In the above discussion, we confined ourselves to the Gaussian RBF kernel. In fact, following Chapelle et al. [27], we can also use more general RBF kernels:

$$K(x, y) = \exp(-r\, d(x, y)) \qquad (15)$$

where d(x, y) can be chosen to be of the general form

$$d(x, y) = \sum_{a} |x_a - y_a|^b \qquad (0 < b \le 2) \qquad (16)$$

Obviously, the generalized RBF kernels satisfy $K(x, x) = 1$, and when they are used in Eq. (13), the iterative equations are similar to Eqs. (12) and (14). Appendix B gives the detailed derivations of the two necessary conditions (Eqs. (12) and (14)) for Eq. (13) to be at a local minimum or a saddle point.
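Eqs. (15) and (16) transcribe directly; the default parameters below are illustrative choices, not values from the paper:

```python
import numpy as np

def generalized_rbf(x, y, r=1.0, b=1.5):
    """Generalized RBF kernel, Eqs. (15)-(16):
    K(x, y) = exp(-r * sum_a |x_a - y_a|^b), with 0 < b <= 2."""
    d = np.sum(np.abs(np.asarray(x, float) - np.asarray(y, float)) ** b)
    return np.exp(-r * d)
```

With b = 2 and r = 1/σ² this reduces to the Gaussian RBF kernel of Eq. (2), and K(x, x) = 1 holds for any valid r and b.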

    5. Results and discussions

In this section, we describe some experimental results to compare the segmentation performance

Figure 1 Comparison of segmentation results on a synthetic image corrupted by 5% Gaussian noise. (a) The original image, (b) using FCM, (c) using KFCM, (d) using SFCM, and (e) using SKFCM.


of the following algorithms: FCM, FCM with spatial constraints [12] (SFCM), KFCM, and KFCM with spatial constraints (SKFCM). We test the four methods on three different datasets: a simple synthetic image, the classical simulated brain database of McGill University [28], and real MR slices. Only the Gaussian RBF kernel is used for KFCM and SKFCM, and because of the large dynamic range of the parameter value in SFCM, we use the value recommended in [12].

The synthetic image is shown in Fig. 1a. It contains a two-class pattern corrupted by 5% Gaussian noise. The intensity values of the two classes are 0 and 90, respectively, and the image size is 64 × 64 pixels. Fig. 1b-e show the segmentation results of FCM, KFCM, SFCM and SKFCM, respectively. Here we set the parameters $\sigma = 150$ (Gaussian RBF kernel width), $\alpha = 0.7$, m = 2, and $N_R = 8$ (a 3 × 3 window centered on each pixel, excluding the central pixel itself). These values are used in the rest of this paper unless a specific value is explicitly stated. As shown in Fig. 1b and c, without spatial constraints, neither FCM nor KFCM can separate the two classes, while SFCM nearly and SKFCM completely succeed in correctly classifying the data, as shown in Fig. 1d and e. Note that because of the

Figure 2 Comparison of segmentation results on a synthetic image corrupted by 5% Gaussian noise and sinusoidal intensity inhomogeneity. (a) The original image, (b) using FCM, (c) using KFCM, (d) using SFCM, and (e) using SKFCM.


injection of the kernel, KFCM needs more execution time than FCM, and correspondingly, SKFCM is slower than SFCM. Typically, the algorithms without the kernel are several times faster than those with it.

Fig. 2a is the synthetic test image corrupted by 5% Gaussian noise and intensity inhomogeneity simulated by a sinusoidal function. Fig. 2b-e show the results of FCM, KFCM, SFCM, and SKFCM, respectively. As in Fig. 1, SKFCM achieves the best segmentation performance. To obtain a quantitative comparison, Table 1 gives the segmentation accuracy (SA) of the four methods on Figs. 1a and 2a, where the segmentation accuracy is defined as the number of correctly classified pixels divided by the total number of pixels [11].
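The segmentation accuracy used in Table 1 can be computed as follows (a sketch; the label images are assumed to be integer class maps of equal shape):

```python
import numpy as np

def segmentation_accuracy(labels, reference):
    """SA = (number of correctly classified pixels) / (total pixels)."""
    labels = np.asarray(labels)
    reference = np.asarray(reference)
    return float((labels == reference).sum()) / labels.size
```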

Table 1 Segmentation accuracy (SA %) of four methods on the synthetic images

         FCM    KFCM   SFCM   SKFCM
Fig. 1a  96.02  96.51  99.34  100
Fig. 2a  94.41  91.11  98.41  99.88

Figure 3 Comparison of segmentation results on a simulated brain MR image corrupted by 3% noise. (a) Original T1-weighted image, (b) using FCM, (c) using KFCM, (d) using SFCM, and (e) using SKFCM.


Figs. 3 and 4 present a comparison of segmentation results between FCM, KFCM, SFCM and SKFCM when applied to a T1-weighted MR phantom [28]. The advantages of using digital phantoms rather than real image data for validating segmentation methods include prior knowledge of the true tissue types and control over image parameters such as modality, slice thickness, noise and intensity inhomogeneities. In our experiments, we use a high-resolution T1-weighted phantom with slice thickness of 1 mm, 3% noise and no intensity inhomogeneities. Two slices in the axial plane, numbers 91 and 121 in the sequence, are shown in Figs. 3a and 4a, respectively. The segmentation results on the two

slices using the four methods with eight classes are shown in Figs. 3b-e and 4b-e, respectively. Table 2 gives the quantitative comparison scores corresponding to Fig. 3a using the four methods with eight classes. Note that SFCM in fact finds three classes, but because Class 5 is too small compared with the other classes, its score is rounded to 0.00 in Table 2. The comparison scores (also called the degree of equality in [29]) for each algorithm and each class are calculated by the following equation [30]:

$$s_{ij} = \frac{|A_{ij} \cap A_{\mathrm{ref}j}|}{|A_{ij} \cup A_{\mathrm{ref}j}|}$$


Figure 4 Another simulated brain MR example. (a) Original T1-weighted image corrupted by 3% noise, (b) using FCM, (c) using KFCM, (d) using SFCM, and (e) using SKFCM.


where $A_{ij}$ represents the set of pixels belonging to the jth class found by the ith algorithm and $A_{\mathrm{ref}j}$ represents the set of pixels belonging to the jth class in the reference segmented image. Here we choose $\alpha = 0.1$, because the noise is relatively small. In this case, FCM and SFCM cannot correctly classify the images, while KFCM and SKFCM achieve satisfactory segmentation results.
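The comparison score $s_{ij}$ is the overlap (Jaccard index) of the class-j pixel sets, which can be sketched as:

```python
import numpy as np

def comparison_score(labels, reference, j):
    """Degree of equality for class j: |A_j & A_ref,j| / |A_j | A_ref,j|."""
    a = np.asarray(labels) == j
    b = np.asarray(reference) == j
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```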

Table 2 Comparison scores for Fig. 3a using four methods with eight classes

        Class 1  Class 2  Class 3  Class 4  Class 5  Class 6  Class 7  Class 8
FCM     0.46     0.75     0.51     0.39     0.47     0.35     0.65     0.17
KFCM    0.63     0.86     0.73     0.66     0.75     0.60     0.88     0.26
SFCM    0        0        0        0        0.00     0        0.59     0.31
SKFCM   0.66     0.86     0.72     0.72     0.69     0.47     0.88     0.29

Figure 5 Brain MRI example. (a) Original MR image corrupted by 5% Gaussian noise, (b) FCM result, (c) KFCM result, (d) SFCM result, and (e) SKFCM result.


Figs. 5 and 6 present a comparison of segmentation results among FCM, KFCM, SFCM and SKFCM when applied to real MR slices corrupted by 5% Gaussian noise. These T1-weighted MR images were obtained using a General Electric Signa 1.5-Tesla clinical MR imager with an in-plane resolution of 0.94 mm² [11]. Figs. 5a and 6a show the artificially corrupted images, and Figs. 5b-e and 6b-e are the results using FCM, KFCM, SFCM, and SKFCM with three classes, respectively. Table 3 gives the corresponding comparison scores for Figs. 5a and 6a using the four methods with three classes. From the images and Table 3, we can see that without spatial constraints, both FCM and KFCM are badly affected by the noise, while SFCM partially and SKFCM nearly completely eliminate its effect.

Finally, Fig. 7 shows a comparison of the segmentation results of the four methods on a T2-weighted MR image corrupted by slight intensity inhomogeneities. Fig. 7a is the original image, and Fig. 7b-e are the results using FCM, KFCM, SFCM and SKFCM, respectively. Note that the SKFCM result is much less fragmented than those of the other algorithms, and the incorporation

Figure 6 Another brain MRI example. (a) Original MR image corrupted by 5% Gaussian noise, (b) FCM result, (c) KFCM result, (d) SFCM result, and (e) SKFCM result.


Table 3 Comparison scores for Figs. 5a and 6a using four methods with three classes

        Fig. 5a                    Fig. 6a
        Class 1  Class 2  Class 3  Class 1  Class 2  Class 3
FCM     0.88     0.32     0.93     0.87     0.29     0.92
KFCM    0.79     0.19     0.91     0.79     0.17     0.91
SFCM    0.99     0.75     0.94     0.98     0.62     0.92
SKFCM   1.00     0.78     0.95     0.99     0.68     0.94

Figure 7 A T1-weighted brain MRI example. (a) Original MR image corrupted by slight intensity inhomogeneities, (b) FCM result, (c) KFCM result, (d) SFCM result, and (e) SKFCM result.


of spatial constraints into the classification has the disadvantage of somewhat blurring some fine details, but SKFCM still gives a better result than SFCM, as shown in Fig. 7d and e.

    6. Conclusion

In this paper, a novel kernelized fuzzy C-means algorithm is proposed and applied to MR image segmentation. KFCM adopts a new kernel-induced metric in the data space to replace the original Euclidean metric in FCM, and the clustered prototypes still lie in the data space, so that the clustering results can be directly formulated and interpreted in the original space. Appendix A proves that KFCM is a robust clustering approach.

Furthermore, we added a spatial constraint to the objective function of KFCM to effectively segment MR images corrupted by noise. Although the spatial constraint used here is similar to that used in [12], ours is simpler and computationally inexpensive. Moreover, since the distance induced by a Gaussian RBF kernel is constrained between zero and one, exactly consistent with the range of the membership values, we only need to adjust the regularizer coefficient between zero and one to control the relative importance of the regularizing term.

The results presented in this paper are preliminary, and further clinical evaluation is required. Because nearly all modified FCM algorithms for image segmentation are based on adding some type of penalty term to the original FCM objective function, KFCM can be applied in a reasonably straightforward manner to improve the performance of these algorithms.


Acknowledgements

The authors are grateful to the anonymous reviewers for their comments and suggestions, which greatly improved the presentation of this paper. This work was supported in part by the National Science Foundations of China and of Jiangsu under Grants 60271017 and BK2002092, respectively.

    Appendix A

Proof that KFCM using the kernel in Eq. (2) is a robust estimator.

Proof. According to Huber [24], there are many robust estimators, e.g. M-, L-, and R-estimators. Here we are only interested in the M-estimator and follow the process of proof in [23]. Let $\{x_1, x_2, \ldots, x_n\}$ be an observed dataset and $\theta$ an unknown parameter to be estimated. An M-estimator for location estimation can be generated by minimizing the following objective function:

$$J(\theta) = \sum_{i=1}^{n} \rho(x_i - \theta) \qquad (A.1)$$

where $\rho$ is a function, taken as $1 - K(x, \theta)$ in this proof, dependent only on $(x_i - \theta)$. The M-estimator is then generated by solving the equation

$$J'(\theta) = \sum_{i=1}^{n} \psi(x_i - \theta) = 0 \qquad (A.2)$$

where $\psi(x - \theta) = (\partial/\partial\theta)\,\rho(x - \theta)$. If we take $\rho(x - \theta) = (x - \theta)^2$ or $\rho(x - \theta) = |x - \theta|$, the resulting M-estimators are the mean and the median of the sample dataset, respectively. Eq. (A.2) can be solved by rewriting it as


$$\sum_{i=1}^{n} w_i (x_i - \theta) = 0 \qquad (A.3)$$

where $w_i = \psi(x_i - \theta)/(x_i - \theta)$ is called the weight function. This gives the estimator

$$\hat{\theta} = \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i} \qquad (A.4)$$



This is the weighted mean of the sample dataset. Note that Eq. (A.4) may not have a closed-form solution for $\hat{\theta}$; we can apply fixed-point iteration or alternating optimization to solve Eq. (A.4) iteratively.

The influence function (IF) helps assess the relative influence of individual observations on the value of an estimate. The influence of an M-estimator has been shown to be proportional to its $\psi$ function [24]. The IF of an M-estimator is

$$\mathrm{IF}(x; F, \theta) = \frac{\psi(x - \theta)}{\int \psi'(x - \theta)\, dF_X(x)} \qquad (A.5)$$


where $F_X(x)$ stands for the distribution function of X. If the IF of an estimator is unbounded, an outlier may cause trouble. There are many measures of robustness derived from the IF, one of which is the gross error sensitivity (GES), defined as

$$\gamma^* = \sup_x |\mathrm{IF}(x; F, \theta)| \qquad (A.6)$$

This quantity can be interpreted as the worst approximate influence that the addition of an infinitesimal point


mass can have on the value of the associated estimator. For our case, the resulting solution of Eq. (9) is an M-estimator if $\rho(x - \theta) = 1 - K(x, \theta)$ and K(x, y) is taken as in Eq. (2). For simplicity, we only consider Eq. (2) with a single variable, i.e.

$$K(x, \theta) = \exp\left(-\frac{(x - \theta)^2}{\sigma^2}\right) \qquad (A.7)$$



    Thus we have

$$\psi(x - \theta) = -\frac{2(x - \theta)}{\sigma^2} \exp\left(-\frac{(x - \theta)^2}{\sigma^2}\right) \qquad (A.8)$$

By applying L'Hôpital's rule, the following limits of Eq. (A.8) can be derived:


$$\lim_{x \to \pm\infty} \psi(x - \theta) = 0 \qquad (A.9)$$

At the same time, its bounded maximum and minimum can be obtained by zeroing the derivative of the function in Eq. (A.8).

From the above, the function in Eq. (A.8) is bounded and continuous, and thus the corresponding IF is also bounded and continuous, resulting in a finite gross error sensitivity. Hence our estimator based on Eq. (2) is robust.
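The boundedness of Eq. (A.8) can also be verified numerically. Up to the constant factor $2/\sigma^2$, $\psi$ is proportional to $t\,e^{-t^2/\sigma^2}$ with $t = x - \theta$; zeroing its derivative gives the extrema at $t = \pm\sigma/\sqrt{2}$, and the function vanishes at infinity. An illustrative check with $\sigma = 1$:

```python
import numpy as np

sigma = 1.0
t = np.linspace(-50.0, 50.0, 200001)       # t = x - theta
psi = t * np.exp(-t ** 2 / sigma ** 2)     # Eq. (A.8) up to the factor 2/sigma^2

t_star = sigma / np.sqrt(2.0)              # analytic maximizer from psi'(t) = 0
psi_max = t_star * np.exp(-0.5)            # finite bound on |psi|
```

The sampled maximum matches the analytic bound, and the tails vanish, confirming the finite gross error sensitivity.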

    Appendix B

    Derivations of Eqs. (12) and (14).

Proof. The constrained optimization problem in Eq. (10), where $J_m$ is now defined by Eq. (13), can be solved using the Lagrange multiplier method. Define a new objective function as follows:

$$L_m = \sum_{i=1}^{c} \sum_{k=1}^{N} u_{ik}^m \left(1 - K(x_k, v_i)\right) + \frac{\alpha}{N_R} \sum_{i=1}^{c} \sum_{k=1}^{N} u_{ik}^m \sum_{r \in N_k} (1 - u_{ir})^m + \sum_{k=1}^{N} \lambda_k \left(1 - \sum_{i=1}^{c} u_{ik}\right) \qquad (B.1)$$
where the kernel is defined in Eq. (2), $N_k$ is the set of neighbors in a window around $x_k$ (not including $x_k$ itself), and $N_R$ is the cardinality of $N_k$. Taking the derivative of $L_m$ with respect to $u_{ik}$ and setting the result to zero, we have, for m > 1,

$$\frac{\partial L_m}{\partial u_{ik}} = m\, u_{ik}^{m-1} \left[ \left(1 - K(x_k, v_i)\right) + \frac{\alpha}{N_R} \sum_{r \in N_k} (1 - u_{ir})^m \right] - \lambda_k = 0 \qquad (B.2)$$

Solving for $u_{ik}^*$, we have

$$u_{ik}^* = \left[ \frac{\lambda_k}{m\left( \left(1 - K(x_k, v_i)\right) + \frac{\alpha}{N_R} \sum_{r \in N_k} (1 - u_{ir})^m \right)} \right]^{1/(m-1)} \qquad (B.3)$$

Substituting Eq. (B.3) into the constraint $\sum_{j=1}^{c} u_{jk} = 1,\ \forall k$, gives

$$\sum_{j=1}^{c} \left[ \frac{\lambda_k}{m\left( \left(1 - K(x_k, v_j)\right) + \frac{\alpha}{N_R} \sum_{r \in N_k} (1 - u_{jr})^m \right)} \right]^{1/(m-1)} = 1 \qquad (B.4)$$

Solving for $\lambda_k$ from Eq. (B.4) and substituting into Eq. (B.3), we obtain the zero-gradient condition for the membership shown in Eq. (14).

Similarly, substituting Eq. (2) into Eq. (B.1) and zeroing the derivative of $L_m$ with respect to $v_i$, we have

$$\frac{\partial L_m}{\partial v_i} = -\frac{2}{\sigma^2} \sum_{k=1}^{N} u_{ik}^m\, K(x_k, v_i)\,(x_k - v_i) = 0 \qquad (B.5)$$


    According to Eq. (B.5), Eq. (12) can be derived.


References

[1] Pham DL, Xu CY, Prince JL. A survey of current methods in medical image segmentation. Ann Rev Biomed Eng 2000;2:315-37 [Technical report version: JHU/ECE 99-01, Johns Hopkins University].

    [2] Wells WM, Grimson WEL, Kikinis R, Arrdrige SR. Adaptivesegmentation of MRI data. IEEE Trans Med Imaging 1996;15:429—42.

    [3] Bezdek JC, Hall LO, Clarke LP. Review of MR imagesegmentation techniques using pattern recognition. MedPhys 1993;20:1033—48.

    [4] Dawant BM, Zijidenbos AP, Margolin RA. Correction ofintensity variations in MR image for computer-aided tissueclassification. IEEE Trans Med Imaging 1993;12:770—81.

[5] Johnson B, Atkins MS, Mackiewich B, Andson M. Segmentation of multiple sclerosis lesions in intensity corrected multispectral MRI. IEEE Trans Med Imaging 1996;15:154-69.



[6] Pham DL, Prince JL. An adaptive fuzzy C-means algorithm for image segmentation in the presence of intensity inhomogeneities. Pattern Recognit Lett 1999;20(1):57-68.

    [7] Bezdek JC. Pattern recognition with fuzzy objectivefunction algorithms. New York: Plenum Press; 1981.

[8] Tolias YA, Panas SM. On applying spatial constraints in fuzzy image clustering using a fuzzy rule-based system. IEEE Signal Proc Lett 1998;5(10):245-7.

[9] Tolias YA, Panas SM. Image segmentation by a fuzzy clustering algorithm using adaptive spatially constrained membership functions. IEEE Trans Syst, Man, Cybernet Part A 1998;28(3):359-69.

[10] Liew AWC, Leung SH, Lau WH. Fuzzy image clustering incorporating spatial continuity. IEE Proc Vis Image Signal Proc 2000;147(2):185-92.

[11] Ahmed MN, Yamany SM, Mohamed N, Farag AA, Moriarty T. A modified fuzzy C-means algorithm for bias field estimation and segmentation of MRI data. IEEE Trans Med Imaging 2002;21(3):193-9.

    [12] Pham DL. Fuzzy clustering with spatial constraints. In:Proceedings of the IEEE International Conference on ImageProcessing, New York, USA, August, 2002.

    [13] Cristianini N, Taylor JS. An introduction to SVMs and otherkernel-based learning methods. Cambridge UniversityPress; 2000.

    [14] Vapnik VN. Statistical learning theory. New York: Wiley;1998.

[15] Scholkopf B. Support vector learning. R. Oldenbourg Verlag; 1997.

[16] Scholkopf B, Smola AJ, Muller KR. Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput 1998;10(5):1299-319.

[17] Roth V, Steinhage. Nonlinear discriminant analysis using kernel functions. In: Advances in neural information processing systems, vol. 12. 2000, pp. 568-74.

    [18] Hur AB, Horn D, Siegelmann HT, Vapnik V. Support vectorclustering. J Mach Learn Res 2001;2:125—37.

[19] Girolami M. Mercer kernel-based clustering in feature space. IEEE Trans Neural Networks 2002;13(3):780-4.

[20] Zhang DQ, Chen SC. Fuzzy clustering using kernel methods. In: Proceedings of the International Conference on Control and Automation, Xiamen, China, June, 2002.

[21] Cover TM. Geometrical and statistical properties of systems of linear inequalities in pattern recognition. IEEE Trans Electron Comput 1965;14:326-34.

[22] Muller KR, Mika S, et al. An introduction to kernel-based learning algorithms. IEEE Trans Neural Networks 2001;12(2):181-202.

    [23] Wu KL, Yang MS. Alternative c-means clustering algorithms.Pattern Recognit 2002;35:2267—78.

[24] Huber PJ. Robust statistics. New York: Wiley; 1981.

[25] Dave RN, Krishnapuram R. Robust clustering methods: a unified view. IEEE Trans Fuzzy Syst 1997;5(2):270-93.

[26] Leski J. Towards a robust fuzzy clustering. Fuzzy Sets and Systems. Corrected proof, available online, 8 February 2003, in press.

[27] Chapelle O, Haffner P, Vapnik V. SVMs for histogram-based image classification. IEEE Trans Neural Networks 1999;10(5):1055-65.

[28] Kwan RKS, Evans AC, Pike GB. An extensible MRI simulator for post-processing evaluation. Visualization in biomedical computing (VBC'96). Lecture notes in computer science, vol. 1131. Springer-Verlag; 1996. pp. 135-40.

[29] Lin CT, Lee CSG. Real-time supervised structure/parameter learning for fuzzy neural network. In: Proceedings of the 1992 IEEE International Conference on Fuzzy Systems, San Diego, CA, pp. 1283-90.

[30] Masulli F, Schenone A. A fuzzy clustering based segmentation system as support to diagnosis in medical imaging. Artif Intell Med 1999;16(2):129-47.

