

[IEEE 2009 IEEE Symposium on Computational Intelligence for Image Processing (CIIP) - Nashville, TN, USA (2009.03.30-2009.04.2)]

2D Ultrasound Image Segmentation Using Graph Cuts and Local Image Features

Mehrnaz Zouqi and Jagath Samarabandu

Abstract— Ultrasound imaging is a popular imaging modality due to a number of favorable properties. However, the poor quality of ultrasound images makes them a difficult target for segmentation algorithms. In this paper, we present a semi-automatic algorithm for organ segmentation in ultrasound images, posing it as an energy minimization problem via an appropriate definition of energy terms. We use graph-cuts as our optimization algorithm and employ a fuzzy inference system (FIS) to further refine the optimization process. This refinement is achieved by using the FIS to incorporate domain knowledge in order to provide additional constraints. We show that by integrating domain knowledge via the FIS, the accuracy is improved significantly, so that further manual refinement of the object boundary is often unnecessary. Our algorithm was applied to detect prostate and carotid artery boundaries in clinical ultrasound images and demonstrates the success of the proposed approach.

I. INTRODUCTION

The segmentation of anatomic structures is an essential first stage of most medical image analysis tasks such as registration, labeling, and motion tracking [1]. Performing this segmentation task manually is time consuming and tedious. The results are heavily dependent on the experience of the observer and are consequently variable between observers (inter-observer variability) and even within an observer performing the same task at different times (intra-observer variability) [2]. Hence, automating the boundary detection process with minimal manual involvement is of paramount interest.

Among the different imaging modalities, ultrasound imaging is broadly used in clinical applications due to its simplicity, flexibility, portability, cost effectiveness, harmlessness and several other advantages. This has led to increased use of ultrasound not only in its traditional area of diagnosis, but also in emerging areas such as image-guided intervention and therapy [3]. However, there are characteristic artefacts, such as a poor signal-to-noise ratio (SNR) and speckle noise with nonzero correlation over relatively large distances, which complicate the segmentation task [4]. These artefacts can result in missing boundaries and low contrast between areas of interest. In this paper, we present a semi-automatic algorithm to segment 2D ultrasound images as accurately as possible and with the least user interaction. We have used the interactive graph-cuts framework of Boykov and Jolly [5] for all the favorable properties it

Mehrnaz Zouqi and Jagath Samarabandu are with the Department of Electrical and Computer Engineering, The University of Western Ontario, London, Ontario, Canada (email: {mzouqi, jagath}@uwo.ca).

has. However, this framework tends not to perform well in boundary detection on ultrasound images due to the poor quality of this imaging modality. The goal of our research is to make the graph-cuts framework more suitable for organ segmentation in ultrasound images. To achieve this goal, we performed three main tasks. Firstly, we added a preprocessing step to enhance the ultrasound image quality for better graph-cuts performance. Secondly, our initialization step differs from the usual graph-cuts initialization [5], [6], [7]. Finally, we automated the editing part of the framework to require less user interaction. The second section of the paper reviews the graph-cuts algorithm. In the third section, the details of our algorithm are described. In the fourth section, the evaluation of the proposed algorithm is discussed and the results are demonstrated visually and quantitatively. We conclude in the fifth section by summarizing the strengths of the proposed algorithm, with a discussion of situations in which the algorithm may fail and future directions.

II. GRAPH-CUTS ALGORITHM

As Fig. 1 shows, an undirected graph G = 〈ν, ε〉 is defined as a set of nodes (vertices ν) and a set of undirected edges (ε) that connect these nodes. Each edge in the graph is assigned a non-negative cost. There are also two special nodes, called the source and sink terminals. A cut on this graph partitions the nodes into two disjoint sets such that the terminals are separated. The cost of a cut is defined as the sum of the costs of all severed edges, and the minimum cut is the cut with the smallest cost. There are numerous algorithms for finding this minimum cut; in this paper, we have used the max-flow/min-cut algorithm of [7] to implement our segmentation method. According to the max-flow/min-cut theorem, the minimum cut can be computed efficiently by finding the maximum flow between the two terminals. The max-flow algorithm gradually increases the flow sent from the source S to the sink T along the edges in G, given their costs (capacities). Upon termination, the maximum flow saturates the graph, and the saturated edges correspond to the minimum cost cut on G, giving us an optimal cut [6].
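As a concrete illustration of the max-flow/min-cut duality described above, here is a minimal Edmonds-Karp sketch on an adjacency-matrix graph (a generic textbook construction, not the optimized algorithm of [7]); the nodes still reachable from the source in the final residual graph form the source side of the minimum cut.

```python
from collections import deque

def max_flow_min_cut(capacity, s, t):
    """Edmonds-Karp max-flow. Returns (max flow value, source side of the min cut)."""
    n = len(capacity)
    res = [row[:] for row in capacity]  # residual capacities
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and res[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break  # no augmenting path left: the flow is maximum
        # find the bottleneck capacity along the path, then augment
        bottleneck, v = float('inf'), t
        while v != s:
            bottleneck = min(bottleneck, res[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            res[parent[v]][v] -= bottleneck
            res[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck
    # saturated edges separate s from t: nodes reachable from s form one cut side
    side, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if v not in side and res[u][v] > 0:
                side.add(v)
                q.append(v)
    return flow, side
```

On a small graph, the value of the maximum flow equals the cost of the minimum cut, as the theorem states.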

For the purpose of image segmentation, the nodes of the graph represent pixels (or voxels), and the source and sink terminals represent the object and the background respectively. The costs of the horizontal edges (n-links, connecting neighboring pixels) represent discontinuity penalties, and the costs of the vertical edges (t-links, connecting pixels to the terminals) represent regional penalties. A minimum cost cut on this graph generates a

978-1-4244-2760-4/09/$25.00 ©2009 IEEE


segmentation that is optimal in terms of the properties that are built into the edge costs [5]. The segmentation problem can be interpreted as a labeling problem, because we assign a label from the set L = {object, background} to each pixel. The labeling problem can be formulated in terms of energy minimization, in which we look for the labeling A of the pixels that minimizes the energy function E(A), composed of the regional and boundary terms R(A) and B(A):

E(A) = λB(A) + R(A) (1)

where the following equations describe the regional and boundary terms for an M × N image, with A_p and A_q the labels of pixels p and q:

R(A) = \sum_{p=1}^{MN} R_p(A_p) \quad (2)

B(A) = \sum_{\{p,q\} \in N} B_{\{p,q\}} \, \delta(A_p, A_q) \quad (3)

and

\delta(A_p, A_q) = \begin{cases} 1 & \text{if } A_p \neq A_q \\ 0 & \text{otherwise} \end{cases} \quad (4)

The coefficient λ ≥ 0 determines the relative importance of the region properties versus the boundary properties; for example, if λ is very small, only the regional term is important and the label of each pixel is independent of the others. R_p(A_p) measures how well pixel p fits its assigned label A_p, so R(A) gives the total penalty of labeling A over the individual pixels. For example, R(A) can be computed based on how the intensity of each pixel fits into known intensity histograms of the object and background; the intensities of the pixels marked by the user can be used to build these histograms [5]. For the boundary term B(A), the coefficient B_{p,q} ≥ 0 determines the penalty for a discontinuity between pixels p and q. In image segmentation, since we want the boundary to lie on the intensity edges in the image, we should design B_{p,q} to be large when p and q have similar intensities and close to zero when they are very different [5].
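For a given labeling, the energy of equations (1)-(4) can be evaluated directly. The sketch below takes the per-pixel penalty R_p and the pairwise penalty B_{p,q} as user-supplied callables (hypothetical stand-ins for the histogram- and intensity-based terms the paper uses), with a 4-neighborhood system.

```python
def segmentation_energy(labels, regional_penalty, boundary_penalty, lam):
    """E(A) = lam * B(A) + R(A) for a 2D label map, following Eqs. (1)-(4).

    labels[y][x] is 0 (background) or 1 (object); regional_penalty((y, x), label)
    and boundary_penalty((y, x), (ny, nx)) are caller-supplied callables.
    """
    h, w = len(labels), len(labels[0])
    # regional term R(A), Eq. (2): sum of per-pixel penalties
    R = sum(regional_penalty((y, x), labels[y][x])
            for y in range(h) for x in range(w))
    # boundary term B(A), Eqs. (3)-(4): pairwise penalty only where labels differ
    B = 0.0
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):  # each 4-neighbour pair counted once
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and labels[y][x] != labels[ny][nx]:
                    B += boundary_penalty((y, x), (ny, nx))
    return lam * B + R
```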

Fig. 1. A simple 2-D graph for a 3× 3 image and its minimum cut.

III. SEGMENTATION ALGORITHM

Our segmentation algorithm can be described in four different stages: preprocessing, initialization, segmentation and postprocessing. The overall operation of the proposed segmentation system is shown in Fig. 2.

Fig. 2. Overall operation of the proposed image segmentation system

A. Preprocessing

Although low-pass filtering is a common technique to reduce speckle noise, it is not suitable for ultrasound images, since low-pass filters often blur the edges and make the image unsuitable for segmentation [4]. We have instead used the "stick filter" to reduce speckle noise while enhancing edges [8]. The stick filter works as follows. Consider a small square N × N neighborhood in the image. In this neighborhood, there are 2N − 2 short line segments (no interpolation is carried out) that pass through the central pixel, each having exactly N pixels [4]. Fig. 3 illustrates the four possible line segments for a 3 × 3 neighborhood. For each of the 2N − 2 segments, the sum of the pixel values along the line is calculated, and the stick image value at the center pixel is the maximum of these 2N − 2 ray sums. This step is repeated for all pixels in the image [4].
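The stick filter can be sketched for the N = 3 case of Fig. 3, where the 2N − 2 = 4 sticks are simply the horizontal, vertical and two diagonal lines through the center; border handling and the general stick construction are simplified here.

```python
def stick_filter_3(img):
    """Stick filter for the N = 3 case of Fig. 3 (2N - 2 = 4 orientations).

    At each interior pixel, the output is the maximum over the four stick sums
    through the centre; border pixels are left unchanged in this sketch.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    # offsets of the two non-centre pixels of each stick
    sticks = (((0, -1), (0, 1)),    # horizontal
              ((-1, 0), (1, 0)),    # vertical
              ((-1, -1), (1, 1)),   # main diagonal
              ((-1, 1), (1, -1)))   # anti-diagonal
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = max(img[y + a][x + b] + img[y][x] + img[y + c][x + d]
                            for (a, b), (c, d) in sticks)
    return out
```

A bright line survives (its stick sum dominates), while uncorrelated speckle is not reinforced; the paper's N = 11 version follows the same pattern with 20 longer sticks.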


Fig. 3. Four possible orientations of a stick with the length of three

Fig. 4 shows the result of applying the stick filter to a prostate image. As can be seen clearly, the contrast (image gradient) at the edges is increased, while the noise is decreased.

Fig. 4. The result of applying 11-pixel long sticks on a prostate image

As the stick length increases, the speckles in the image are blurred while the prostate edges are more clearly emphasized. This is because the effect of speckle is decorrelated at large distances and averages out as the stick length increases. However, extending the stick length beyond the actual edges degrades the SNR by including more noise pixels along the stick templates [4], and it also increases the computation time. We therefore selected a stick length of N = 11.

B. Initialization

In the traditional initialization method of graph-cuts, the user marks some object pixels in red and some background pixels in blue; the expected intensities of object and background are then calculated from these pixels and used in the formulation of the regional term [5], [6], [7]. In the case of ultrasound images, this initialization makes the final result sensitive to the initial user selection. To reduce this sensitivity, we propose a new initialization method, whose basic idea was adapted from Ladak et al. [9]. The method requires the user to select a few points near the boundary (in our experiments we select four points, but the segmentation result is largely independent of the number of user points or their location on the boundary). Cubic interpolation between these points generates the intermediate points of an initial curve. Two new curves are then generated inside and outside this initial curve by moving the user points perpendicular to the initial curve, 20 pixels inward and 50 pixels outward (values depend on the image size and can be adjusted as desired), and applying the same cubic interpolation to the displaced points. These curves are shown in Fig. 5. As long as these curves are generated in the correct regions, the graph-cuts algorithm can find a global solution within the region between them, and the postprocessing step is able to fix any small local imperfections.
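The displacement of the points perpendicular to the initial curve can be sketched as a normal offset of a closed polygon; this illustration displaces vertices along estimated normals and omits the cubic interpolation step (a counter-clockwise contour is assumed, and the 20/50-pixel distances are passed in by the caller).

```python
import math

def offset_points(points, distance):
    """Move each vertex of a closed contour along its outward normal.

    points: list of (x, y) vertices in counter-clockwise order.
    distance: pixels to move; positive = outward, negative = inward.
    The normal at a vertex is estimated from the tangent between its two
    neighbours. A sketch only; cubic interpolation is applied separately.
    """
    n = len(points)
    out = []
    for i in range(n):
        (x0, y0), (x1, y1) = points[i - 1], points[(i + 1) % n]
        tx, ty = x1 - x0, y1 - y0               # tangent from the two neighbours
        norm = math.hypot(tx, ty) or 1.0
        nx, ny = ty / norm, -tx / norm          # outward normal for a CCW contour
        px, py = points[i]
        out.append((px + distance * nx, py + distance * ny))
    return out
```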

Fig. 5. Initial user points and initial generated curves

Note: if the external region enters the black area, those pixels are excluded automatically by a fan-shaped binary mask image, as shown in Fig. 6.

Fig. 6. Binary mask image

The internal and external curves are used as object and background hard constraints, respectively, for the graph-cuts algorithm. With this approach, the graph-cuts algorithm operates only within a narrow band between the internal and external curves instead of the whole image, which results in a substantial gain in computational efficiency.

C. Segmentation

We have used the graph-cuts algorithm with the following constraints for segmentation. The hard constraints are the two curves generated as described in the previous section: the internal contour is used as the object hard constraint and the external one as the background hard constraint. Typically, the weights of the t-links that connect hard-constraint pixels to the source and sink (s and t) nodes are set to infinity and zero. To use memory more efficiently, we replaced the infinity as follows: in the function that calculates the n-link weights, we record the maximum sum of n-link weights over all pixels; the t-link weights for hard constraints need only be greater than this number (even one unit greater is enough). For the soft constraints, the boundary term is as follows:

W = \lambda e^{-(dI/\sigma)^2} \quad (5)

Referring to equation (5), W = λB(A), where dI is the intensity difference between neighboring pixels p and q. λ and σ are two parameters that need to be tuned: λ sets the balance between the first and second terms of the energy function E in equation (1), while σ can be estimated as camera noise [5]; a lower σ penalizes discontinuities in similar-intensity regions more strongly. We observed that the GC algorithm is not sensitive to the parameter λ for our ultrasound images; hence, we set λ to 5000. However, the performance is very sensitive to the


parameter σ. To avoid setting σ for each image individually, and with the knowledge that this parameter can be estimated as "camera noise" [5], we decided to automate the process of tuning σ. In our algorithm, the parameter σ is calculated automatically for each image as the average intensity difference over the part of the image that is not masked; for calculating the average intensity difference, eight-neighborhood connectivity is used. In the case of ultrasound images, selecting a good regional term is difficult, as it is usually based on intensity models of the object and background; because of imaging artefacts and noise, the object and background in ultrasound images do not have well-separated intensity histograms. The traditional regional term [5], [6] therefore does not work well on ultrasound images and results in incorrect boundaries. To overcome these problems, we have modified the way the graph-cuts algorithm calculates the regional term in its energy formulation as follows: the intensity histogram inside the internal initial curve and the intensity histogram of the region between the middle curve and the external curve were calculated; these histograms are shown in Fig. 7.
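The n-link weight of equation (5) and the automatic estimation of σ described above can be sketched as follows (function and variable names are illustrative; `mask[y][x] == True` marks the unmasked pixels to include):

```python
import math

def n_link_weight(ip, iq, lam, sigma):
    """Boundary weight of Eq. (5): W = lam * exp(-(dI / sigma)^2),
    where dI is the intensity difference of neighbouring pixels."""
    dI = float(ip) - float(iq)
    return lam * math.exp(-(dI / sigma) ** 2)

def estimate_sigma(img, mask):
    """sigma as the mean absolute intensity difference over the 8-neighbourhood,
    restricted to pixels where mask is True (the unmasked part of the image)."""
    h, w = len(img), len(img[0])
    total, count = 0.0, 0
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and mask[ny][nx]:
                        total += abs(img[y][x] - img[ny][nx])
                        count += 1
    return total / count if count else 0.0
```

Similar intensities give a weight close to λ (expensive to cut), while a strong edge gives a weight near zero, so the minimum cut prefers to pass through edges.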

Fig. 7. (a) Initial curves determine the internal and external regions for the regional term of the energy. (b) Intensity histogram of the internal region. (c) Intensity histogram of the external region.

Although these histograms still do not show a clear separation, we considered the first histogram to be the object histogram, and intensities higher than the last most probable intensity in the object to be background intensities (for example, in Fig. 7(c), intensities higher than 150 are considered as the background histogram). We are then able to set the weights of the t-links as the probability of a pixel's intensity belonging to the object or the background.
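Turning the two histograms into t-link weights can be sketched as below, with histograms as plain intensity-to-count mappings; splitting off the background intensities above the last most probable object intensity is assumed to have been done when building `bkg_hist` (names are illustrative).

```python
def t_link_weights(intensity, obj_hist, bkg_hist):
    """t-link weights for one pixel: the probability of its intensity under the
    (normalised) object and background histograms (dicts: intensity -> count)."""
    p_obj = obj_hist.get(intensity, 0) / max(sum(obj_hist.values()), 1)
    p_bkg = bkg_hist.get(intensity, 0) / max(sum(bkg_hist.values()), 1)
    return p_obj, p_bkg
```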

By providing the proposed hard and soft constraints, we were able to make the graph-cuts algorithm less sensitive to the selection of initial points for the segmentation of ultrasound images. Fig. 8 shows several examples of different user initializations. The small differences in the results are due to the fact that the histograms are based on the user points. In the postprocessing section, we present our effort to further reduce these small differences.

Fig. 8. The graph-cuts algorithm is not sensitive to the number and location of the initial user points. Top to bottom: three initial points on the boundary, four initial points on the boundary, three initial points inside the boundary, three initial points outside the boundary.

D. Postprocessing

In practice, no segmentation algorithm can guarantee 100% accuracy. Thus, it is often necessary to have a simple way to correct the segmentation if needed. Within the graph-cuts framework, segment editing can be done by placing additional hard constraints in incorrectly segmented image areas [6]. The postprocessing stage of our algorithm can be considered a method to automate the process of adding such hard constraints. To find the locations that need editing, we evaluate the points on the resulting graph-cuts boundary using a Fuzzy Inference System (FIS), as explained below.

We use the same method as Nanayakkara et al. [10] for evaluating the points on the boundary. For each point, we calculate three features in two small circular regions inside and outside of the boundary, as shown in Fig. 9. The line connecting the centers of these two circles is perpendicular to the boundary at that point. The radius of these circles was set empirically to r = 7, which seems to give better results.

Fig. 9. The region selection for the FIS. Adapted from Nanayakkara et al. [10]

The three features used as linguistic variables in our FIS are:

• inside histogram: the maximum of the intensity histogram function in the region of the image inside the internal circle.

• outside histogram: the maximum of the intensity histogram function in the region of the image inside the external circle.

• inside distribution: the standard deviation of the intensity distribution of the inside region.

Based on these features, our proposed FIS has three inputs and one output: insidehist, outsidehist, insidedist and boundary. The output value indicates whether the point being evaluated is on a strong boundary or not. The input membership functions of the FIS are selected adaptively for each image as follows. MinDist and MaxDist are the minimum and maximum of insidedist, calculated for each region; MinInt is the minimum of insidehist, and MaxInt is the maximum of outsidehist. The membership functions (MFs) of the two inputs insidehist and outsidehist are Gaussian curves with variance (σ) and mean (µ) calculated for the object (obj), unknown and background (bkg) areas as follows:

obj: σ = (MaxInt − MinInt)/10, µ = MinInt + (MaxInt − MinInt)/20
unknown: σ = 5(MaxInt − MinInt)/40, µ = (MaxInt + MinInt)/2
bkg: σ = (MaxInt − MinInt)/10, µ = MaxInt − (MaxInt − MinInt)/20

The MFs of the third input, insidedist, are also Gaussian curves with variance (σ) and mean (µ), calculated for the quality measures "good", "unknown" and "bad" as follows:

good: σ = (MaxDist − MinDist)/10, µ = MinDist + (MaxDist − MinDist)/20
unknown: σ = 5(MaxDist − MinDist)/40, µ = (MaxDist + MinDist)/2
bad: σ = (MaxDist − MinDist)/10, µ = MaxDist − (MaxDist − MinDist)/20
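The adaptive membership-function parameters above can be computed directly; this sketch covers the two intensity inputs (the insidedist input uses the same formulas with MinDist and MaxDist), together with a standard Gaussian membership function.

```python
import math

def gaussian_mf(x, sigma, mu):
    """Gaussian membership value exp(-(x - mu)^2 / (2 * sigma^2))."""
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def intensity_mf_params(min_int, max_int):
    """(sigma, mu) of the obj / unknown / bkg MFs for the insidehist and
    outsidehist inputs, exactly as listed in the text."""
    span = max_int - min_int
    return {
        'obj':     (span / 10.0, min_int + span / 20.0),
        'unknown': (span * 5.0 / 40.0, (max_int + min_int) / 2.0),
        'bkg':     (span / 10.0, max_int - span / 20.0),
    }
```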

The output MFs are three constant triangles for inside, unknown and outside. The membership functions for a sample ultrasound prostate image are shown in Fig. 10.

Note: these formulations were obtained empirically by analyzing several clinical images of the prostate and carotid artery, and are based on the fact that in ultrasound images the organ of interest is usually darker than the background and its intensity distribution is concentrated at low values.

Fig. 10. Inputs and output membership functions of the FIS

The proposed FIS contains four fuzzy rules:
1) IF insidehist is obj AND outsidehist is obj AND insidedist is good THEN boundary is inside.
2) IF insidehist is bkg AND outsidehist is bkg AND insidedist is bad THEN boundary is outside.
3) IF insidehist is unknown AND outsidehist is unknown AND insidedist is unknown THEN boundary is unknown.
4) IF insidehist is bkg AND outsidehist is obj AND insidedist is unknown THEN boundary is outside.

The output of the FIS is a number between −2 and 2 for each point. We have to find two thresholds (Th1, Th2) for the output, such that any output less than Th1 indicates a point inside the boundary and any output higher than Th2 indicates a point outside the boundary.

Fig. 11(a,b) shows the performance of the FIS on a sample image with Th1 = −0.9 and Th2 = 0. The green marks are the points found by the FIS to be on the inside of the boundary, and the yellow marks are the points found to be on the outside. These thresholds do not need to be set very carefully, as any mis-classifications will be filtered out in the next step, as described below.

As the result of the FIS may not be 100% correct due to artefacts and noise, once a preliminary classification of the boundary points is obtained, we re-evaluate the weak boundary points by computing contrast = mean(I_out)/mean(I_in) along the line perpendicular to the boundary at each point. For the yellow points this computation is done towards the inside, and for the green ones towards the outside of the boundary. The points that move toward stronger edges are considered to be the weak boundary points. In this way, we are able to filter out the points mis-classified by the FIS. Fig. 11(c,d) shows how the method works for the sample image.

Fig. 11. (a) FIS performance: Th1 = −0.9. (b) FIS performance: Th2 = 0. (c) Points start moving toward stronger boundaries. (d) Points with more than four moving steps are kept. (e) Added hard constraints.

Finally, these points can be used to add appropriate hard constraints. For the yellow marked points, we add a diameter-length line along the boundary as a background hard constraint, and for the green marked points, we add a diameter-length line along the boundary as an object hard constraint. The new hard constraints for the sample image are shown in Fig. 11(e). Note: the width of the hard constraints is chosen to be 9 pixels.

Once the new hard constraints are determined, we apply graph-cuts again to recompute a global segmentation that satisfies them. Fig. 12 shows the final result for our sample image. As the figure shows, after adding the new hard constraints, the resultant blue curve is closer to the manual red curve drawn by the expert.

IV. RESULTS

We applied our proposed algorithm to segment the prostate boundary and the carotid artery walls. The clinical images were obtained from the Imaging Research Laboratory at the Robarts Research Institute (courtesy of Dr. Aaron Fenster). Evaluation of the proposed algorithm was carried out by comparing the segmented contours with contours drawn by an expert. However, as mentioned in the introduction, manual segmentation suffers from several drawbacks, and considering manual segmentation as a "gold standard" may

Fig. 12. (a) The blue curve is the GC-segmented boundary; the red curve is the expert's segmentation. (b) The blue curve is the final result of the proposed algorithm and the red curve is the manual segmentation done by an expert.

not always be an accurate way of evaluating a segmentation algorithm. Two types of metrics were used to evaluate the performance of the algorithm: distance-based metrics and area-based metrics [11], [9], [10].

A. Distance-based metrics

Consider the algorithm-segmented contour (A) with vertices {a_i : i = 1...K} and the manually-segmented contour (M) with vertices {m_n : n = 1...N}. To obtain an accurate measure, both contours were first linearly interpolated to make the vertices 1 pixel apart. The distance between a vertex a_i of contour A and the contour M is then defined by

d(a_i, M) = \min_n \|a_i - m_n\| \quad (6)

Based on this definition, three parameters were calculatedfor each image:

• MAD, the mean absolute difference, which represents the mean segmentation error for each image:

MAD = \frac{1}{K} \sum_{i=1}^{K} d(a_i, M) \quad (7)

• MAXD, the maximum difference, which represents the maximum segmentation error for each image:

MAXD = \max_{i \in [1,K]} \{ d(a_i, M) \} \quad (8)

• PC, the percentage of vertices a_i whose distance from M is less than 5 pixels. This parameter shows the percentage of the algorithm-segmented contour that can be considered "very close" to the manually-segmented contour:

PC = \frac{|\{ a_i \in A : d(a_i, M) < 5\ \text{pixels} \}|}{K} \quad (9)
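The distance-based metrics of equations (6)-(9) can be sketched as follows, with each contour given as a list of (x, y) vertices that is assumed to be already resampled to 1-pixel spacing:

```python
import math

def distance_metrics(A, M, close_px=5.0):
    """MAD (Eq. 7), MAXD (Eq. 8) and PC (Eq. 9, in percent) between an
    algorithm contour A and a manual contour M (lists of (x, y) vertices)."""
    def d(a):  # Eq. (6): distance from vertex a to the closest vertex of M
        return min(math.hypot(a[0] - m[0], a[1] - m[1]) for m in M)
    dists = [d(a) for a in A]
    mad = sum(dists) / len(dists)
    maxd = max(dists)
    pc = 100.0 * sum(x < close_px for x in dists) / len(dists)
    return mad, maxd, pc
```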

B. Area-based metrics

Area-based metrics compare the area enclosed by the algorithm-segmented contour with that enclosed by the manually-segmented contour. The different regions between the two contours are shown in Fig. 13.

Based on these regions, two parameters were calculated for each image:


Fig. 13. TP: True Positive, FP: False Positive, TN: True Negative, FN: False Negative

• AO, area overlap (or accuracy), measures the proportional area correctly identified by the algorithm:

AO = \frac{TP}{TP + FN + FP} \times 100\% \quad (10)

• AD, area difference (or error), measures the proportional area incorrectly identified by the algorithm:

AD = \frac{FP + FN}{TP + FN + FP} \times 100\% \quad (11)
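The area-based metrics of equations (10) and (11) follow directly from pixel counts of the Fig. 13 regions:

```python
def area_metrics(tp, fp, fn):
    """Area overlap AO (Eq. 10) and area difference AD (Eq. 11), in percent,
    from pixel counts of the TP, FP and FN regions of Fig. 13."""
    denom = tp + fn + fp
    return 100.0 * tp / denom, 100.0 * (fp + fn) / denom
```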

The algorithm was applied to 121 ultrasound images of the prostate from six different patients. Fig. 14 shows the output of the algorithm on a sample slice for each patient. The numerical results for these sample images are shown in Table I.

TABLE I
EVALUATION METRICS (MAD, MAXD, PC, AO, AD) FOR SIX SAMPLE PROSTATE IMAGE SLICES OF SIX DIFFERENT PATIENTS

            MAD (pixels)  MAXD (pixels)  PC (%)         AO (%)        AD (%)
Fig. 14(a)  2.51          7.80           88.50          94.35         5.65
Fig. 14(b)  3.28          9.84           87.24          92.92         7.08
Fig. 14(c)  5.89          12.39          38.02          74.54         25.46
Fig. 14(d)  3.82          18.07          77.05          91.40         8.60
Fig. 14(e)  5.62          13.64          42.43          85.81         14.19
Fig. 14(f)  3.49          13.38          76.91          90.71         9.29
Mean (std)  4.10 (1.35)   12.52 (3.52)   68.35 (22.37)  88.28 (7.33)  11.71 (7.33)

Fig. 14. Prostate ultrasound image slice, patient 1: (a) initial user points, (b) graph-cuts segmentation result (blue curve) and expert's outline (red curve), (c) FIS boundary evaluation results, (d) final boundary (blue curve) and expert's outline (red curve)

We also calculated these metrics for the entire set of theimages. The results of this evaluation is shown in table II.This Table shows the mean and standard deviation (shown in

TABLE II
MEAN (STANDARD DEVIATION) OF EVALUATION METRICS FOR EACH GROUP OF IMAGES (PATIENTS 1-6) AND THE ENTIRE SET OF IMAGES, BEFORE AND AFTER THE POSTPROCESSING (PP) STAGE

                     MAD          MAXD          PC (%)         AO (%)        AD (%)
P1 (before PP)       4.77 (1.62)  18.47 (8.54)  69.24 (10.84)  89.62 (3.24)  10.37 (3.24)
P1 (after PP)        3.71 (1.50)  14.62 (7.36)  77.03 (11.10)  91.78 (3.20)  8.21 (3.20)
P2 (before PP)       3.90 (1.82)  15.42 (8.77)  77.28 (13.73)  91.96 (3.53)  8.03 (3.53)
P2 (after PP)        3.37 (1.00)  12.06 (4.30)  79.89 (11.56)  92.91 (2.05)  7.08 (2.05)
P3 (before PP)       5.89 (1.39)  17.07 (6.58)  49.05 (10.56)  81.87 (4.08)  18.12 (4.08)
P3 (after PP)        5.57 (1.32)  14.94 (5.62)  49.02 (9.96)   82.60 (3.81)  17.39 (3.81)
P4 (before PP)       8.40 (1.85)  27.83 (7.51)  42.51 (9.13)   80.64 (4.34)  19.35 (4.34)
P4 (after PP)        7.48 (1.80)  23.87 (6.32)  46.82 (10.80)  82.20 (4.77)  17.79 (4.77)
P5 (before PP)       8.22 (1.65)  26.99 (7.64)  43.96 (8.77)   80.11 (3.77)  19.88 (3.77)
P5 (after PP)        7.23 (1.41)  23.10 (7.12)  46.17 (7.37)   82.24 (2.90)  17.75 (2.90)
P6 (before PP)       8.45 (1.54)  24.88 (6.03)  36.11 (10.21)  76.52 (4.25)  23.47 (4.25)
P6 (after PP)        7.90 (1.53)  23.39 (6.86)  39.29 (11.16)  77.89 (3.74)  22.10 (3.74)
Overall (before PP)  6.81 (2.36)  22.06 (8.68)  50.70 (17.28)  82.58 (6.45)  17.41 (6.45)
Overall (after PP)   6.10 (2.24)  19.08 (7.87)  53.91 (18.12)  84.05 (6.32)  15.94 (6.32)

parentheses) of the evaluation metrics for each patient (P1-P6). For each group, we have included the results before and after the post-processing stage. The last two rows show the mean and standard deviation of the evaluation metrics over the entire set of 121 images, before and after the post-processing stage. The numbers show that the FIS post-processing stage consistently improved the results. The improvement is small because the initial user points were selected carefully and thus already provided good constraints for the graph-cuts algorithm; once the graph-cuts output is good enough, there is little left for the FIS to improve.

We also applied our algorithm to ultrasound images of the carotid artery. Since we did not have manual segmentations for all of the carotid artery images, we could not compute the evaluation metrics for them. However, the algorithm's ability to find the carotid artery walls can be assessed visually. Fig. 15 shows a sample slice of these images together with the boundary produced by the proposed algorithm.

Fig. 15. Carotid artery ultrasound image slice, patient 1: (a) initial user points, (b) graph-cuts segmentation results (blue curve) and expert's outline (red curve), (c) FIS boundary evaluation results, (d) final boundary (blue curve) and expert's outline (red curve)

Note: all parts of the software except the graph construction and the max-flow algorithm have been implemented in MATLAB. The graph construction has been implemented


using C++, and the optimal cut is then computed with the publicly available max-flow library [7]. The results were generated on a computer with an AMD Athlon 2.2 GHz processor and 1 GB of RAM. MATLAB takes around 40 seconds to prepare all the weights for the t-links and hard constraints and to call the max-flow algorithm, which then takes around 3 seconds to compute the optimal cut. The FIS is also implemented in MATLAB and takes around 10 seconds to complete. The computational efficiency could be improved significantly by implementing all parts in C++.
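The segmentation itself reduces to a minimum s-t cut on a graph whose t-links encode the regional term and whose n-links encode the boundary term. The paper uses Boykov and Kolmogorov's max-flow library [7]; purely as an illustration of the underlying computation (not that library's algorithm), the sketch below runs Edmonds-Karp on a hypothetical two-pixel graph, where 'S'/'T' are the object/background terminals and the capacities are made-up t-link and n-link weights:

```python
from collections import deque

def min_cut(cap, s, t):
    """Edmonds-Karp max-flow; returns the set of nodes on the source
    side of the minimum s-t cut. `cap` maps (u, v) -> edge capacity."""
    res = dict(cap)   # residual capacities
    adj = {}          # undirected adjacency for the residual graph
    for (u, v) in cap:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
        res.setdefault((v, u), 0)
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj.get(u, []):
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        # collect the path edges, then push the bottleneck flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[e] for e in path)
        for (u, v) in path:
            res[(u, v)] -= bottleneck
            res[(v, u)] += bottleneck
    # source side of the cut = nodes still reachable in the residual graph
    return set(parent)

# Toy graph: pixel p1 leans "object", p2 leans "background";
# a symmetric n-link of weight 2 couples the two pixels.
cap = {('S', 'p1'): 5, ('S', 'p2'): 1,
       ('p1', 'T'): 1, ('p2', 'T'): 5,
       ('p1', 'p2'): 2, ('p2', 'p1'): 2}
```

Here `min_cut(cap, 'S', 'T')` labels p1 as object and p2 as background, i.e. the cut severs the weak t-links rather than the n-link. The BK algorithm of [7] computes the same cut but is much faster in practice on grid graphs.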

V. CONCLUSIONS AND FUTURE WORK

Recent advances in transducer design, spatio-temporal resolution, digital systems and manufacturing have significantly improved the quality of ultrasound images [12]. These improvements have led to increased use of ultrasound not only in its traditional area of application, diagnosis, but also in image-guided surgery and therapy [3]. There is therefore a re-emergence of interest in performing image segmentation on ultrasound images [3]. However, this task is still challenging due to intrinsic characteristics of ultrasound images such as poor signal-to-noise ratio (SNR) and speckle noise [4]. In this paper, we proposed a semi-automatic segmentation algorithm which performs the segmentation task on ultrasound images with a high success rate. Because it uses a global optimization algorithm, the graph-cuts method, the algorithm has the advantage of escaping local minima, a problem for other methods such as active contours and edge-based methods.

Interactive graph-cuts, as proposed by Boykov and Jolly [5], is a strong method by itself for optimal boundary segmentation, but on ultrasound images it tends to fall short. Our goal in this paper is to improve this method for ultrasound images by performing a preprocessing step that enhances image quality and by introducing a new initialization method, which is beneficial in terms of computational efficiency as it limits the search area for the graph-cuts algorithm. Our proposed method for calculating the regional term is also more reliable and stable. Furthermore, we have automated the segment-editing step, which was originally done by the user.

The main drawback of the proposed algorithm is that if the initial internal curve enters the background area, or the initial external curve enters the object area, the algorithm does not perform well, mostly because the cost function of the minimization becomes inaccurate. Our experiments show that such an initial error cannot be overcome by the subsequent graph-cuts stage with the additional hard constraints provided by the FIS. This is mostly because the second stage of optimization is designed to correct small errors and relies on the initial segmentation being adequate. Although, as chart 2 shows, it is possible to re-evaluate the boundary points repeatedly using the FIS, this may not always work, as the FIS only affects the local neighborhood and may fall into local minima if the boundary is far from the real intensity edges.

As part of our future work, we would like to automate the initialization stage based on image features. A fully automatic segmentation algorithm is desirable as it can speed up the segmentation process, especially for 3-D image data, and speed is an important factor in most clinical applications. Additionally, we would like to extend the method to 3-D to segment volume data, by exploiting the ability of the graph-cuts algorithm to work in higher-dimensional spaces.

Since our segmentation algorithm is designed to work off-line, we were not concerned about its running time. However, many new applications demand on-line segmentation. We believe that the computational efficiency of the proposed algorithm can be increased in several ways, such as faster implementations and parallelization. Another possibility for speeding up the algorithm is to use active contours instead of the fuzzy inference system: the global solution of the graph-cuts algorithm can be used as the initial contour for an active contour algorithm. The most important advantage of this approach is its speed, as active contour algorithms are much faster than the FIS. Finally, we would like to test our algorithm with more reliable and accurate evaluation techniques than using manual segmentation as a "gold standard".

VI. ACKNOWLEDGEMENT

This work was financially supported by Precarn Inc., the Natural Sciences and Engineering Research Council of Canada (NSERC) and the University of Western Ontario.

REFERENCES

[1] T. McInerney, D. Terzopoulos, "Deformable models in medical image analysis: a survey," Med. Imag. Analysis, 1(2):91-108, 1996.

[2] F. Shao, K. V. Ling, W. S. Ng, R. Y. Wu, "Prostate boundary detection from ultrasonographic images," J. Ultrasound Med., 22(6):605-623, June 2003.

[3] J. A. Noble, D. Boukerroui, "Ultrasound image segmentation: a survey," IEEE Trans. Med. Imag., 25(8):987-1010, August 2006.

[4] S. D. Pathak, V. Chalana, D. R. Haynor, Y. Kim, "Edge guided boundary delineation in prostate ultrasound images," IEEE Trans. Med. Imag., 19(12):1211-1219, Dec. 2000.

[5] Y. Boykov, M. P. Jolly, "Interactive graph-cuts for optimal boundary and region segmentation of objects in N-D images," Proceedings of Int. Conf. Computer Vision, 1:105-112, July 2001.

[6] Y. Boykov, G. Funka-Lea, "Graph cuts and efficient N-D image segmentation," Int. J. Computer Vision, 70(2):109-131, 2006.

[7] Y. Boykov, V. Kolmogorov, "An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision," IEEE Trans. PAMI, 26(9):1124-1137, September 2004.

[8] S. D. Pathak, P. D. Grimme, V. Chalana, Y. Kim, "Pubic arch detection in transrectal ultrasound guided prostate cancer therapy," IEEE Trans. Med. Imag., 17:762-771, Oct. 1998.

[9] H. M. Ladak, Y. Wang, D. B. Downey, D. A. Steinman, A. Fenster, "Prostate boundary segmentation from 2D ultrasound images," Med. Phys., 27(8):1777-1788, Aug. 2000.

[10] N. D. Nanayakkara, J. Samarabandu, A. Fenster, "Prostate segmentation by feature enhancement using domain knowledge and adaptive region based operations," Phys. Med. Biol., 51:1831-1848, March 2006.

[11] B. Chiu, G. H. Freeman, M. M. A. Salma, A. Fenster, "Prostate segmentation algorithm using dyadic wavelet transform and discrete dynamic contour," Phys. Med. Biol., 49(21):4943-4960, October 2004.

[12] S. L. Bridal, J. M. Correas, A. Saied, P. Laugier, "Milestones on the road to higher resolution, quantitative and functional ultrasonic imaging," Proc. IEEE, 91(10):1543-1561, Oct. 2003.