Fourth International Workshop on Advanced Computational Intelligence (IWACI 2011), Wuhan, China, October 19-21, 2011
An Investigation of Mass Diagnosis in Mammograms with Random Forest
Abstract—Correct mass diagnosis in mammograms can reduce unnecessary biopsies without increasing false negatives. In this paper, we investigate the use of a random forest classifier for the classification of masses with geometry and texture features. Before feature extraction, the mass regions must be segmented. Starting from an initial contour provided by a radiologist, level set segmentation deforms the contour to obtain the final segmentation. The proposed level set method integrates both region information and boundary information and, with a level set regularization term, achieves accurate segmentation. Modified Hu moments are used to characterize shape, and GLCM (Gray Level Co-occurrence Matrix) features are used to characterize texture. Random forest, a recently proposed ensemble learning method, is investigated for the first time for mass classification and is compared with SVM (Support Vector Machine). Mammography images from DDSM were used for the experiments. The new method based on level set segmentation and these features achieved an Az value of 0.86 with SVM and 0.83 with random forest. The experimental results show that random forest is a promising method for the diagnosis of masses.
I. INTRODUCTION

Breast cancer causes many deaths around the world each year. An effective way to reduce these deaths is to treat the disease early. The development of medical imaging techniques has provided doctors with different ways to diagnose breast cancer earlier. Among the noninvasive ways to diagnose breast cancer, such as MRI (magnetic resonance imaging) and ultrasound, mammography is a cheap and effective method. It has been shown that mammographic screening can lower the death rate by more than 30%. However, it has also been found that only about 30% of patients who undergo biopsy after mammographic screening have truly malignant masses. That is, many patients suffer unnecessary, painful biopsies due to the wrong diagnosis of masses; computer-aided diagnosis has the potential to help doctors improve diagnostic accuracy. Many researchers have investigated the problem of
classifying masses into malignant or benign classes. Rangayyan et al. investigated the classification of masses based on morphological features with manually delineated boundaries. Features quantifying the extent of the spiculated nature of the boundary and the degree of narrowness of the spicules were extracted. Combined with a global measure of boundary complexity, they
Manuscript received July 11, 2011. This work was supported by the NSF of Hubei Province, China (No. 2008CDB345), the Educational Commission of Hubei Province (No. D20091102), and the Educational Commission of Hubei Province (No. Q20101101).
Jun Liu, Jianxun Chen, Xiaoming Liu, and J. Tang are with the College of Computer Science and Technology, Wuhan University of Science and Technology, Hubei, China (email: firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com). * denotes corresponding author. Tel/Fax: +86-27-68862353.
achieved an overall accuracy of 82% with Az = 0.79 on a dataset of 28 benign masses and 26 malignant tumors. Sahiner et al. utilized the rubber band straightening transform (RBST) to characterize mammographic masses as malignant or benign. After regions of interest (ROIs) containing the biopsied masses were extracted by an experienced radiologist, a clustering technique was employed to segment the mass from the background tissue. Texture features were extracted from spatial gray-level dependence matrices and run-length statistics matrices over different regions related to the segmented masses. Linear discriminant analysis with a stepwise feature selection procedure was used for classification. On a dataset of 168 mammograms, they found that features extracted from the RBST were more effective than those extracted from the original images, and an area of Az = 0.94 under the ROC curve was obtained. Eltoukhy et al. compared the wavelet and curvelet transforms for breast cancer diagnosis. Regions of interest of size 128×128 were first extracted from the MIAS dataset; each image was then decomposed using the wavelet and curvelet transforms, the 100 largest coefficients were extracted from each decomposition level, and finally a nearest neighbor classifier was used for the final classification. They found that the curvelet transform is better than the wavelet transform for the diagnosis of masses.
Classification is an important problem in the diagnosis of masses; many methods have been applied to it, such as neural networks, boosting, and support vector machines (SVM), and have achieved good performance. A problem with these methods is that the resulting predictors are hard to interpret, since they cannot reflect the importance of each variable, while for medical diagnosis such transparency is very important. Decision tree based methods, on the other hand, produce results that are easy to explain. A shortcoming of decision tree based methods is that they are sensitive to small perturbations in the learning set. Integrating bagging with decision trees can partially overcome this problem. Random forest, proposed by Breiman, is a typical method combining the random subspace method and bagging. Random forests have been used for several tasks, such as classification of hyperspectral data, Alzheimer's disease diagnosis based on single photon emission computed tomography (SPECT) data, and handwritten digit recognition, and have shown good performance.
Different features have been investigated for the classification of masses, such as GLCM (Gray Level Co-occurrence Matrix) texture features, shape features derived from the boundaries, and histogram based features. In this paper, we use both texture and geometry features. We adopted GLCM features for texture representation since their components have meaningful interpretations, which is beneficial for the diagnosis of masses. For the shape representation, we used a modified Hu moment representation, which is very efficient compared to traditional Hu moments. For the classification step, we introduce the random forest classifier into the mass diagnosis problem and compare it with SVM. To our knowledge, this is the first time the random forest classifier has been applied to the mass classification problem.

978-1-61284-375-9/11/$26.00 ©2011 IEEE
II. MASS SEGMENTATION WITH LEVEL SET
Active contour methods (snakes) are very popular for image segmentation. A contour deforms over the image domain and achieves the segmentation by minimizing an energy functional, which usually consists of an internal energy and an external energy. The internal energy controls the smoothness of the contour, and the external energy attracts the contour to the features of interest (e.g., object boundaries). The method can be implemented either explicitly or implicitly. In the explicit approach, the contour is represented with discrete points whose positions are modified during evolution; in the implicit approach, the contour is represented with a level set function in a higher dimension, and the whole level set function evolves during the process. Generally speaking, the explicit approach is more efficient but less flexible, while the contrary applies to the implicit approach.
In our previous work, we used the level set method for the segmentation of masses and achieved good performance. Following that work, the energy functional used to evolve the contour is defined as:
E(\phi, f_1, f_2) = \lambda_1 \int \Big[ \int K_\sigma(x-y)\,|I(y)-f_1(x)|^2\, H(\phi(y))\, dy \Big]\, dx
  + \lambda_2 \int \Big[ \int K_\sigma(x-y)\,|I(y)-f_2(x)|^2\, \big(1-H(\phi(y))\big)\, dy \Big]\, dx
  + \nu_1 \int |I(x)-c_1|^2\, H(\phi(x))\, dx
  + \nu_2 \int |I(x)-c_2|^2\, \big(1-H(\phi(x))\big)\, dx
  + \mu \int |\nabla H(\phi(x))|\, dx
  + \nu \int g\, \delta(\phi(x))\, |\nabla\phi(x)|\, dx
  + w \int \tfrac{1}{2}\big(|\nabla\phi(x)|-1\big)^2\, dx        (1)
The first four terms in the energy functional are the global and local data fitting energies; the fifth term penalizes the length of the contour; the sixth term involves the edge function; and the last term is a regularization that keeps the level set function close to a signed distance function during curve evolution, thus avoiding numerical instability and speeding up the computation. For detailed explanations of the parameters and notation, please refer to the cited work. Figure 1 shows an example of our segmentation result on an ROI containing a malignant mass; it can be seen that our method accurately locates the mass boundary.
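To make the evolution concrete, the following is a minimal Python sketch of a two-region evolution driven only by the global fitting terms of the energy above; the local fitting, length, edge, and distance-regularization terms are omitted, so this illustrates the region-competition idea rather than the full method. The smoothed Heaviside and Dirac delta functions are the usual arctangent approximations.

```python
import numpy as np

def heaviside(phi, eps=1.5):
    """Smoothed Heaviside H_eps(phi), arctangent approximation."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def delta(phi, eps=1.5):
    """Smoothed Dirac delta, the derivative of heaviside() above."""
    return eps / (np.pi * (eps ** 2 + phi ** 2))

def evolve(I, phi, iters=800, dt=1.0, eps=1.5):
    """Gradient descent on the global (Chan-Vese style) fitting terms only."""
    for _ in range(iters):
        H = heaviside(phi, eps)
        c1 = (I * H).sum() / max(H.sum(), 1e-12)              # mean inside
        c2 = (I * (1 - H)).sum() / max((1 - H).sum(), 1e-12)  # mean outside
        # region competition: push phi up where I is closer to c1, down otherwise
        phi = phi + dt * delta(phi, eps) * (-(I - c1) ** 2 + (I - c2) ** 2)
    return phi

# Toy example: segment a bright square starting from a circular contour.
yy, xx = np.mgrid[0:32, 0:32]
image = np.zeros((32, 32))
image[8:24, 8:24] = 1.0
phi0 = 8.0 - np.sqrt((xx - 16.0) ** 2 + (yy - 16.0) ** 2)  # signed distance to a circle
phi = evolve(image, phi0)
```

After the evolution, the zero level set of `phi` closely follows the bright region's boundary; in the full method the omitted terms additionally smooth the contour and keep `phi` well behaved numerically.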
III. TEXTURE AND GEOMETRY FEATURE EXTRACTION

It is known that a typical benign mass has a round, smooth, and well-circumscribed boundary, while a malignant tumor usually has a spiculated, rough, and blurry boundary. Thus, boundary analysis has been widely used for the benign/malignant classification of masses. Besides, the presence of a mass causes architectural distortion in the surrounding tissue of the mammogram; thus, texture features can also carry discriminative information.
It has been found that more discriminative information is located in the boundary region of the mass than in the far inner or outer regions; thus, we extract texture features in a band around the closed contour of the mass, as shown in Fig. 1. The width of the extracted ribbon is limited to 8 mm across the boundary (4 mm on each side). For the texture features, we use features computed from the GLCM (Gray Level Co-occurrence Matrix). A GLCM M is a G × G matrix whose rows and columns are indexed by the image gray levels i = 1, …, G, where G = 2^n for an n-bit image. An element M_{d,θ}(i, j) reflects the probability of occurrence of a pair of gray levels (i, j) separated by a given distance d in a specific direction θ. Several features are calculated from the GLCM matrix.
In computing the GLCM, we reduce the number of gray levels G to 16 to lower the computational cost and make the matrix more robust (avoiding many zero entries). In our study, as is common, four GLCMs are constructed by scanning each mass ribbon in the 0°, 45°, 90°, and 135° directions with the pixel distance set to 1. Seven features are calculated for each direction, thus obtaining 28 features.
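The list of the seven GLCM features did not survive in this copy of the paper. As an illustration of the construction just described, the sketch below builds a 16-level GLCM for the four standard offsets and computes five common Haralick-style features (contrast, energy, homogeneity, entropy, correlation); this feature set is illustrative and not necessarily the paper's exact seven.

```python
import numpy as np

def glcm(img, levels=16, offset=(0, 1), symmetric=True):
    """Gray-level co-occurrence matrix for one pixel offset (row, col)."""
    # quantize to `levels` gray levels
    q = (img.astype(float) - img.min()) / max(img.max() - img.min(), 1e-12)
    q = np.minimum((q * levels).astype(int), levels - 1)
    dr, dc = offset
    M = np.zeros((levels, levels))
    rows, cols = q.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                M[q[r, c], q[r2, c2]] += 1
    if symmetric:
        M = M + M.T
    return M / M.sum()  # normalize to a joint probability

def glcm_features(P):
    """A few classic Haralick-style features from a normalized GLCM."""
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    si = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sj = np.sqrt(((j - mu_j) ** 2 * P).sum())
    return {
        "contrast": ((i - j) ** 2 * P).sum(),
        "energy": (P ** 2).sum(),
        "homogeneity": (P / (1.0 + (i - j) ** 2)).sum(),
        "entropy": -(P[P > 0] * np.log(P[P > 0])).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * P).sum() / (si * sj + 1e-12),
    }

# four directions (0, 45, 90, 135 degrees) at pixel distance 1
offsets = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}

# example: features for the 0-degree offset of a toy image
img = (np.arange(64).reshape(8, 8) * 7) % 50
feats = glcm_features(glcm(img, levels=16, offset=offsets[0]))
```

Repeating `glcm_features` for all four offsets and concatenating the results gives one feature vector per ribbon, mirroring the 4 × 7 = 28 features used in the paper.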
Fig. 1. An example of feature extraction based on level set active contour segmentation. (a) A region containing a malignant mass; (b) segmentation result on the ROI; (c) extraction of normal pixels on the boundary of the mass; (d) band of pixels across the boundary of the mass extracted for texture features.
Besides the texture features, a few shape features are also utilized in our method. Hu moment invariants are widely used for shape analysis, but they are time consuming to compute since they use all the pixel values in a region. Chen proposed improved moment invariants that use only the boundary information and are therefore very efficient. Let f(x, y) be 1 over a closed and bounded region R and 0 otherwise. Define the moments
m_{pq} = \oint_C x^p y^q \, ds, \quad p, q = 0, 1, 2, 3, \ldots    (2)

where the integral is a line integral along the closed curve C and ds = \sqrt{dx^2 + dy^2}. The central moments are defined as

\mu_{pq} = \oint_C (x - \bar{x})^p (y - \bar{y})^q \, ds, \quad \bar{x} = m_{10}/m_{00}, \; \bar{y} = m_{01}/m_{00}.    (3)

The normalized central moments are defined as

\eta_{pq} = \mu_{pq} / \mu_{00}^{\gamma}, \quad \gamma = p + q + 1, \; \text{for } p + q = 2, 3, \ldots    (4)

As with Hu moments, the improved moments are defined as:

\phi_1 = \eta_{20} + \eta_{02}    (5)
\phi_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2    (6)
\phi_3 = (\eta_{30} - 3\eta_{12})^2 + (3\eta_{21} - \eta_{03})^2    (7)
\phi_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{21} + \eta_{03})^2    (8)
\phi_5 = (\eta_{30} - 3\eta_{12})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] + (3\eta_{21} - \eta_{03})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]    (9)
\phi_6 = (\eta_{20} - \eta_{02})[(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2] + 4\eta_{11}(\eta_{30} + \eta_{12})(\eta_{21} + \eta_{03})    (10)
\phi_7 = (3\eta_{21} - \eta_{03})(\eta_{30} + \eta_{12})[(\eta_{30} + \eta_{12})^2 - 3(\eta_{21} + \eta_{03})^2] - (\eta_{30} - 3\eta_{12})(\eta_{21} + \eta_{03})[3(\eta_{30} + \eta_{12})^2 - (\eta_{21} + \eta_{03})^2]    (11)

The quantities \phi_i, 1 \le i \le 7, are invariant to scaling, translation, and rotation.
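The boundary moments above can be sketched numerically by approximating the line integral over a sampled contour (segment midpoints weighted by segment length) and normalizing with \gamma = p + q + 1; only the first four invariants are computed here for brevity. This is an illustrative sketch, not the paper's implementation.

```python
import numpy as np

def boundary_hu_moments(pts):
    """First four improved (boundary-based) Hu invariants from a closed contour.

    pts: (N, 2) array of (x, y) points tracing the boundary in order.
    The line integral m_pq = ∮ x^p y^q ds is approximated with segment
    midpoints; normalization uses gamma = p + q + 1 for contour moments.
    """
    nxt = np.roll(pts, -1, axis=0)            # next point (closed contour)
    ds = np.linalg.norm(nxt - pts, axis=1)    # segment lengths
    mx, my = ((pts + nxt) / 2).T              # segment midpoints

    def m(p, q):
        return np.sum(mx ** p * my ** q * ds)

    m00 = m(0, 0)
    xb, yb = m(1, 0) / m00, m(0, 1) / m00     # centroid
    cx, cy = mx - xb, my - yb

    def eta(p, q):                            # normalized central moment
        mu = np.sum(cx ** p * cy ** q * ds)
        return mu / m00 ** (p + q + 1)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    phi1 = n20 + n02
    phi2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    phi3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    phi4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    return np.array([phi1, phi2, phi3, phi4])

# example: invariants of a sampled ellipse
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
ellipse = np.column_stack([3.0 * np.cos(t), 1.5 * np.sin(t)])
phis = boundary_hu_moments(ellipse)
```

Because the midpoint-and-length discretization commutes with rigid motions and uniform scaling, the computed invariants match (up to floating point error) for any translated, rotated, or scaled copy of the same sampled contour.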
IV. CLASSIFICATION WITH RANDOM FOREST

Various ensemble classification methods have been proposed to improve classification accuracy. In ensemble classification, several classifiers are trained and their results are combined by voting to obtain the final class label. Boosting and bagging are widely used ensemble methods.
The random forest (RF) classifier uses bagging, or bootstrap aggregating, to form an ensemble of classification tree (CART, classification and regression tree) classifiers h(x, T_k), k = 1, …, K, where the T_k are independent identically distributed random vectors and x is an input pattern. CART is a rule-based method that generates a binary tree through a recursive binary partitioning process that splits each node based on the predictors. A shortcoming of CART is that it may overfit the training samples and perform badly on unseen data. RF introduces the bagging mechanism into CART and can greatly alleviate the overfitting problem. In RF training, the algorithm creates multiple CART-like trees, each trained on a bootstrap sample of the original training data; at each node, it searches only a randomly selected subset of the input variables to determine the split. The output of the classifier is determined by a majority vote of the trees.
Let N denote the number of training cases, M the number of variables, and m (m ≤ M, usually set by the user) the number of input variables used to determine the decision at a node of the tree. As in bootstrapping, the training set for each tree is chosen by sampling N cases with replacement, and the remaining cases are used as a validation set to estimate the error of the tree. Roughly speaking, each tree is constructed as follows: a) for each node of the tree, randomly choose m variables on which to evaluate the decision at that node, and calculate the best split based on these m variables in the training set; b) each tree is fully grown and not pruned.
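The procedure above can be sketched from scratch; this toy implementation (exhaustive threshold search with Gini impurity) shows the bootstrap-plus-random-subspace mechanics and the majority vote, and is not the paper's actual implementation.

```python
import numpy as np

def gini(y):
    """Gini impurity of a label vector."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def grow_tree(X, y, m, rng):
    """Grow one unpruned CART-like tree, testing only m random features per node."""
    if len(np.unique(y)) == 1:                      # pure node -> leaf
        return ("leaf", y[0])
    feats = rng.choice(X.shape[1], size=m, replace=False)
    best = None
    for f in feats:                                 # step a): best split among m features
        for t in np.unique(X[:, f]):
            left = X[:, f] <= t
            if left.all() or not left.any():
                continue
            score = (left.sum() * gini(y[left]) +
                     (~left).sum() * gini(y[~left])) / len(y)
            if best is None or score < best[0]:
                best = (score, f, t, left)
    if best is None:                                # no usable split -> majority leaf
        vals, counts = np.unique(y, return_counts=True)
        return ("leaf", vals[np.argmax(counts)])
    _, f, t, left = best                            # step b): recurse until pure, no pruning
    return ("node", f, t, grow_tree(X[left], y[left], m, rng),
            grow_tree(X[~left], y[~left], m, rng))

def predict_tree(tree, x):
    while tree[0] == "node":
        _, f, t, l, r = tree
        tree = l if x[f] <= t else r
    return tree[1]

def random_forest(X, y, n_trees=25, m=1, seed=0):
    """Bagging: each tree sees N cases drawn with replacement."""
    rng = np.random.default_rng(seed)
    trees = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(y), len(y))   # bootstrap sample
        trees.append(grow_tree(X[idx], y[idx], m, rng))
    return trees

def predict_forest(trees, x):
    votes = [predict_tree(t, x) for t in trees]
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]               # majority vote

# toy data: two well-separated classes
X = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.15],
              [1.0, 0.9], [0.8, 1.0], [0.9, 0.85]])
y = np.array([0, 0, 0, 1, 1, 1])
forest = random_forest(X, y, n_trees=25, m=1)
```

With m = 1 each node considers a single randomly chosen feature, which is the random subspace ingredient; the ensemble vote remains accurate even though individual trees are randomized.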
The random forest method has several advantages: it can estimate which variables are important for the classification, it runs efficiently on large databases, and it has an effective method for estimating missing data. In this paper, we investigate its use for the diagnosis of masses.
V. EXPERIMENTAL RESULTS

We extracted 236 ROIs from the DDSM dataset images for the experiments. Several parameters are involved in the level set based segmentation method; their values were fixed as follows: λ₁ = λ₂ = 1.0, ν₁ = ν₂ = 2.0, μ = 0.01 × 255², ν = 1.0, w = 1.0. The termination of the level set segmentation is determined manually by the user with visual inspection. Besides the random forest based classifier, we also tested the performance of the support vector machine (SVM) for comparison.
For the RF classifier, the number of trees and the number of variables considered at each node split are important parameters. Table I lists the classification accuracies of the RF under different settings on the modified Hu moments (7 features). The overall accuracy appears insensitive to these settings. This is important, since the classifier can be run with very little human guidance in selecting parameter values. From the experiments, we can also see that RF is extremely fast: on an Intel dual-core 2.1 GHz laptop, it took less than 0.05 seconds to train and classify the data set. Similar behavior was observed on the GLCM features.
For both the SVM and RF classifiers, a few parameters must be set. The performance of the SVM depends on parameters such as the misclassification penalty C and the RBF kernel parameter γ. We searched for C and γ in [2⁻⁵, 2⁵], and the optimal values were selected with 5-fold cross-validation: the training samples are split into 5 equal-sized sets, 4 of which are used for training and the remaining 1 for testing; for each candidate C and γ, 5 classification runs are performed, and the values of C and γ with the highest average accuracy are selected for the subsequent testing. LIBSVM is used for the experiments.
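The 5-fold grid search described above can be sketched generically. Here a toy one-parameter threshold classifier stands in for the SVM (LIBSVM's API is outside this sketch); the fold construction and the parameter selection loop are the part being illustrated.

```python
import itertools
import numpy as np

def five_fold_grid_search(X, y, fit, grid, k=5, seed=0):
    """Pick the best parameter combination by k-fold cross-validation.

    `fit(Xtr, ytr, **params)` must return a predictor `f(x) -> label`.
    """
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)  # k equal-sized sets
    best_acc, best_params = -1.0, None
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        accs = []
        for i in range(k):                      # k runs: 1 fold tests, k-1 train
            test = folds[i]
            train = np.hstack([folds[j] for j in range(k) if j != i])
            predict = fit(X[train], y[train], **params)
            pred = np.array([predict(x) for x in X[test]])
            accs.append(np.mean(pred == y[test]))
        if np.mean(accs) > best_acc:            # keep highest average accuracy
            best_acc, best_params = np.mean(accs), params
    return best_params, best_acc

# Toy stand-in for the SVM: a classifier with one parameter t.
def fit_threshold(Xtr, ytr, t=0.5):
    return lambda x: int(x[0] > t)

rng = np.random.default_rng(1)
X = rng.random((100, 2))
y = (X[:, 0] > 0.5).astype(int)
params, acc = five_fold_grid_search(X, y, fit_threshold, {"t": [0.2, 0.5, 0.8]})
```

For the actual SVM, the grid would be the (C, γ) pairs over [2⁻⁵, 2⁵] described in the text, with `fit` wrapping the LIBSVM training call.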
TABLE I
CLASSIFICATION ACCURACIES FOR THE RF CLASSIFIER AND ITS TRAINING AND TESTING TIME COST

Trees   Split variables   Train time (s)   Test time (s)   Test set acc. (%)
10      2                 1.4e-2           8.1e-3          68.78
10      3                 1.5e-2           8.3e-3          68.33
20      2                 1.7e-2           8.1e-3          69.23
20      3                 2.0e-2           8.2e-3          69.68
50      2                 2.5e-2           8.2e-3          67.87
50      3                 2.7e-2           8.3e-3          68.78
100     2                 3.9e-2           8.1e-3          68.33
100     3                 4.2e-2           8.3e-3          69.23
We compared SVM with RF on combined features integrating the texture features and the modified Hu moment shape features (28 + 7 = 35 dimensions). Here we also used 5-fold cross-validation to select the parameter values of both SVM and RF. Table II shows the performance of both classifiers. The shape features quantify information about the boundary curve, while the texture features convey information from a region around the boundary; thus, their information is complementary to some extent. With SVM on the 236 images, we achieved an average accuracy of 81%, and the average accuracy with RF was 79%; their Az values are 0.86 and 0.83, respectively. Although the performance of RF is slightly worse than that of SVM, an advantage of RF is its speed: on this data set, with cross-validation to select parameter values and leave-one-out performance measurement, RF takes only about half the time of the SVM method. Thus, if quick response is important, RF is a good candidate.
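The Az values quoted above are areas under the ROC curve (AUC). For reference, AUC can be computed directly from classifier scores with the rank-sum (Mann-Whitney) formulation; this is a generic sketch, not tied to any particular toolbox:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum statistic: equals the
    probability that a random positive outscores a random negative."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):                 # average ranks over ties
        ranks[scores == s] = ranks[scores == s].mean()
    n_pos = int((labels == 1).sum())
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

Applied to the classifiers' decision scores over all 236 cases, this yields the same quantity as the Az reported from the ROC curves.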
TABLE II
CLASSIFICATION RESULTS WITH TEXTURE FEATURES FROM LEVEL SET BASED SEGMENTATION
Fig. 2. The ROC curves of the SVM and RF classifiers with the integrated features (SVM: Az = 0.8621; RF: Az = 0.8305).
VI. CONCLUSION

In this paper, we investigated mass classification in digital mammograms based on level set active contour segmentation. After segmentation, morphological and texture features are extracted from the boundary and from a ribbon region around it. For classification, both SVM and RF (random forest) were investigated; this is the first time RF has been used to classify masses. The experiments were conducted on a database of 236 clinical mammogram ROIs. The results show that the performance of RF is comparable to that of SVM, while RF has an advantage in efficiency.
REFERENCES

[1] J. Tang, R. Rangayyan, J. Xu, I. El Naqa, and Y. Yang, "Computer-aided detection and diagnosis of breast cancer with mammography: Recent advances," IEEE Transactions on Information Technology in Biomedicine, vol. 13, pp. 236-251, 2009.
[2] A. Jemal, L. Clegg, E. Ward, L. Ries, X. Wu, P. Jamison, P. Wingo, H. Howe, R. Anderson, and B. Edwards, "Annual report to the nation on the status of cancer, 1975-2001, with a special feature regarding survival," Cancer, vol. 101, pp. 3-27, 2004.
[3] H. D. Nelson, K. Tyne, A. Naik, C. Bougatsos, B. K. Chan, and L. Humphrey, "Screening for breast cancer: an update for the US Preventive Services Task Force," Annals of Internal Medicine, vol. 151, p. 727, 2009.
[4] H. Chan, B. Sahiner, M. Helvie, N. Petrick, M. Roubidoux, T. Wilson, D. Adler, C. Paramagul, J. Newman, and S. Sanjay-Gopal, "Improvement of radiologists' characterization of mammographic masses by using computer-aided diagnosis: An ROC study," Radiology, vol. 212, pp. 817-827, 1999.
[5] R. Rangayyan, N. Mudigonda, and J. Desautels, "Boundary modelling and shape analysis methods for classification of mammographic masses," Medical and Biological Engineering and Computing, vol. 38, pp. 487-496, 2000.
[6] B. Sahiner, H. Chan, N. Petrick, M. Helvie, and M. Goodsitt, "Computerized characterization of masses on mammograms: the rubber band straightening transform and texture analysis," Medical Physics, vol. 25, p. 516, 1998.
[7] M. Meselhy Eltoukhy, I. Faye, and B. Belhaouari Samir, "A comparison of wavelet and curvelet for breast cancer diagnosis in digital mammogram," Computers in Biology and Medicine, vol. 40, pp. 384-391, 2010.
[8] T. G. Dietterich, "An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization," Machine Learning, vol. 40, pp. 139-157, 2000.
[9] L. Breiman, "Random forests," Machine Learning, vol. 45, pp. 5-32, 2001.
[10] J. Ham, Y. Chen, M. M. Crawford, and J. Ghosh, "Investigation of the random forest framework for classification of hyperspectral data," IEEE Transactions on Geoscience and Remote Sensing, vol. 43, pp. 492-501, 2005.
[11] J. Ramírez, J. Górriz, R. Chaves, M. López, D. Salas-Gonzalez, I. Álvarez, and F. Segovia, "SPECT image classification using random forests," Electronics Letters, vol. 45, pp. 604-605, 2009.
[12] R. Haralick, K. Shanmugam, and I. Dinstein, "Textural features for image classification," IEEE Transactions on Systems, Man, and Cybernetics, vol. 3, pp. 610-621, 1973.
[13] M. K. Hu, "Visual pattern recognition by moment invariants," IRE Transactions on Information Theory, vol. 8, pp. 179-187, 1962.
[14] C. C. Chen, "Improved moment invariants for shape discrimination," Pattern Recognition, vol. 26, pp. 683-686, 1993.
[15] T. Chan and L. Vese, "Active contours without edges," IEEE Transactions on Image Processing, vol. 10, pp. 266-277, 2001.
[16] C. Li, C.-Y. Kao, J. C. Gore, and Z. Ding, "Minimization of region-scalable fitting energy for image segmentation," IEEE Transactions on Image Processing, vol. 17, pp. 1940-1949, 2008.
[17] X. Liu, J. Liu, Z. Dongfeng, and J. Tang, "A benign and malignant mass classification algorithm based on an improved level set segmentation and texture feature analysis," in Proc. 4th International Conference on Bioinformatics and Biomedical Engineering (iCBBE), 2010, pp. 1-4.
[18] J. Tang and X. Liu, "Classification of breast mass in mammography with an improved level set segmentation by combining morphological features and texture features," in Multi Modality State-of-the-Art Medical Image Segmentation and Registration Methodologies, pp. 119-135, 2011.
[19] L. Breiman, Classification and Regression Trees. Chapman & Hall/CRC, 1984.
[20] C.-C. Chang and C.-J. Lin, "LIBSVM: a library for support vector machines," 2001.
TABLE II

Classifier   Benign           Malignant         Total             Az
SVM          0.74 (84/113)    0.88 (108/123)    0.81 (192/236)    0.86
RF           0.77 (87/113)    0.80 (99/123)     0.79 (186/236)    0.83