In Knowledge Discovery and Data Mining: Challenges and Realities. Edited by X. Zhu and I. Davidson. pp. 98-112.
A Successive Decision Tree Approach to Mining Remotely Sensed Image Data

if a>10 and b<20 then Class 3), which is suitable for human interpretation and evaluation. In past years, DT has attracted considerable research interest in the analysis of remotely sensed image data, such as automated knowledge-base building from remote sensing and GIS data (Huang & Jensen, 1997), land cover classification (Friedl & Brodley, 1997), soil salinity analysis (Eklund, Kirkby, & Salim, 1998), change detection in an urban environment (Chan, Chan, & Yeh, 2001), building rule-based classification systems for remotely sensed images (Lawrence & Wright, 2001), and knowledge discovery from soil maps (Qi & Zhu, 2003). In particular, DT has been employed for global land cover classification at 8 km spatial resolution (De Fries, Hansen, Townshend, & Sohlberg, 1998) using NOAA AVHRR data. Interestingly, DT has also been adopted as the primary classification algorithm to generate global land cover maps from NASA MODIS data (Friedl et al., 2002), where spatial and radiometric attributes have been significantly improved.

In ideal situations, each leaf node contains a large number of samples, the majority of which belong to one particular class, called the dominating class of that leaf node. All samples to be classified that fall into a leaf node are labeled with the dominating class of that leaf node. Thus the classification accuracy of a leaf node can be measured as the number of actual samples of the dominating class over all the samples in that leaf node. However, when there is no dominating class in a leaf node, class labels are assigned based on a simple majority vote and, hence, the decision tree nodes have low classification accuracy.
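For instance, given the per-class counts of the training samples that fall into a leaf, the dominating class and the leaf's classification accuracy can be computed as in the short Python sketch below. This is a toy illustration; the dictionary-based representation and the function name are ours, not part of the original algorithm.

def leaf_label_and_accuracy(class_counts):
    # class_counts maps a class label to the number of training samples in the leaf
    total = sum(class_counts.values())
    label = max(class_counts, key=class_counts.get)   # dominating class
    accuracy = class_counts[label] / total            # dominating-class samples / all samples
    return label, accuracy

# A leaf with 8 samples of class 1, 2 of class 2, and 2 of class 3 -> ("1", 0.667)
print(leaf_label_and_accuracy({"1": 8, "2": 2, "3": 2}))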

While DT has been widely applied, the decision trees resulting from training datasets can be complex due to the complex relationships between features and classes. They are often mixtures of branches with high and low classification accuracy arranged in an arbitrary manner, and they are difficult for humans to interpret. In this study, we propose to apply DT multiple times to a training dataset to construct more interpretable decision trees while attempting to improve classification accuracy. The basic idea is to keep the classification branches of a resulting decision tree that have high classification accuracy while combining the samples that are classified under branches with low classification accuracy into a new training dataset for further classification. The process is carried out in a successive manner, and we term our approach the successive decision tree (SDT) approach. For notational purposes, we also term the classic DT approach CDT.

The heuristics behind the expectation that SDT can increase classification accuracy are based on the following observation. A multi-class training dataset may contain samples whose patterns are well perceived by humans but that are small in number and are often assigned to various branches during the classification process according to the information entropy gain or gain ratio criteria. At particular classification levels, the numbers of such samples may fall below the predefined thresholds that a decision tree branch must meet to qualify as a leaf node with high classification accuracy; the splitting process then stops and the samples are treated as noise. On the other hand, if we combine these samples into a new dataset, the distribution of the new dataset may be significantly different from that of the original one, and meaningful classification rules may be derived from the new dataset in a new decision tree. By giving some samples a second chance to be correctly classified, the overall accuracy may be improved. The heuristics will be further illustrated through an example in "The Method."

The proposed SDT method is different from existing meta-learning approaches that have been applied to DT, such as the boosting (Freund & Schapire, 1997) DT approach (Friedl, Brodley, & Strahler, 1999; Pal & Mather, 2003). Boosting DT gives higher weights to the samples that have been misclassified in a previous process but uses all the samples in all the classification processes.


Boosting DT does not aim at generating interpretable decision rules. In contrast, the proposed SDT approach generates compact decision rules from decision tree branches with high classification accuracy; only samples that cannot be generalized by decision rules with high classification accuracy are combined for further classification. By "generalize" we mean fitting samples into the leaf nodes of decision tree branches (i.e., decision rules).

This chapter is arranged as follows. In the section "The Method," we present the proposed SDT approach, beginning with a motivating example and followed by the algorithm description. In "Experiments," we test SDT using two real remotely sensed datasets and demonstrate SDT's capability to generate compact classification rules and improve classification accuracy. "Discussion" is dedicated to the discussion of several parameters involved in SDT. Finally, the last section is "Summary and Conclusion."

THE METHOD

In this section, we first introduce the principles of decision trees using the example shown in Figure 1. We then use the example to demonstrate the effectiveness of the proposed SDT approach, and finally we present the algorithm.

The Decision Tree Principle

The decision tree method recursively partitions the data space into disjoint sections using impurity measurements (such as information gain and gain ratio). For the sake of simplicity, binary partition of the feature space is adopted in implementations such as J48 in WEKA (Witten & Frank, 2000). Let $f(c_i)$ be the count of class $i$ before the partition and $f(c_i^1)$ and $f(c_i^2)$ be the counts of class $i$ in each of the two partitioned sections, respectively. Further let $C$ be the total number of classes, $n = \sum_{i=1}^{C} f(c_i)$, $n_1 = \sum_{i=1}^{C} f(c_i^1)$, and $n_2 = \sum_{i=1}^{C} f(c_i^2)$. The information entropy before the partition is then defined as

$e = -\sum_{i=1}^{C} \frac{f(c_i)}{n} \log \frac{f(c_i)}{n}$

Correspondingly, the entropies of the two partitions are defined as

$e_1 = -\sum_{i=1}^{C} \frac{f(c_i^1)}{n_1} \log \frac{f(c_i^1)}{n_1}$ and $e_2 = -\sum_{i=1}^{C} \frac{f(c_i^2)}{n_2} \log \frac{f(c_i^2)}{n_2}$

respectively. The overall entropy after the partition is defined as the weighted average of $e_1$ and $e_2$, that is,

$entropy\_partition = \frac{n_1}{n} e_1 + \frac{n_2}{n} e_2$

The information gain can then be defined as

$entropy\_gain = e - entropy\_partition$

The gain ratio is defined as

$gain\_ratio = \frac{entropy\_gain}{entropy\_partition}$
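As a concrete check of these definitions, the short Python sketch below computes the entropy before the split, the partition entropy, the entropy gain, and the gain ratio for a binary split, given the per-class counts on each side. It is a minimal, self-contained illustration with names of our own choosing, not the J48/WEKA implementation referred to later in the chapter.

import math

def entropy(counts):
    # Entropy of a list of per-class counts: -sum p_i * log(p_i)
    # (natural log here; any fixed base works for comparing splits)
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def split_scores(left_counts, right_counts):
    # Return (entropy_gain, gain_ratio) of a binary partition,
    # following the definitions given above.
    n1, n2 = sum(left_counts), sum(right_counts)
    n = n1 + n2
    before = [l + r for l, r in zip(left_counts, right_counts)]
    e = entropy(before)                                   # entropy before the partition
    e_partition = (n1 / n) * entropy(left_counts) + (n2 / n) * entropy(right_counts)
    gain = e - e_partition                                # entropy_gain
    ratio = gain / e_partition if e_partition > 0 else float("inf")
    return gain, ratio

# Example: three classes; a candidate split separates them imperfectly.
print(split_scores([8, 2, 2], [2, 8, 2]))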

For the example shown in Figure 1a, there are 24 samples in the two-dimensional data space (x, y) and 3 classes represented as black circles, squares, and triangles, with 10, 10, and 4 samples, respectively. We use x and y as the two features for classification. For notational convenience, the pure regions in the data space are numbered 1 through 8, as indicated by the white circles in Figure 1a.

Suppose the minimum number of samples in each decision tree branch is four. The largest information gain is obtained when partitioning at x≤3 (the root partition) according to the partition principle. For the left side of the root partition, the largest information gain is obtained when partitioning at y≤3, where the top part is a pure region that does not require any further partitioning. Unfortunately, the three classes have equal portions in the lower part of the partition (x≤3, y≤3), and the further partitions at (x≤1, y≤3) and (x≤2, y≤3) result in the same information gain. A similar situation occurs in the right part of the root partition (x>3).

If we prune the resulting decision tree and only keep the partition at the root, then any sample in the section x≤3 will be classified as class 1 (represented as black circles in Figure 1a), with a classification accuracy of 8/12=67%, since the dominating class 1 has 8 samples while there are 12 samples in the section x≤3. However, if we prune the resulting decision tree at level 2, then any sample in the section x≤3 and y≤3 will be assigned to one of the three classes, arbitrarily chosen. The classification accuracy in the resulting section will be only 2/6=33%. The same low classification accuracy also applies to the samples in the section 4≤x<6 and y≤3 if pruned at level 2. In the meantime, the decision rule (2<x≤4, 2<y≤4) → class 3 (represented as black triangles), which has 100% classification accuracy, is missed by CDT.

The Motivating Example

In Figure 1, if we determine that the two level-2 sections (1≤x<4, 1≤y<3) (i.e., the combination of regions 3, 4, and 5) and (4≤x<6, 1≤y<3) (i.e., the combination of regions 6, 7, and 8) do not meet our classification accuracy expectation, they can be removed from the resulting decision tree (T1), and the samples that fall into these sections can be combined into a new dataset for further classification. The new resulting decision tree (T2) can successfully find the decision rule (2<x≤4, 2<y≤4) → class 3 (represented as black triangles) that is missed by the CDT approach. The extra cost of the SDT approach is to have the nodes in T1 that represent the sections with low classification accuracy point to the root of T2, which is rather simple in programming languages (such as Java or C/C++). In this chapter, we use Ti' to denote the modified decision tree of Ti. All Ti' form a tree chain, which we term an SDT chain. Note that the last decision tree in an SDT chain (Tk') is the same as its corresponding original decision tree (Tk), since no modification is performed. In Figure 1(c), T2' is the same as T2 since it is the last decision tree in the SDT tree chain and no branches are removed. We next compare the CDT and SDT approaches on this example in terms of classification accuracy and tree interpretability.

Figure 1. Example to illustrate the CDT and SDT approaches: (a) sample data, (b) resulting CDT tree, (c) resulting SDT tree

To comply with common practice in classifying remotely sensed image data, we measure the classification accuracy as the percentage of testing samples that are correctly classified using the resulting classic decision tree (CDT) or tree chain (SDT). For testing the example in Figure 1, we use the training dataset also as the testing dataset, since all the samples have been used as training data (we will use separate training and testing data in the experiments using real datasets). We measure the classification accuracy of a decision tree node (leaf or non-leaf) as the ratio of the number of correctly classified samples to the number of samples under that node. In the example, if we set the minimum number of objects to 2, both SDT and CDT achieve 100% accuracy. However, if we set the minimum number of objects to 4, CDT achieves 16/24=66.7% accuracy and SDT achieves 20/24=83.3% accuracy. The corresponding resulting DTs and the accuracies of their leaf nodes are shown in Figure 2. From the results we can see that SDT achieves much higher accuracy than CDT (83.3% vs. 66.7%). Meanwhile, more tree nodes with dominating classes, that is, more meaningful decision rules, are discovered.

To the best of our knowledge, there are no established criteria for evaluating the interpretability of decision trees. We use the number of leaves and the number of nodes (tree size) as measurements of the compactness of a decision tree, and we assume that a smaller decision tree can be better interpreted. For the full CDT tree and SDT tree shown in Figure 1, CDT has 8 leaves and 15 nodes. Omitting the nodes that only hold a pointer (or a virtual node), the first level of the SDT tree has 2 leaves and 5 nodes and the second level has 5 leaves and 9 nodes. Each of the two trees is considerably smaller than the CDT tree. While we recognize that there are multiple trees in an SDT tree chain, we argue, based on our experience, that multiple smaller trees are easier to interpret than one big tree.
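The two compactness measures are straightforward to compute; the sketch below counts leaves and total nodes for a binary tree represented by a minimal node class. The Node class is a hypothetical toy representation of ours, not the data structure used in WEKA.

class Node:
    # A binary decision tree node; leaves have no children.
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def num_leaves(t):
    if t.left is None and t.right is None:
        return 1
    return sum(num_leaves(c) for c in (t.left, t.right) if c is not None)

def tree_size(t):
    # Total number of nodes: internal nodes plus leaves.
    return 1 + sum(tree_size(c) for c in (t.left, t.right) if c is not None)

# A root with two leaf children: 2 leaves, tree size 3.
t = Node(Node(), Node())
print(num_leaves(t), tree_size(t))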

In addition, contrary to CDT trees, where branches are arranged in the order of construction without considering their significance, the resulting SDT trees naturally place decision branches with high classification accuracy at the top, where they catch the user's attention immediately. Figure 3 shows the resulting CDT and SDT trees in text format corresponding to those of Figure 1(b) and 1(c). The dots ("...") in Figure 3(b) denote the pointers to the next levels of the SDT tree. We can see that the two most significant decision rules, (x≤3, y>3) → 1 and (x>3, y>3) → 2, are buried in the CDT tree, while they are correctly identified in the first decision tree of the SDT tree chain and presented to users at the very beginning of the interpretation process.

Figure 2. Accuracy evaluations of the example's resulting (a) CDT tree, (b) SDT tree

While it cannot show the advantages in classification accuracy and tree interpretability at the same time, the motivating example demonstrates the ideas of our proposed approach. By removing decision tree branches with low classification accuracy, combining the training samples under those branches into a new dataset, and then constructing a new decision tree from the derived dataset, we can build a decision tree chain efficiently by successively applying the decision tree algorithm to the original and derived datasets. The resulting decision tree chain potentially has the advantages of being simple in presentation, having higher classification accuracy, and sorting decision rules according to their significance for easy user interpretation. We next present the SDT approach as a set of algorithms. The algorithms are implemented in the WEKA open source data mining toolkit (Witten & Frank, 2000).

The Algorithm

The SDT algorithm adopts the same divide-and-conquer strategy and can use the same information entropy measurements for partitioning as the CDT algorithms. Thus the structure of the SDT algorithm is similar to that of CDT. The overall control flow of SDT is shown in Figure 4. The algorithm repeatedly calls Build_Tree to construct decision trees while combining the samples that cannot be generalized into new datasets (D) for further classification. SDT terminates under three conditions: (1) the predefined maximum number of classifications (i.e., the length of the SDT tree chain) is reached, (2) the number of samples available to construct a decision tree falls below a predefined threshold, or (3) the newly combined dataset is the same as the one in the previous classification, which means no samples could be used to generate meaningful decision rules during this round. In all three cases, if there are still samples that need to be classified, they are sent to CDT for a final classification.

The function Build_Tree (Figure 5) recursively partitions a dataset into two and builds a decision tree by finding a partition attribute and partition value that give the largest information gain. Several parameters are used in Build_Tree. min_obj1 specifies the number of samples used to determine whether a branch of a decision tree should stop or continue partitioning. min_obj2 specifies the minimum number of samples for a branch to qualify as having high classification accuracy. min_accuracy specifies the required percentage of samples of the dominating class.
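The way these three thresholds interact can be summarized by the small decision sketch below, which classifies a node, given its sample count and dominating-class count, as a high-accuracy leaf, a low-accuracy leaf deferred to the next tree, or a node to be split further. This is our own paraphrase of the logic described here and in Figure 5, with hypothetical names; it is not the actual implementation.

def node_decision(n_samples, n_dominating, min_obj1, min_obj2, min_accuracy):
    # Decide what to do with a decision tree node (cf. Figure 5).
    # Returns 'high_accuracy_leaf', 'defer_to_next_tree', or 'split_further'.
    dominating = n_dominating > n_samples * min_accuracy
    if n_samples < min_obj1:                   # small node: stop partitioning here
        if dominating and n_samples > min_obj2:
            return "high_accuracy_leaf"        # keep as a decision rule
        return "defer_to_next_tree"            # merge samples into the new dataset D'
    if dominating:
        return "high_accuracy_leaf"
    return "split_further"                     # continue the binary partition

# A node of 50 samples, 48 of them in one class, under min_obj1=100,
# min_obj2=20, min_accuracy=0.95 -> 'high_accuracy_leaf'
print(node_decision(50, 48, 100, 20, 0.95))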

Figure 3. Resulting trees in text format for the example: (a) CDT tree, (b) SDT tree chain

(a) CDT tree:

x <= 3
|  y <= 3
|  |  x <= 1: 2 (2.0)
|  |  x > 1
|  |  |  x <= 2: 1 (2.0)
|  |  |  x > 2: 3 (2.0)
|  y > 3: 1 (6.0)
x > 3
|  y <= 3
|  |  x <= 4: 3 (2.0)
|  |  x > 4
|  |  |  x <= 5: 2 (2.0)
|  |  |  x > 5: 1 (2.0)
|  y > 3: 2 (6.0)

(b) SDT tree chain, first tree:

x <= 3
|  y <= 3 ...
|  y > 3: 1 (6.0)
x > 3
|  y <= 3 ...
|  y > 3: 2 (6.0)

(b) SDT tree chain, second tree:

x <= 1: 2 (2.0)
x > 1
|  x <= 4
|  |  x <= 2: 1 (2.0)
|  |  x > 2: 3 (4.0)
|  x > 4
|  |  x <= 5: 2 (2.0)
|  |  x > 5: 1 (2.0)


Figure 4. Overall control flow of SDT algorithm

Algorithm SDT (P, max_cls, min_obj, min_obj1, min_obj2, min_accuracy)
Inputs:
• A training sample table (P) with N samples; each sample has M attributes (the number of bands of the image to classify) and a class label
• Two global thresholds: the maximum number of classifiers (max_cls) and the minimum number of samples needed to add a new DT to the SDT chain (min_obj)
• Three thresholds local to each DT in the SDT chain: the number of samples used to determine whether a branch of a DT should stop or continue partitioning (min_obj1), the minimum number of samples in a branch (min_obj2), and the percentage of the samples of a class in a branch that can be considered as dominating (min_accuracy)
Output:
• A chain of successive decision trees beginning with tree T

1. Set loop variable i=1
2. Set data set D=P, tree T=NULL, tree root=NULL
3. Do while (i < max_cls)
   a. Set data set D'={}
   b. Call T'=Build_Tree(i, D, D', min_obj1, min_obj2, min_accuracy)
   c. If (T is not NULL)
      i. Call Chain_Tree(T, T')
      ii. T=T'
   d. Else root=T', T=T'
   e. If (D'==D || |D'| < min_obj) then break
   f. D=D'
   g. i=i+1
4. If |D| > 0
   a. Call T'=Classic_DT(D)
   b. Call Chain_Tree(T, T')
5. Return root

Figure 5. Algorithm Build_Tree

Algorithm Build_Tree (seq, D, D', min_obj1, min_obj2, min_accuracy)
Inputs:
• seq: sequence number of the DT in the SDT chain
• D': new data set combining ill-classified samples
• D, min_obj1, min_obj2, min_accuracy: same as in function SDT in Figure 4
Output:
• The seq-th decision tree in the SDT chain

1. Let num_corr be the number of samples of the dominating class
2. If (|D| < min_obj1[seq])
   a. If (num_corr > |D|*min_accuracy[seq] and |D| > min_obj2[seq])
      i. Mark this branch as a high accuracy branch (no need for further partition) and assign the label of the dominating class to the branch
      ii. Return NULL
   b. Else
      i. Mark this branch as a low accuracy branch
      ii. Merge D into D'
      iii. Return NULL
3. Else
   a. If (num_corr > |D|*min_accuracy[seq])
      i. Mark this branch as a high accuracy branch (no need for further partition) and assign the label of the dominating class to the branch
      ii. Return NULL
// begin binary partition
4. For each of the attributes of D, find the partition value using entropy_gain or gain_ratio
5. Find the partitioning attribute and its partition value that give the largest entropy_gain or gain_ratio
6. Divide D into two partitions, D1 and D2, according to the partition value of the attribute
7. Allocate the tree structure to T
8. T.left_child = Build_Tree(seq+1, D1, D', min_obj1, min_obj2, min_accuracy)
9. T.right_child = Build_Tree(seq+1, D2, D', min_obj1, min_obj2, min_accuracy)
10. Return T
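To make the control flow of Figure 4 concrete, the following Python sketch outlines the SDT driver loop under the termination conditions described above. The helper functions build_tree, chain_tree, and classic_dt are hypothetical stand-ins for the behavior of Figures 5 and 6 and of a classic decision tree learner; this is an illustration of the algorithm structure, not the WEKA-based implementation used in the experiments.

def sdt(P, max_cls, min_obj, min_obj1, min_obj2, min_accuracy,
        build_tree, chain_tree, classic_dt):
    # Build an SDT chain from the list of training samples P (cf. Figure 4).
    D, T, root = P, None, None
    for i in range(1, max_cls):
        D_new = []                      # samples deferred by low-accuracy branches
        T_new = build_tree(i, D, D_new, min_obj1, min_obj2, min_accuracy)
        if T is None:
            root = T_new                # first tree becomes the head of the chain
        else:
            chain_tree(T, T_new)        # low-accuracy leaves of T now point to T_new
        T = T_new
        # every sample of D either lands in a kept rule or in D_new, so an
        # unchanged size means no rule was generated in this round
        if len(D_new) == len(D) or len(D_new) < min_obj:
            D = D_new
            break
        D = D_new
    if len(D) > 0:                      # remaining samples go to a classic DT
        T_last = classic_dt(D)
        if T is None:
            root = T_last
        else:
            chain_tree(T, T_last)       # link the last SDT tree to the classic DT
    return root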


While the purposes of setting min_obj1 and min_accuracy are clear, the purpose of setting min_obj2 is to prevent the generation of small branches with high classification accuracy, in the hope that the samples falling within such branches can be combined with similar samples from other branches to generate more significant decision rules. For example, in Figure 1(a), regions 5 and 6, although each has only two samples of the same class, can be combined to generate a more significant branch in a new decision tree, as shown in Figure 1c.

Build_Tree stops under three conditions. The first condition is that the number of samples in the dataset is below min_obj1, there is a dominating class in the dataset (the ratio of the number of dominating-class samples to the number of samples is greater than min_accuracy), and the number of samples is above min_obj2; in this case the branch has high classification accuracy and no further partitioning is necessary. The second condition is that the number of samples in the dataset is below min_obj1 and either the branch does not have a dominating class or the number of samples is below min_obj2; in this case the samples are sent for further classification in the next decision tree of the SDT tree chain and this tree-building process stops. The third condition is that the number of samples in the dataset is above min_obj1 and there is a dominating class in the dataset; this branch also has high classification accuracy and no further partitioning is necessary. Guidelines for choosing the values of these parameters are discussed in the "Choosing Parameters" subsection of the Discussion.

Function Chain_Tree is relatively simple (Figure 6). Given a decision tree T, it recursively finds the branches that were removed due to low classification accuracy (i.e., the ill-classified branches) and makes those branches point to the new decision tree (cf. Figure 1).

Figure 6. Algorithm Chain_Tree

Algorithm Chain_Tree (T, T')
Input:
• Two successive decision trees T and T'

1. If T is a leaf node
   If T is marked as a low classification confidence node
      i. Set T.next=T'
      ii. Return
2. If (T.left_child is not NULL)
      Chain_Tree(T.left_child, T')
3. If (T.right_child is not NULL)
      Chain_Tree(T.right_child, T')

Figure 7. Algorithm Classify_Instance

Algorithm Classify_Instance (T, I)
Input:
• An SDT beginning with decision tree T
• An instance I with M attributes
Output:
• Class label of I

1. If T is a leaf node
   a. If T is marked as a high classification confidence node
      i. Assign the class label of T to I
      ii. Return
   b. Else if T is marked as a low classification confidence node
      Return Classify_Instance(T.next, I)
2. Else
   a. Let A be the partitioning attribute and V be the partition value
   b. If (I[A] <= V) then
      Return Classify_Instance(T.left_child, I)
   c. Else
      Return Classify_Instance(T.right_child, I)
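The chain-following classification of Figure 7 can also be written iteratively, as in the Python sketch below: an instance walks down one tree and, whenever it lands on a low-confidence leaf, jumps to the root of the next tree in the chain. The node fields (attr, value, left, right, next, label, high_conf) are hypothetical names for the information described in Figures 6 and 7, not fields of any actual implementation.

def classify_instance(node, instance):
    # Classify one instance with an SDT chain (cf. Figure 7).
    # 'instance' is a sequence of attribute values; internal nodes carry a
    # partitioning attribute index 'attr' and a partition value 'value';
    # leaves carry 'label', a flag 'high_conf', and, for low-confidence
    # leaves, a pointer 'next' to the root of the next tree in the chain.
    while True:
        if node.left is None and node.right is None:   # leaf node
            if node.high_conf:
                return node.label                      # a high-confidence rule fires
            node = node.next                           # follow the SDT chain link
        elif instance[node.attr] <= node.value:
            node = node.left
        else:
            node = node.right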


Given the first decision tree (T) of an SDT tree chain, the algorithm for classifying an instance I is given in Figure 7; it is a combination of recursive and chain-following procedures. Starting from the root of T, it uses the partitioning attribute and the partition value to decide whether to go to the left or right branch of T. This procedure is carried out recursively until a leaf node is reached. If the leaf node represents a branch with high classification accuracy, then the class label of the branch is assigned to the instance; otherwise the branch points to the next decision tree in the SDT tree chain and the classification is passed on to that tree by following the link.

EXPERIMENTS

We report experimental results from two real remote sensing image datasets: the land cover dataset and the urban change detection dataset. For each experiment, we report the data source, the thresholds used in the experiment, and comparisons of the accuracies and the interpretability of the decision trees resulting from the J48 implementation of CDT (Witten & Frank, 2000) and from SDT. Note that we use separate datasets for training and testing when measuring classification accuracy, following common practice in remotely sensed image classification.

The first dataset is relatively simple, with a small number of class labels. Since its classification accuracy using CDT is already high and the room to improve classification accuracy is limited, the primary purpose of the experiment is to demonstrate SDT's capability to generate compact decision trees for easier interpretation. The second dataset is relatively complex, with a large number of classes. Due to the complexity of the dataset and the resulting decision trees, it is impossible to present and visually examine the results and, thus, our focus in the second experiment is classification accuracy. Since the primary purpose is to compare the two methods, CDT and SDT, in terms of accuracy and interpretability, the presentation of the final classified images is omitted.

Experiment 1: Land Cover Dataset

The dataset was obtained from the Landsat ETM+ 7 satellite and was acquired on August 31, 2002. It covers the coastal area in the greater Hanoi-Red River Delta region of northern Vietnam. Six bands are used and there are six classes: mangrove, aquaculture, water, sand, ag1, and ag2. We evenly divide the 3262 samples into training and testing datasets. The training parameters are shown in Table 1 and the classification accuracies of SDT are shown in Table 2. Note that the last decision tree is a classic decision tree (cf. Figure 4) and its parameters are set to the J48 defaults. In Table 2, DT# denotes the sequence number of a decision tree in the SDT tree chain, and "Last" denotes the last decision tree in the chain. The overall accuracy is computed as the ratio of the number of samples correctly classified by all the decision trees in the SDT tree chain to the number of samples to be classified. The resulting decision trees of CDT and SDT are shown in Figure 8. The default values for the required parameters in J48 are used for constructing the CDT tree, except that minNumObj is changed from 2 to 10 so that the resulting tree fits on one page for illustration purposes.

The overall accuracy of SDT is 90.56%, which is about 2% higher than that of CDT (88.78%). The first decision tree in the SDT tree chain, which has 12 leaf nodes (rules), generalized 67.7% of the total samples with more than 96% purity. The numbers of leaves and the tree sizes of the five decision trees in the SDT tree chain are listed in Table 3. From the table we can see that the number of leaves and the tree size of each decision tree in the SDT tree chain are significantly smaller than those of the CDT decision tree. Visual examination indicates that the resulting smaller decision trees of the SDT tree chain are significantly easier to interpret than the big CDT decision tree (cf. Figure 8). This experiment shows that, while SDT may not be able to improve classification accuracy significantly when CDT already has high classification accuracy, it has the capability to generate more compact and interpretable decision trees.

Experiment 2: Urban Change Detection Dataset

The dataset consists of 6222 training samples and 1559 testing samples. Each sample has 12 attributes: 6 bands from a TM image acquired in winter (December 10, 1988) and 6 bands from another TM image acquired in spring (March 3, 1996), both covering a southern China region located between 21°N and 23°N and crossed by the Tropic of Cancer (Seto & Liu, 2003). The resulting decision trees are too complex to present in this chapter due to space limitations. The parameters and classification accuracies of SDT are shown in Table 4 and Table 5, respectively. The overall accuracy of SDT is 80.24%, which is more than 4% higher than that of CDT (76.01%), a significant improvement.

Similar to experiment 1, we also list the numbers of leaves and the tree sizes of the decision trees of SDT and CDT, in Table 6. The number of leaves and the tree sizes of the SDT trees are reduced even further: they are only about 1/10 of those of the CDT.

Table 1. SDT parameters for the land cover data set

DT #    min_accuracy    min_obj1    min_obj2
0       0.95            30          20
1       0.90            25          15
2       0.85            20          10
3       0.80            15          5

Table 2. SDT classification results for the land cover data set

DT #      # of samples to classify    # of correctly classified samples    Accuracy (%)
0         1089                        1047                                 96.14
1         196                         175                                  89.26
2         100                         89                                   89.00
3         97                          75                                   77.32
Last      149                         91                                   61.07
Overall   1631                        1477                                 90.56

Table 3. Comparisons of decision trees from SDT and CDT for the land cover data set

DT #    # of leaves    Tree size
0       12             40
1       8              28
2       6              23
3       7              20
Last    11             21
CDT     35             69
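As a quick check of how the overall accuracy in Table 2 is obtained, the small sketch below sums the per-tree counts and divides the correctly classified samples by all samples to classify; the numbers are copied directly from Table 2.

# (samples to classify, correctly classified) per decision tree, from Table 2
per_tree = [(1089, 1047), (196, 175), (100, 89), (97, 75), (149, 91)]
total = sum(n for n, _ in per_tree)        # 1631
correct = sum(c for _, c in per_tree)      # 1477
print(f"overall accuracy = {100 * correct / total:.2f}%")   # 90.56%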


Even the totals of the numbers of leaves and tree sizes of the five decision trees in the SDT tree chain are only about half of those of the CDT. While it is possible to prune a CDT to reduce its number of leaves and tree size, usually this can only be achieved at the cost of classification accuracy, which is not desirable in the application context. The SDT approach, on the other hand, reduces the number of leaves and tree size while increasing classification accuracy.

Figure 8. Decision trees from CDT and SDT for the land cover data set: the full CDT tree and the five trees of the SDT chain (SDT-0, SDT-1, SDT-2, SDT-3, SDT-Last) in J48 text format


DISCUSSION

Limitations of SDT

The proposed SDT approach does have a few limitations. First, although the two experiments show favorable increases in classification accuracy, there is no guarantee that SDT can always increase classification accuracy. This is especially true when the CDT already has high classification accuracy. In that case, the first decision tree in an SDT tree chain is able to generalize most of the samples, and the samples fed to the last decision tree of the SDT tree chain are likely to be mostly noise samples that cannot be generalized well by that last decision tree. Depending on the settings of the thresholds used in SDT, the SDT may then achieve lower classification accuracy. However, we argue that as classification accuracy increases, the importance of improving it further decreases while the interpretability of the resulting decision trees matters more. In this respect, SDT is still valuable for finding significant classification rules when the corresponding CDT would be too complex for direct human interpretation. The second major limitation is that there are five parameters that need to be fine-tuned, and they affect SDT's classification accuracy and the structures of the decision trees in the SDT tree chain; this is discussed in detail in the next subsection. Finally, the SDT approach inherits several disadvantages of CDT, such as being hungry for training samples due to its divide-and-conquer strategy.

Table 4. SDT parameters for the urban change detection data set

DT #    min_accuracy    min_obj1    min_obj2
0       0.95            50          20
1       0.85            40          15
2       0.75            30          10
3       0.65            20          5

Table 5. SDT classification results for the urban change detection data set

DT #      # of samples to classify    # of correctly classified samples    Accuracy (%)
0         968                         892                                  92.15
1         266                         160                                  60.15
2         171                         107                                  62.57
3         130                         86                                   66.15
Last      24                          6                                    25.00
Overall   1559                        1251                                 80.24

Table 6. Comparisons of decision trees from SDT and CDT for the urban change data set

DT #    # of leaves    Tree size
0       31             104
1       22             71
2       18             61
3       25             55
Last    10             19
CDT     284            567


For training datasets with a small number of samples but complex classification patterns, the classification performance of SDT may not be as good as that of connectionist approaches such as neural networks.

Choosing Parameters

There are five parameters used in the SDT approach: the maximum number of classifiers (max_cls), the minimum number of samples needed to add a new decision tree to the SDT tree chain (min_obj), the number of samples used to determine whether a branch of a decision tree should stop or continue partitioning (min_obj1), the minimum number of samples in a branch (min_obj2), and the percentage (min_accuracy) of the samples of a class in a branch that can be considered as dominating (cf. Figure 4 and the subsection "The Algorithm"). As pointed out, max_cls and min_obj are global, while the other three parameters are local to each of the decision trees in an SDT tree chain.

The most significant parameter is probably the min_accuracy value of each decision tree in the SDT tree chain. If the first few min_accuracy values are set to high percentages, many branches in the corresponding decision trees will not qualify as having high classification accuracy, and the samples that fall within these branches will need to be fed to the next decision trees, which in turn requires a larger max_cls to deplete all significant decision rules. On the other hand, using higher min_accuracy values generates decision branches that have higher classification accuracy but are fewer in number.

For min_obj1 and min_obj2, it is clear that min_obj1 needs to be greater than min_obj2. The larger min_obj1 is, the earlier the check on whether to further partition a decision tree branch occurs. Once the number of samples falls below min_obj1, the branch is either marked as having high classification accuracy or marked as needing to be processed in the next decision tree of the SDT tree chain, depending on min_accuracy and min_obj2. A larger min_obj1, together with a higher min_accuracy, will let SDT find decision branches that are larger, higher in classification accuracy, and fewer in number, and is more likely to send samples to the next decision trees of the SDT tree chain, which in turn requires a larger max_cls to deplete all significant decision rules. For example, consider a dataset that consists of 100 samples and can be partitioned into two sections of 50 samples each. Assume min_accuracy=0.95, that the dataset contains 94 samples of the dominating classes, and that the two sections contain 48 and 46 samples of their dominating classes, respectively. If min_obj1=100, then all the samples will be sent to the next decision tree in the SDT tree chain. On the other hand, if min_obj1=50, then only the samples of one of the branches need to be sent to the next decision tree in the SDT tree chain. With the reduced min_accuracy of the next decision tree in the SDT tree chain, these samples alone may be generalized into a significant decision rule. Consider another scenario where min_accuracy=0.90: the branch will be marked as having high classification accuracy and no samples will be sent to the next decision tree in the SDT tree chain.

The parameter min_obj2 relates more to determining the granularity of "noise" in a particular decision tree. A smaller min_obj2 means that fewer branches whose samples are almost all of the same class (above min_accuracy) but which are small in size will be considered unclassifiable in the current decision tree and sent to the next decision tree in the SDT tree chain. This also means that the number of decision trees in the SDT chain is smaller but the number of branches in each of the DTs is larger, and some of the bottom-level branches generalize only a small number of samples.

The two global parameters, min_obj and max_cls, are used to determine when to terminate the SDT algorithm. They play less significant roles than min_obj1, min_obj2, and min_accuracy. If min_obj is set to a small value, the first one or two decision trees will be able to generalize most of the samples into decision rules, and no significant decision rules can be generalized from the samples combined from their ill-classified branches (i.e., termination condition 3). In this case, the SDT algorithm terminates without involving min_obj or max_cls. Most likely, min_obj is involved in terminating the SDT algorithm only when most of the samples have been generalized by the previous decision trees and only very few samples need to be sent to the next decision tree in the SDT chain while max_cls has not been reached yet (i.e., termination condition 2). max_cls becomes a constraint only when users intend to generate fewer rules with high classification accuracy by using larger min_obj, min_obj2, and min_accuracy values (i.e., termination condition 1).

Finally, we provide the following guidelines for setting the parameter values, based on our experience; a small sketch illustrating the resulting schedule follows the list.

• max_cls: 5-10
• min_obj: min{50, 5% of the number of training samples}
• For two successive decision trees in the SDT chain, min_obj1[i] > min_obj1[i+1], min_obj2[i] > min_obj2[i+1], and min_accuracy[i] > min_accuracy[i+1]. We recommend using min_obj1[i+1] = 0.8*min_obj1[i], min_obj2[i+1] = 0.8*min_obj2[i], and min_accuracy[i+1] = min_accuracy[i] - 5% as initial values for further manual adjustment.
• For each of the decision trees in the SDT chain, min_obj1 > min_obj2. We recommend using min_obj1 = 2.5*min_obj2 as the initial value for further manual adjustment.
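A minimal sketch of this schedule is given below; it simply applies the recommended decay factors to a set of starting values. The starting values here are placeholders of our own choosing and would still be adjusted manually in practice.

def parameter_schedule(n_trees, min_obj1_0, min_obj2_0, min_accuracy_0):
    # Per-tree thresholds generated from the guidelines above.
    schedule, o1, o2, acc = [], min_obj1_0, min_obj2_0, min_accuracy_0
    for _ in range(n_trees):
        schedule.append({"min_obj1": round(o1), "min_obj2": round(o2),
                         "min_accuracy": round(acc, 2)})
        o1, o2, acc = 0.8 * o1, 0.8 * o2, acc - 0.05   # recommended decay per tree
    return schedule

# e.g., four trees starting from min_obj1=50, min_obj2=20, min_accuracy=0.95
for row in parameter_schedule(4, 50, 20, 0.95):
    print(row)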

SUMMARY AND CONCLUSION

In this study we proposed a successive decision tree (SDT) approach to generating decision rules from training samples for the classification of remotely sensed images. We presented the algorithm and discussed the selection of the parameters needed for SDT. Two experiments, using an ETM+ land cover dataset and a TM urban change detection dataset, show the effectiveness of the proposed SDT approach. The classification accuracy increases slightly in the land cover classification experiment, where the classification accuracy of CDT is already high. The classification accuracy in the urban change detection experiment increases by about 4%, which is a considerable improvement. In addition, in both experiments, each of the decision trees in the SDT chain is considerably more compact than the decision tree generated by CDT. This makes the classification rules easier for users to interpret and may make it possible to associate machine-learned rules with physical meanings.

REFERENCES

Chan, J. C. W., Chan, K. P., & Yeh, A. G. O. (2001). Detecting the nature of change in an urban environment: A comparison of machine learning algorithms. Photogrammetric Engineering and Remote Sensing, 67(2), 213-225.

De Fries, R. S., Hansen, M., Townshend, J. R. G., & Sohlberg, R. (1998). Global land cover classifications at 8 km spatial resolution: The use of training data derived from Landsat imagery in decision tree classifiers. International Journal of Remote Sensing, 19(16), 3141-3168.

Eklund, P. W., Kirkby, S. D., & Salim, A. (1998). Data mining and soil salinity analysis. International Journal of Geographical Information Science, 12(3), 247-268.

Freund, Y., & Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1), 119-139.

Friedl, M. A., & Brodley, C. E. (1997). Decision tree classification of land cover from remotely sensed data. Remote Sensing of Environment, 61(3), 399-409.

Friedl, M. A., Brodley, C. E., & Strahler, A. H. (1999). Maximizing land cover classification accuracies produced by decision trees at continental to global scales. IEEE Transactions on Geoscience and Remote Sensing, 37(2), 969-977.

Friedl, M. A., McIver, D. K., Hodges, J. C. F., Zhang, X. Y., Muchoney, D., Strahler, A. H., et al. (2002). Global land cover mapping from MODIS: Algorithms and early results. Remote Sensing of Environment, 83(1-2), 287-302.

Huang, X. Q., & Jensen, J. R. (1997). A machine-learning approach to automated knowledge-base building for remote sensing image analysis with GIS data. Photogrammetric Engineering and Remote Sensing, 63(10), 1185-1194.

Lawrence, R. L., & Wright, A. (2001). Rule-based classification systems using classification and regression tree (CART) analysis. Photogrammetric Engineering and Remote Sensing, 67(10), 1137-1142.

Pal, M., & Mather, P. M. (2003). An assessment of the effectiveness of decision tree methods for land cover classification. Remote Sensing of Environment, 86(4), 554-565.

Qi, F., & Zhu, A. X. (2003). Knowledge discovery from soil maps using inductive learning. International Journal of Geographical Information Science, 17(8), 771-795.

Quinlan, J. (1993). C4.5: Programs for machine learning. San Mateo, CA: Morgan Kaufmann.

Seto, K. C., & Liu, W. G. (2003). Comparing ARTMAP neural network with the maximum-likelihood classifier for detecting urban change. Photogrammetric Engineering and Remote Sensing, 69(9), 981-990.

Witten, I. H., & Frank, E. (2000). Data mining: Practical machine learning tools with Java implementations. San Francisco: Morgan Kaufmann.
