image fusion for land cover change detection
TRANSCRIPT
This article was downloaded by: [Nanyang Technological University] On: 04 November 2014, At: 22:23. Publisher: Taylor & Francis. Informa Ltd Registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.
International Journal of Image and Data Fusion. Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/tidf20
Image fusion for land cover change detection
Yu Zeng a, Jixian Zhang a, J.L. van Genderen b & Yun Zhang c
a Chinese Academy of Surveying and Mapping, 28 Lianhuachixi Road, Beijing 100830, P.R. China
b Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, PO Box 6, 7500 AA Enschede, The Netherlands
c Department of Geodesy and Geomatics Engineering, University of New Brunswick, Fredericton, New Brunswick, Canada E3B 5A3
Published online: 18 May 2010.
To cite this article: Yu Zeng, Jixian Zhang, J.L. van Genderen & Yun Zhang (2010) Image fusion for land cover change detection, International Journal of Image and Data Fusion, 1:2, 193-215, DOI: 10.1080/19479831003802832
To link to this article: http://dx.doi.org/10.1080/19479831003802832
International Journal of Image and Data Fusion, Vol. 1, No. 2, June 2010, 193–215
Image fusion for land cover change detection
Yu Zenga*, Jixian Zhanga, J.L. van Genderenb and Yun Zhangc
aChinese Academy of Surveying and Mapping, 28 Lianhuachixi Road, Beijing 100830, P.R. China; bFaculty of Geo-Information Science and Earth Observation (ITC), University of Twente, PO Box 6, 7500 AA Enschede, The Netherlands; cDepartment of Geodesy and Geomatics Engineering, University of New Brunswick, Fredericton, New Brunswick, Canada E3B 5A3
(Received 30 August 2009; final version received 15 November 2009)
Image fusion is an effective approach for enriching multi-source remotely sensed information. In order to compensate for the insufficiency of single-source remote sensing data in the change detection process, and to combine the complementary features from different sensors, this article presents the results of different temporal synthetic aperture radar (SAR) and optical image fusion algorithms for land cover change detection. First, pixel-level image fusion is performed, and its applicability for change detection is assessed by a quantitative analysis method. Second, change detection at the decision level is put forward, which comprises object-oriented image information extraction from the high-resolution optical image, multi-texture feature and support vector machines (SVM)-based information extraction from the single-band, single-polarisation SAR image, and hard- and soft-decision based change detection. Change detection uncertainty is also evaluated at the pixel scale using the extended probability vector and probability entropy model. The imagery used in this image fusion research was SPOT5 and RADARSAT-1 SAR data.
Keywords: hard-decision; soft-decision; texture analysis; grey level co-occurrence matrix; fractal
1. Introduction
Image fusion is an effective approach for enriching multi-source remotely sensed information (Hall 1992, Pohl and van Genderen 1998). When images with a similar acquisition time are used, the expected result is a fused image that retains the spatial resolution of the panchromatic image and the colour content of the multi-spectral image; when images with different dates are used, the main purpose is to detect the changes over a period of time. Where image fusion is used for the latter, most previous studies have used data from sensors with the same working mode; for example, different temporal optical images from the same or different sensors have been used. Optical images and data from synthetic aperture radar (SAR) sensors are complementary in terms of data acquisition capability and image characteristics. When they are used together, the deficiency of a single remote sensing data source during change information extraction can be compensated, and the complementary features from different sensors
*Corresponding author. Email: [email protected]
ISSN 1947–9832 print/ISSN 1947–9824 online
© 2010 Taylor & Francis
DOI: 10.1080/19479831003802832
http://www.informaworld.com
can be combined. To address this need, a study of land cover change detection by fusion of different temporal SAR and optical imagery has been carried out.
First, a series of pixel-based image fusion experiments for the purpose of change detection were conducted and a quantitative evaluation method is presented. Based on this analysis, a decision-based image fusion methodology for change detection is put forward. In order to realise high-accuracy change information extraction at the decision level, land cover classification was performed for each of the two types of imagery, taking their different imaging mechanisms and information characteristics into account. During this process, the uncertainty of the classification results and of the change detection result at the pixel scale is also analysed and evaluated.
2. Study area and data
SPOT5 imagery has a spatial resolution of 2.5 m; it can supply abundant data for large-scale image mapping and environmental monitoring. This makes it the major remote sensing data source for the large-scale land use survey and the land use information system updated by the Ministry of Land and Resources of China. Therefore, choosing a SPOT5 image as the optical data source for land use/land cover change detection has practical significance. At the time this research was carried out, the operational radar satellite systems were the European ERS-1/2 and ENVISAT-1, the Canadian RADARSAT-1 and the Japanese ALOS PALSAR. Because RADARSAT-1 can provide steady data and its fine beam image has a higher spatial resolution, it was selected as the SAR data source for this research.
The test site is located in Jinnan district, Tianjin, China. This area has experienced rapid economic development. A SPOT5 Pan/XS image acquired on 16 October 2004 and a RADARSAT-1 fine beam image acquired on 19 October 2005 were used for this study. Additionally, a digital elevation model (DEM), topographic maps, land use maps, annual land use change investigation data, fieldwork results, etc. were collected as auxiliary data.
3. Methods
3.1 Pixel-level image fusion for change detection and the evaluation
Pixel-based image fusion methods are the most widely used methods in the field of image fusion, especially for optical imagery. Data co-registered with sub-pixel accuracy are merged with each other. Compared with feature-based and decision-based image fusion methods, these methods make the best use of the original imagery and retain its detailed information well. Based on this fact, pixel-based image fusion analysis for the purpose of change detection was carried out using a SPOT5 image and a RADARSAT-1 image.
Speckle suppression was applied to the RADARSAT-1 image before image fusion. Fifteen image fusion algorithms were tested: image difference, multiplicative, intensity-hue-saturation (IHS) transformation, Brovey transformation, colour fidelity transformation, weighted fusion, smoothing filter-based intensity modulation (SFIM), block-based synthetic variable ratio (Block-SVR), high-pass filtering (HPF), wavelet theory based fusion, multi-band principal component (PC) transformation, principal component analysis (PCA) differentia, differentia PCA, pseudocolour composition and component substitution. The algorithms selected here cover almost all commonly used image fusion algorithms applicable for change detection, where image difference is performed based
on the panchromatic band of the SPOT5 image and the RADARSAT-1 image, PCA differentia and component substitution are performed based on the RADARSAT-1 image and the fusion result of SPOT5 pan and XS, pseudocolour composition is performed based on SPOT5 pan, SPOT5 XS and the RADARSAT-1 image, and the remaining algorithms are performed based on the SPOT5 XS and RADARSAT-1 images. In terms of band selection for the algorithms that allow three multi-spectral input bands to be fused (e.g. IHS, Brovey), according to the Optimum Index Factor (OIF) developed by Chavez et al. (1982), the band combination 4(R), 1(G), 3(B) of the SPOT5 image, with the maximum OIF value, was selected. This combination has the additional benefit of giving the effect of a true colour composition.
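The OIF-based band selection described above can be sketched numerically. The following is a minimal illustration assuming the standard formulation of the Optimum Index Factor (the sum of the three bands' standard deviations divided by the sum of their absolute pairwise correlation coefficients); the function names are ours, not the article's.

```python
import numpy as np
from itertools import combinations

def optimum_index_factor(bands):
    """OIF for one 3-band combination: sum of the band standard
    deviations divided by the sum of the absolute pairwise
    correlation coefficients."""
    sds = [b.std() for b in bands]
    corrs = [abs(np.corrcoef(a.ravel(), b.ravel())[0, 1])
             for a, b in combinations(bands, 2)]
    return sum(sds) / sum(corrs)

def best_combination(image):
    """Rank every 3-band combination of a (bands, rows, cols) stack
    and return the combination with the maximum OIF."""
    n = image.shape[0]
    scored = {combo: optimum_index_factor([image[i] for i in combo])
              for combo in combinations(range(n), 3)}
    return max(scored, key=scored.get)
```

High OIF favours bands that are individually informative (large spread) yet mutually decorrelated, which is why the selected combination also tends to look visually rich.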
Different from the evaluation methods using mean, deviation, entropy, mean gradient, correlation coefficient, and so on for assessing image fusion algorithms aimed at information enhancement, this article proposes an evaluation method for image fusion algorithms aimed at change detection. The method integrates the spectral features and spatial texture features that constitute the most important visual content of an image. Its main idea is to compare, using a similarity measure, the image similarity between the regions where changed parcels are located and the regions where there are no changes. If the difference is big enough, the changed area can be extracted using a template for further analysis; otherwise, the fusion algorithm works against change detection. The similarity measure is calculated from the distance of the integrated feature vector F between the two images to be compared, where F = {F_spectral, F_texture}. F can be further written as F_(33×1) = {x1, x2, x3, ..., x9, y1, y2, y3, ..., y24}^T, where x1–x9 are the mean, median and standard deviation (SD) of each band, selected to represent the spectral feature of the fused image, and y1–y24 are the mean and SD of the homogeneity, contrast, entropy and correlation of each band derived from the grey level co-occurrence matrix (GLCM), selected to represent the textural feature of the fused image. GLCM is an effective texture analysis method, and omnidirectional textural features are used in view of the input image characteristics. To make the similarity measures comparable, interior normalisation by Gaussian normalisation is first employed within each feature component, and exterior normalisation by extremum normalisation is then employed between the spectral feature and the textural feature. After normalisation, the similarity measure between images Q and I can be defined as:
D(Q, I) = w_spectral · d_spectral(Q, I) + w_texture · d_texture(Q, I)    (1)

where d_spectral and d_texture are the Euclidean spectral and textural distances between Q and I, and w_spectral and w_texture are the corresponding weights.
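As a concrete reading of Equation (1), the weighted distance can be sketched as below. The 9-spectral/24-textural split follows the F vector described above; equal weights and a simple 3-sigma mapping for the Gaussian interior normalisation are our simplifying assumptions, not the article's exact choices.

```python
import numpy as np

def gaussian_normalise(features):
    """Interior (Gaussian) normalisation: z-score each feature column
    over the parcel samples, then map to roughly [0, 1] with the
    3-sigma rule. `features` has shape (n_parcels, 33)."""
    X = np.asarray(features, float)
    z = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    return np.clip(z / 6 + 0.5, 0.0, 1.0)

def similarity_distance(fq, fi, n_spectral=9, w_spectral=0.5, w_texture=0.5):
    """Equation (1): D(Q, I) = w_s * d_spectral + w_t * d_texture, with
    Euclidean distances over the two halves of the feature vector."""
    fq, fi = np.asarray(fq, float), np.asarray(fi, float)
    d_spectral = np.linalg.norm(fq[:n_spectral] - fi[:n_spectral])
    d_texture = np.linalg.norm(fq[n_spectral:] - fi[n_spectral:])
    return w_spectral * d_spectral + w_texture * d_texture
```

In use, each parcel's 33-element feature vector would be normalised against the whole sample set before distances are taken, so that spectral and textural components contribute on a comparable scale.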
3.2 Land cover classification
In decision-level image fusion, accurate information obtained from each input image is the basis for the further joint decision. To address this need, studies on land cover information extraction from each type of imagery are carried out in this section.
3.2.1 Object-oriented image analysis for SPOT5 image
While high spatial resolution imagery provides more detailed information on ground objects, it increases the intra-class spectral variability. Thus the traditional pixel-based
classification approaches are no longer applicable, and object-oriented image analysis techniques have shown their effectiveness under these circumstances. In object-oriented image analysis, spectral information, shape, size, texture as well as contextual information can be utilised together to perform information extraction at the level of objects (eCognition 4.0 User Guide 2002, Benz et al. 2004). For this technique, Definiens image analysis software provides sound solutions. An object-based approach to image analysis is composed of four steps:

(1) multi-resolution segmentation to generate image objects and to create the object hierarchy;
(2) image object feature extraction and parameter assessment;
(3) classification, which uses iterative steps to classify image objects; and
(4) accuracy analysis and evaluation.
3.2.2 Multi-textural analysis for RADARSAT-1 image based on support vector machines
Texture is an inherent spatial characteristic of an image. Because SAR backscatter is sensitive to the type, orientation, homogeneity and spatial relationships of ground objects, it presents certain texture features in the image. Due to the influence of speckle noise and the limited information in single-band, single-polarisation SAR imagery, texture plays an important role in class discrimination (Guo 2000). There are four groups of texture analysis methods (Tuceryan and Jain 1993): statistical, geometrical, model-based and signal processing. Each has its own characteristics and capabilities, and there is no general agreement on an overall best method that outperforms all the others across tasks. Among these methods, the statistical methods based on the GLCM appear to be the most commonly used and the most predominant; because they use the spatially correlated characteristics of grey values for texture description, they are not sensitive to SAR speckle noise (Soh and Tsatsoulis 1999, Clausi 2000, Franklin 2001, Maillard 2003, Clausi and Yue 2004). The fractal model is a model-based method that makes use of the self-similarity of complex phenomena occurring in nature; it has a specific capability for describing spatial structure information and detailed texture features, and it takes the multi-scale effects of spatial patterns into consideration (Chaudhuri and Sarkar 1995). In view of this, texture features derived from the GLCM and the fractal model were studied and combined for SAR imagery information extraction. In order to keep the original texture information, speckle suppression is not advised on the SAR image before texture analysis (Clausi 2000, ERDAS Field Guide 2005).
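The GLCM computation itself is standard; a minimal NumPy sketch with 64-level quantisation, the four directions summed into one omnidirectional matrix, and the four measures used later in this article might look as follows. This is a simplified illustration, not the authors' implementation.

```python
import numpy as np

def glcm_omnidirectional(img, levels=64):
    """Symmetric, normalised grey-level co-occurrence matrix, summed
    over the four offsets (0, 45, 90, 135 degrees) so that the texture
    description is direction-invariant."""
    q = (img.astype(float) / (img.max() + 1e-12) * (levels - 1)).astype(int)
    rows, cols = q.shape
    P = np.zeros((levels, levels))
    for dr, dc in ((0, 1), (1, 1), (1, 0), (1, -1)):
        r0, r1 = max(0, -dr), min(rows, rows - dr)
        c0, c1 = max(0, -dc), min(cols, cols - dc)
        a = q[r0:r1, c0:c1]                      # reference pixels
        b = q[r0 + dr:r1 + dr, c0 + dc:c1 + dc]  # neighbours at this offset
        np.add.at(P, (a.ravel(), b.ravel()), 1)
        np.add.at(P, (b.ravel(), a.ravel()), 1)  # make the matrix symmetric
    return P / P.sum()

def glcm_measures(P):
    """Homogeneity, contrast, entropy and correlation of a GLCM."""
    i, j = np.indices(P.shape)
    nz = P > 0
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    var_i = ((i - mu_i) ** 2 * P).sum()
    var_j = ((j - mu_j) ** 2 * P).sum()
    return {
        "homogeneity": (P / (1.0 + (i - j) ** 2)).sum(),
        "contrast": (P * (i - j) ** 2).sum(),
        "entropy": float(-(P[nz] * np.log2(P[nz])).sum()),
        "correlation": ((i - mu_i) * (j - mu_j) * P).sum()
                       / (np.sqrt(var_i * var_j) + 1e-12),
    }
```

In a texture-image workflow these measures would be evaluated over a sliding window around each pixel, producing one feature band per measure and window size.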
Several variables, including the number of quantisation levels, the number and type of measurements, the window size to analyse, and the pixel pair sampling distance and orientations, need to be considered in order to use the GLCM-based method properly for image texture analysis. Different parameter selections and combinations lead to different texture features and different classification accuracies. In this research, a 64-level quantisation was adopted because of its computational efficiency and its sufficiency for texture mapping, and direction-invariant texture measures, the average of the texture measures for four directions (0°, 45°, 90° and 135°), were used. Different ground objects have different scales, and this determines their textures. In the GLCM-based method, the window size to be processed determines the ability to capture texture features at different spatial extents. In general, a smaller window size could be easily influenced by
SAR noise, while it can describe small texture features; a larger window size could describe the whole scenery better and is not easily influenced by SAR noise, but cannot describe small texture features. Therefore, a method to extract multi-scale texture features is proposed in this research, which is composed of two steps:

(1) selection of the number and type of measurements to be analysed, based on the seven commonly used statistics; and
(2) feature image selection for ground objects of different scales, where the semi-variogram model is introduced to assist the estimation of the processing window size.

In the first step, the texture measurements are classified into three groups according to the structure they reveal and their inter-feature correlations. The first group contains homogeneity, angular second moment and entropy, which are the homogeneity statistics. The second group contains SD, contrast and dissimilarity, which measure the degree of smoothness of the texture. Within each group, features are highly correlated. The third group contains only the correlation statistic, which is an independent measure not correlated with any of the other textural statistics. Then the 'stable' statistics, those not sensitive to the pixel pair sampling distance under a given statistical window size, are chosen. Entropy, SD and correlation are selected after this analysis. Different texture features have their own interpretation ability for different ground objects, and different ground objects have different scales. In the second step, by experiments and with reference to the scale of ground objects estimated using the semi-variogram, entropy processed with window size 13 was selected for recognition of residential areas, SD processed with window size 21 was selected for recognition of water bodies and bare land, and correlation processed with window size 11 was selected for recognition of vegetation.

Fractal dimension is the key parameter describing a fractal surface. Among the methods for computing fractal dimension, the differential box-counting (DBC) model (Chaudhuri and Sarkar 1995) was selected in this research because of its accuracy and its capability to cover the full dynamic range of fractal dimension. However, because fractals in the natural world are not strict fractals in mathematical terms but present an approximate statistical self-similarity, many images with obviously different textures have close fractal dimensions. Consequently, multi-fractal analysis and the second-order statistic lacunarity were further studied as supplements to fractal dimension. Multi-fractal dimension and lacunarity are derived from box-counting algorithms. In this research, using image samples of typical ground objects, and by plotting the multi-fractal q–D(q) curve and the lacunarity L–C(L) curve, the parameters for extracting the multi-fractal feature image and the lacunarity feature image were quantitatively determined. When q = 8 and −8, there is good separability among ground objects for the multi-fractal feature, and when L = 2, there is good separability among ground objects for lacunarity.

Using small training samples, support vector machines (SVM) can produce reliable classifications when the feature space is nonlinear and high-dimensional. In addition, unlike spectral features, texture features do not necessarily have normal distributions (Duda et al. 2001). Hence, the nonparametric classifier SVM was selected for SAR texture analysis, where the radial basis function (RBF) was chosen as the kernel function and the one-against-one technique was adopted. Multi-scale GLCM features and fractal model-based features were incorporated and analysed by SVM.
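On the fractal side, the differential box-counting estimate can be sketched as follows. This is a compact reading of the DBC idea (a per-grid-cell box count from the spread of the intensity surface, with the dimension taken from the log–log slope); the size range and the normalisation of the surface are our assumptions, not the article's settings.

```python
import numpy as np

def fractal_dimension_dbc(img, sizes=(2, 4, 8, 16)):
    """Differential box-counting estimate of fractal dimension (after
    Chaudhuri and Sarkar): for each grid size s, the count per s-by-s
    block is ceil(max/h) - ceil(min/h) + 1, where the box height h
    scales with s; D is the slope of log N versus log(1/s)."""
    img = np.asarray(img, float)
    M = min(img.shape)
    span = img.max() - img.min() + 1e-12
    surf = (img - img.min()) / span          # intensity surface in [0, 1]
    logs, log_counts = [], []
    for s in sizes:
        h = s / M                            # box height shrinks with the grid
        count = 0
        for r in range(0, M - s + 1, s):
            for c in range(0, M - s + 1, s):
                block = surf[r:r + s, c:c + s]
                count += int(np.ceil(block.max() / h)
                             - np.ceil(block.min() / h)) + 1
        logs.append(np.log(1.0 / s))
        log_counts.append(np.log(count))
    # least-squares slope of log N against log(1/s)
    return float(np.polyfit(logs, log_counts, 1)[0])
```

A smooth surface should come out near dimension 2, while rough, noise-like texture pushes the estimate towards 3, which is what makes the measure useful for separating texture classes.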
3.3 Decision-level image fusion for change detection and the uncertainty
3.3.1 Soft-decision change detection based on rules
Decision-level is the highest image fusion level, where independent information or declarations of identity acquired from each sensor are combined via a fusion process. For change detection, decision-level image fusion can avoid the normalisation process required when using different temporal images; it can not only determine the spatial extent of changes, but can also provide 'from–to' information on change types. The limitation of this method is that individual classification is needed and the change detection result is directly determined by the classification results. With traditional post-classification comparison, change is often overestimated because of error propagation (Fuller et al. 2003, Gallego 2004). In China, land use/cover change usually occurs at the urban fringe. With the development of the economy and population growth, the Chinese government puts most emphasis in land use change monitoring on urban expansion and farmland reduction. Based on this research focus, a soft-decision approach based on rules for change detection is proposed in this study. It takes the status of pixels (changed or unchanged), the change trajectory, as well as the shape, size and spatial location of changes into consideration, to decrease change overestimation by evaluating the rationality of the detected changes. For class Ci, here C1 = 'built-up area', C2 = 'water', C3 = 'vegetation' and C4 = 'bare land', Num denotes the number of detected changes and T(Ci, Cj) the change trajectory. Let N refer to the case that 'there is no change', W to 'wrong classification, apply masking' and Y to 'correctly detected changes'. The series of logic rules, employed in turn, is:
Rule 1: if Num = 0 then N.
Rule 2: if Num = 1 and T(Ci, Cj) (i ≠ 1; i ≠ j) then Y.
Rule 3: if Num = 1 and T(C1, Cj) (j ≠ 1) then W.
Rule 4: Rivers, lakes, canals and their affiliated works (including built-up area and vegetation) are regarded as N.
Rule 5: Land use types in between large-area farmlands are regarded as N.
Rule 6: Isolated 3 × 3 detected change regions are regarded as W.
Rule 1 means that if the pixel is classified as the same land cover type on the two dates, the pixel is regarded as correctly classified and there is no change. At urban fringe areas, most land use/cover changes are caused by urban growth; thus, change to built-up area from other land use/cover types can be regarded as irreversible. Rules 2 and 3 are established on this assumption. Rule 2 implies that if the change is not from built-up area to other types, it is regarded as correctly detected; Rule 3 indicates that if a change from built-up area to other types is detected, the change is unlikely to have happened and the pixels are masked for further analysis. The meaning of Rules 4–6 is self-explanatory; they are performed by spatial analysis. By separating the unchanged areas, the falsely detected changes and the possible changes, change overestimation can be reduced and, accordingly, the change detection accuracy is improved.
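Rules 1–3 reduce to a per-pixel decision on the two classified maps. A minimal sketch is given below; the class codes follow the text, while the function name and constants are ours, and Rules 4–6 (which require spatial analysis) are not shown.

```python
# Class codes as in the text: 1 = built-up area, 2 = water,
# 3 = vegetation, 4 = bare land.
NO_CHANGE, CORRECT, MASK = "N", "Y", "W"

def soft_decision(c_t1, c_t2):
    """Rules 1-3 for one pixel, given its class on the two dates."""
    if c_t1 == c_t2:
        return NO_CHANGE   # Rule 1: same class on both dates
    if c_t1 != 1:
        return CORRECT     # Rule 2: change not from built-up area
    return MASK            # Rule 3: built-up -> other is implausible
```

Applied over the whole scene, the "W" pixels form the mask that is re-examined, which is how the approach trims the overestimation of change.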
3.3.2 Uncertainty in the change detection result
Understanding the nature and spatial distribution of uncertainty when analysing change detection results can reduce the risk of making wrong decisions based on uncertain data (Shi and Ehlers 1996). Uncertainty in the change detection result is the propagation
of uncertainty in the classification results. When there are limited training samples, we develop the idea of the 'extended probability vector' (Xu and Krzyzak 1992, Bo and Wang 2003) and extend the uncertainty evaluation method originally based on the 'probability vector' generated during maximum likelihood (ML) classification (Foody et al. 1992, Goodchild et al. 1992, Shi and Ehlers 1996) to object-oriented fuzzy classification and SVM-based classification. This method is briefly described as follows.

For the object-oriented fuzzy classifier, a fuzzy membership vector, which resembles the 'probability vector', is first created from the membership of each pixel x to each class Ci:

[m(C1|x), m(C2|x), ..., m(Ci|x), ..., m(CM|x)]^T    (2)
Unlike the posterior probabilities generated during ML classification, which indicate the probability that a pixel belongs to a class, the fuzzy memberships generated during fuzzy classification express the possibility that a pixel belongs to a class. Therefore, the following transformation is applied to (2) to make it meet the requirements of the probability definition:

p_m(Ci|x) = m(Ci|x) / Σ_(i=1..M) m(Ci|x)    (3)
where M is the number of classes. The 'extended probability vector' can then be constructed:

[p_m(C1|x), p_m(C2|x), ..., p_m(Ci|x), ..., p_m(CM|x)]^T    (4)
For SVM using the one-against-one technique to tackle multi-class division, a vote vector is created according to the votes each pixel obtains for each class, and the 'extended probability vector' is then constructed by applying transformation (3). After that, the classification uncertainty of each pixel is measured by the probability entropy, which can be derived from the 'extended probability vector':

H(p) = −Σ_(i=1..M) p(Ci|x) log2 p(Ci|x)    (5)
Different temporal image classifications can be regarded as independent; in other words, the posterior probability vector of a pixel at time T2 is calculated irrespective of the class or feature vector at the previous time T1. Thus, according to Shannon's information theory,

H = −Σ_(j=1..M) Σ_(i=1..M) P_ij log2(P_ij)
  = −Σ_(i=1..M) P(C_i,T1|X_T1) log2(P(C_i,T1|X_T1)) − Σ_(j=1..M) P(C_j,T2|X_T2) log2(P(C_j,T2|X_T2))    (6)

Equation (6) indicates that the change detection uncertainty is the sum of the classification uncertainties on the two dates. The range of H is from 0 to log2(M²), indicating that the uncertainty varies from absolutely certain to absolutely uncertain.
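Equations (3), (5) and (6) combine into a short per-pixel computation. The sketch below assumes only that the memberships (or SVM votes) are non-negative; the function names are ours.

```python
import numpy as np

def extended_probability_vector(memberships):
    """Equation (3): normalise fuzzy memberships (or SVM votes) so
    that they sum to one."""
    m = np.asarray(memberships, float)
    return m / m.sum()

def probability_entropy(p):
    """Equation (5): H(p) = -sum p_i log2 p_i; 0 means fully certain."""
    p = np.asarray(p, float)
    p = p[p > 0]                      # 0 log 0 is taken as 0
    return float(-(p * np.log2(p)).sum())

def change_detection_uncertainty(m_t1, m_t2):
    """Equation (6): with independent dates, the change detection
    uncertainty is the sum of the two classification entropies."""
    return (probability_entropy(extended_probability_vector(m_t1)) +
            probability_entropy(extended_probability_vector(m_t2)))
```

For M = 4 classes, a pixel with uniform memberships on both dates reaches the maximum of log2(M²) = 4 bits, while a pixel assigned with full confidence on both dates scores 0.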
4. Results and discussion
4.1 Applicability analysis on pixel-based image fusion for change detection
Among the 15 algorithms used for pixel-level image fusion, seven representative algorithms, namely PC transformation, IHS transformation, Brovey transformation, HPF, wavelet fusion, SFIM and multiplicative, were selected for quantitative analysis. With reference to the ground truth data, the similarity distances between the parcels where land cover change occurs and parcels without land cover change, as well as between the changed parcels and typical land cover regions (e.g. farmland, built-up area and water body), were calculated on the fused image using the proposed method. The typical land cover change trajectories in the test area shown in Table 1 were analysed in this research. An example is given for the analysis of the wavelet fusion result (Figure 1 and Tables 2 and 3).
Looking at the fused result in Figure 1(d), we can see that by visual interpretation it is hard to separate the changed parcels (highlighted in red) from the unchanged regions. From Table 3, we can see that the similarity distances between the changed parcel and its unchanged neighbours are quite close, at 0.354, 0.064 and 0.251, respectively, whilst the similarity distances between the changed parcel and the representative farmland, built-up area and water body are 1.831, 1.435 and 0.802, respectively, in this example. In a comprehensive comparison of the similarity distances among the image parcel samples, we found that obvious differences between two parcels exist only when the distance is greater than 1.0. This implies that, on the fused image, many unchanged regions around the changed parcels have similar image features, and it further indicates that it is hard to extract the changed regions employing other techniques, such as template analysis. The same conclusion can be reached by analysing the other land cover change trajectories.
The experimental results showed that the quantitative analysis verified the judgement obtained by visual interpretation: for most land cover change trajectories, it is difficult to locate the changed parcels on the fused image. It is noted that, for these two types of data, image fusion at this level is not applicable for change detection. Based on the above analysis, a higher image fusion processing level, the decision level, is then put forward for analysis, which comprises the following sections.
4.2 Information extraction from SPOT5 image
In the object-oriented classification, both SPOT5 pan and SPOT5 XS were used. A Normalized Difference Vegetation Index (NDVI) image was produced as
Table 1. Typical land cover change trajectories.

From          To
Water body    Bare land
Grassland     Bare land
Farmland      Built-up area
Grassland     Built-up area
Bare land     Built-up area
an additional band. In order to reduce information loss, smoothing, filtering and image fusion were not conducted. A two-level classification scheme was adopted, which is listed in Table 4. Land use maps, annual land use change investigation data and field survey results were used for accuracy evaluation.
After experiments, a network of three layers was constructed according to the features of the ground objects. The setting of the parameters is given in Table 5. In each image object layer, spectral features are first used for object separation; for objects with close spectral features, shape, texture and contextual information are further used.
The classified result is illustrated in Figure 2. By accuracy evaluation, for the second-level classification, the overall accuracy is 88.53% with a Kappa coefficient of 0.861; for the first-level classification, the overall accuracy is 90.19% with a Kappa coefficient of 0.872.
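The overall accuracy and Kappa coefficient quoted above come from a standard confusion-matrix calculation, which can be sketched briefly; the matrix values used in the test are illustrative, not the article's.

```python
import numpy as np

def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's Kappa from a confusion matrix
    (rows = reference classes, columns = classified classes)."""
    cm = np.asarray(confusion, float)
    n = cm.sum()
    p_observed = np.trace(cm) / n                         # overall accuracy
    # chance agreement from the row and column marginals
    p_chance = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
    kappa = (p_observed - p_chance) / (1.0 - p_chance)
    return p_observed, kappa
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside the overall accuracy for both classification levels.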
Figure 1. Original images and the fused result: (a) SPOT5 pan, 2004; (b) SPOT5 XS (4R/1G/3B), 2004; (c) RADARSAT-1, 2005; and (d) wavelet fusion result (Daubechies wavelet, two-level decomposition) superimposed by the ground truth of changed parcels in red (annual land use change investigation data).
Table
2.Interiornorm
alisationforeach
feature
componentin
thechangetrajectory
from
waterto
bare
land.
Parcel
type
Image
parcel
Size
(width�
height)
Band
no.
Spectralfeature
Norm
alisedspectralfeature
Mean
Median
SD
Mean
Median
SD
Changed
58�56
15.763
4.000
7.713
0.410
0.406
0.478
226.744
26.000
10.540
0.414
0.414
0.498
361.481
61.000
7.613
0.389
0.384
0.499
Unchanged
123�23
12.837
3.000
2.446
0.400
0.402
0.387
216.970
17.000
2.882
0.387
0.389
0.389
356.140
56.000
2.462
0.347
0.342
0.403
Unchanged
224�23
14.984
4.000
7.350
0.408
0.406
0.472
235.839
35.000
6.117
0.439
0.438
0.435
377.080
77.000
4.651
0.512
0.518
0.444
Unchanged
330�23
13.390
3.000
4.287
0.402
0.402
0.419
226.713
26.000
4.995
0.414
0.414
0.419
356.935
57.000
2.318
0.353
0.351
0.400
202 Y. Zeng et al.
Dow
nloa
ded
by [
Nan
yang
Tec
hnol
ogic
al U
nive
rsity
] at
22:
23 0
4 N
ovem
ber
2014
Farm
land
75�75
1106.759
106.000
9.216
0.785
0.791
0.504
2172.590
172.000
11.979
0.811
0.815
0.519
393.557
93.000
7.322
0.642
0.652
0.494
Built-uparea
65�68
182.873
79.000
29.784
0.697
0.689
0.862
2115.881
112.000
35.742
0.656
0.650
0.859
3113.030
109.000
27.101
0.796
0.786
0.865
Waterbody
63�67
12.685
3.000
1.909
0.399
0.402
0.378
214.078
14.000
2.388
0.379
0.381
0.381
370.731
71.000
2.125
0.462
0.468
0.396
(continued
)
Table 2. Continued. (Rows follow the parcel and band order of the first part of the table.)

Textural feature (Mean, SD):

Image parcel    Band   Homogeneity     Contrast           Entropy         Correlation
Changed         1      0.324, 0.159    22.874, 58.214     1.972, 0.320    −2.945, 7.039
Changed         2      0.258, 0.121    23.094, 41.821     2.120, 0.117    −47.176, 145.695
Changed         3      0.291, 0.132    24.519, 46.168     2.138, 0.103    −95.450, 257.232
Unchanged 1     1      0.282, 0.125    24.939, 15.403     1.999, 0.176    −1.682, 2.175
Unchanged 1     2      0.249, 0.108    30.517, 19.439     2.079, 0.142    −8.257, 9.878
Unchanged 1     3      0.210, 0.119    94.911, 63.224     2.067, 0.147    −4.972, 8.977
Unchanged 2     1      0.414, 0.126    10.261, 41.346     1.963, 0.208    −5.972, 6.989
Unchanged 2     2      0.334, 0.122    13.685, 38.786     2.076, 0.165    −17.526, 25.771
Unchanged 2     3      0.309, 0.117    17.046, 39.627     2.082, 0.126    −19.304, 29.616
Unchanged 3     1      0.327, 0.129    10.713, 5.919      2.020, 0.164    −3.252, 3.543
Unchanged 3     2      0.336, 0.119    8.502, 5.328       2.071, 0.139    −15.007, 17.896
Unchanged 3     3      0.180, 0.121    51.418, 25.818     2.070, 0.147    −4.354, 4.167
Farmland        1      0.146, 0.090    67.184, 50.834     2.176, 0.059    −25.490, 55.804
Farmland        2      0.144, 0.092    67.506, 51.688     2.179, 0.054    −16.023, 27.304
Farmland        3      0.131, 0.082    86.518, 66.086     2.184, 0.045    −18.815, 35.385
Built-up area   1      0.215, 0.126    62.836, 104.212    2.151, 0.096    −36.826, 93.865
Built-up area   2      0.173, 0.113    85.417, 114.296    2.164, 0.078    −41.092, 145.248
Built-up area   3      0.203, 0.121    64.583, 103.557    2.151, 0.097    −47.693, 121.089
Water body      1      0.198, 0.132    212.133, 116.171   1.998, 0.186    −0.186, 0.183
Water body      2      0.197, 0.128    129.546, 76.231    2.031, 0.167    −1.431, 1.660
Water body      3      0.182, 0.120    104.687, 58.661    2.044, 0.155    −1.979, 2.124

Normalised textural feature (Mean, SD):

Image parcel    Band   Homogeneity     Contrast        Entropy         Correlation
Changed         1      0.594, 0.765    0.417, 0.509    0.369, 0.793    0.593, 0.421
Changed         2      0.537, 0.590    0.395, 0.464    0.554, 0.478    0.241, 0.742
Changed         3      0.698, 0.670    0.311, 0.423    0.604, 0.440    0.165, 0.840
Unchanged 1     1      0.518, 0.484    0.421, 0.337    0.422, 0.506    0.607, 0.399
Unchanged 1     2      0.517, 0.408    0.423, 0.362    0.426, 0.572    0.625, 0.386
Unchanged 1     3      0.487, 0.529    0.653, 0.538    0.378, 0.630    0.611, 0.400
Unchanged 2     1      0.758, 0.496    0.387, 0.441    0.353, 0.571    0.557, 0.421
Unchanged 2     2      0.705, 0.600    0.360, 0.450    0.416, 0.663    0.534, 0.428
Unchanged 2     3      0.747, 0.510    0.274, 0.379    0.425, 0.536    0.540, 0.436
Unchanged 3     1      0.599, 0.518    0.388, 0.299    0.463, 0.483    0.589, 0.405
Unchanged 3     2      0.709, 0.561    0.341, 0.297    0.402, 0.559    0.559, 0.407
Unchanged 3     3      0.408, 0.554    0.442, 0.286    0.388, 0.627    0.614, 0.391
Farmland        1      0.271, 0.194    0.520, 0.479    0.761, 0.274    0.330, 0.645
Farmland        2      0.284, 0.182    0.561, 0.509    0.736, 0.233    0.548, 0.432
Farmland        3      0.279, 0.141    0.613, 0.557    0.753, 0.192    0.543, 0.447
Built-up area   1      0.396, 0.496    0.510, 0.693    0.714, 0.347    0.198, 0.819
Built-up area   2      0.348, 0.474    0.627, 0.796    0.690, 0.324    0.301, 0.741
Built-up area   3      0.468, 0.557    0.506, 0.809    0.646, 0.413    0.401, 0.599
Water body      1      0.364, 0.547    0.857, 0.741    0.419, 0.527    0.625, 0.390
Water body      2      0.400, 0.685    0.792, 0.622    0.277, 0.670    0.693, 0.364
Water body      3      0.413, 0.539    0.701, 0.507    0.306, 0.662    0.626, 0.388
Table 3. Exterior normalisation and similarity calculation for sample parcels in the change trajectory from water to bare land. Distances are between the changed parcel and each reference parcel.

Distance from changed parcel to:   Unchanged parcel 1   Unchanged parcel 2   Unchanged parcel 3   Farmland   Built-up area   Water body
Spectral distance                  0.186                0.204                0.149                0.862      1.007           0.222
Normalised spectral distance       0.043                0.064                0.000                0.831      1.000           0.085
Textural distance                  1.104                0.894                1.065                1.573      1.189           1.381
Normalised textural distance       0.310                0.000                0.251                1.000      0.435           0.717
Similarity distance                0.353                0.064                0.251                1.831      1.435           0.802
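The 'exterior normalisation' in Table 3 is consistent with a min-max scaling of each distance row across the compared parcels, with the similarity distance being the sum of the two normalised rows. The sketch below reproduces the table's values to within rounding; the interpretation of the column layout is our assumption.

```python
import numpy as np

# raw spectral and textural distances from Table 3
spectral = np.array([0.186, 0.204, 0.149, 0.862, 1.007, 0.222])
textural = np.array([1.104, 0.894, 1.065, 1.573, 1.189, 1.381])

def minmax(d):
    # exterior normalisation: min-max scaling across the comparison set
    return (d - d.min()) / (d.max() - d.min())

# similarity distance = normalised spectral + normalised textural distance
similarity = minmax(spectral) + minmax(textural)
```

With this reading, the smallest similarity distance identifies the most similar reference parcel, and the class samples (farmland, built-up area) are clearly the most dissimilar.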
4.3 Information extraction from RADARSAT-1 image
Seven texture features were constructed:

Feature 1: Correlation (processed with window size 11);
Feature 2: SD (processed with window size 21);
Feature 3: Entropy (processed with window size 13);
Feature 4: Fractal dimension;
Feature 5: Multi-fractal dimension (q = 8);
Feature 6: Multi-fractal dimension (q = −8); and
Feature 7: Second-order statistic lacunarity (L = 2).
From the correlation matrix (Table 6), we can see that the texture features have low correlation coefficients with respect to each other. This indicates that they are largely independent and can function jointly in further analysis.
Features 1–7 were set as the input of the SVM; after taking samples, the RADARSAT-1 image was classified into built-up area, vegetation, water body and bare land. The radial basis function (RBF) was chosen as the kernel function of the SVM, with penalty parameter C = 100 and γ = 0.25. The classification result is shown in Figure 3. A comparison of the classification performance of different features is presented in Tables 7 and 8.
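As a sketch of this classification step, the reported kernel and parameters can be set up with scikit-learn's SVC; the texture feature extraction and sampling are replaced here by toy data, and the two-class setup stands in for the four land cover classes.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# toy stand-in for the 7-feature texture stack: two separable classes, 7 features each
X = np.vstack([rng.normal(0, 1, (50, 7)), rng.normal(3, 1, (50, 7))])
y = np.array([0] * 50 + [1] * 50)

# RBF kernel with the penalty parameter and gamma reported in the text
clf = SVC(kernel="rbf", C=100, gamma=0.25)
clf.fit(X, y)
pred = clf.predict(X)
```

In practice X would hold the seven texture values per training pixel, and the fitted model would then be applied to every pixel of the RADARSAT-1 scene.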
From Tables 7 and 8, we can see that with an overall accuracy of 69.8926% and a Kappa coefficient of 0.4916, the integrated seven texture features provide better classification accuracy than any other feature combination, and the improvement is significant, with Z-statistics of 37.76787 and 57.63155 when compared with multi-scale GLCM-based features
Table 4. Land cover classification scheme.

Level 1         Level 2
Built-up area   Building; Building shadow; Road
Vegetation      Forest; Grassland; Cropland (including irrigated field and non-irrigated field); Vegetable plot (including vegetable greenhouse)
Water body      River and canal; Lake and pond
Bare land       –
Table 5. Parameter setting for multi-resolution segmentation.

                                    Composition of shape
Level     Scale   Colour   Shape    Smoothness   Compactness
Level 3   110     0.8      0.2      0.5          0.5
Level 2   90      0.9      0.1      0.4          0.6
Level 1   40      0.8      0.2      0.4          0.6
and fractal model-based features, respectively. Besides, we can see that the classification accuracy of multi-scale GLCM-based features is higher than that of fractal model-based features; the Z-statistic between them, 19.93228, further shows that the former is more effective than the latter in terms of accuracy improvement.
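The Z-statistic used here is the standard test for the difference between two Kappa coefficients, Z = |K1 − K2| / sqrt(var(K1) + var(K2)). The Kappa variances are not reported in the text, so the values below are hypothetical and serve only to show the computation:

```python
import math

def kappa_z(k1, var1, k2, var2):
    """Z-statistic for the significance of the difference of two Kappa coefficients."""
    return abs(k1 - k2) / math.sqrt(var1 + var2)

# Kappas of groups 1 and 2 from Table 7; the variances are hypothetical
# (the paper's large Z-values imply far smaller variances than these)
z = kappa_z(0.4916, 1e-4, 0.4680, 1e-4)
```

At the usual 5% level, |Z| > 1.96 indicates a significant difference between the two classifications.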
The classification performance of the traditional maximum likelihood classifier (MLC) and the SVM was also compared using the same integrated features (features 1–7) as input. The overall accuracy of the SVM classification is higher than that obtained using the MLC by 10%, and the Kappa coefficient is improved from 0.3816 to 0.4916.
4.4 Change detection result and the evaluation
By post-classification comparison, the change detection result was obtained as shown in Figure 4. Given the classification code 1 as bare land, 2 as built-up area, 3 as vegetation
Figure 2. Object-oriented classification result of SPOT5 image.
Table 6. Correlation matrix of multi-scale GLCM-based features and fractal model-based features.
Correlation Feature 1 Feature 2 Feature 3 Feature 4 Feature 5 Feature 6 Feature 7
Feature 1    1.0000    0.5349   −0.1568   −0.1820   −0.2498   −0.0740   −0.2217
Feature 2    0.5349    1.0000    0.2902   −0.3043   −0.3096    0.0366   −0.0192
Feature 3   −0.1568    0.2902    1.0000   −0.0542    0.1876   −0.0638    0.4268
Feature 4   −0.1820   −0.3043   −0.0542    1.0000    0.4278   −0.2024   −0.1053
Feature 5   −0.2498   −0.3096    0.1876    0.4278    1.0000   −0.2545    0.1756
Feature 6   −0.0740    0.0366   −0.0638   −0.2024   −0.2545    1.0000    0.0860
Feature 7   −0.2217   −0.0192    0.4268   −0.1053    0.1756    0.0860    1.0000
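A correlation matrix like Table 6 is simply the pairwise Pearson correlation of the per-pixel feature values; with the feature stack arranged as one row per feature it is a single call (the random data below is a stand-in for the real texture features):

```python
import numpy as np

rng = np.random.default_rng(1)
# stand-in for the 7 texture features sampled at 1000 pixels
features = rng.normal(size=(7, 1000))

corr = np.corrcoef(features)  # 7 x 7 symmetric matrix with unit diagonal
```

Off-diagonal magnitudes near zero, as in most of Table 6, are what justify treating the features as jointly informative rather than redundant.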
Figure 3. Classification result of RADARSAT-1 image.
Table 7. Features used for classification and the accuracy.

Group no.   Features for classification                                        Overall accuracy (%)   Kappa coefficient
1           Multi-scale GLCM-based features + fractal model-based features     69.8926                0.4916
2           Multi-scale GLCM-based features                                    67.2711                0.4680
3           Fractal model-based features                                       66.8600                0.4452
4           Multi-scale GLCM + fractal dimension                               68.1001                0.4690
            Multi-scale GLCM + multi-fractal features                          68.2594                0.4700
            Multi-scale GLCM + lacunarity                                      68.5601                0.4751
5           Multi-scale GLCM + fractal dimension + multi-fractal features      68.5857                0.4789
            Multi-scale GLCM + fractal dimension + lacunarity                  69.4600                0.4803
            Multi-scale GLCM + multi-fractal features + lacunarity             69.4831                0.4889
6           SAR intensity image                                                54.4739                0.2395
            SAR backscattering coefficient image                               45.9268                0.2139
Table 8. Comparison of Z-statistic.

Z-value   Group 1    Group 2    Group 3
Group 1   –          –          –
Group 2   37.76787   –          –
Group 3   57.63155   19.93228   –
and 4 as water body, the black in this figure represents no change, the green represents positive difference values and the red represents negative difference values.
In order to build an error matrix, the number of samples was derived from the multinomial distribution (Khorram et al. 1999). For four classes, with the desired confidence level set to 85% and the desired precision set at 7.5%, at least 197 samples would be required. Three times the calculated sample size was used in the accuracy assessment. A special-effort sampling approach was employed, with 67% of the sampling effort dedicated to the changed area and 33% to the unchanged area. The size of each sample is 3 × 3 pixels. The collapsed change detection error matrix of post-classification comparison is listed in Table 9. The overall accuracy of the change detection is 62.7%, the omission error is 6.2%, and the commission error is 31.1%.
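The multinomial sample-size rule of Khorram et al. (1999) can be sketched as below. The worst-case class proportion of 0.5 is our assumption, which yields a figure close to, though not exactly, the 197 samples reported; the exact figure depends on the class proportions the authors used.

```python
import math
from scipy.stats import chi2

def multinomial_sample_size(k, confidence, precision, pi=0.5):
    """Minimum number of samples for an error matrix with k classes
    (worst-case class proportion pi), multinomial approach."""
    alpha = 1.0 - confidence
    b = chi2.ppf(1.0 - alpha / k, df=1)  # upper (alpha/k) chi-square percentile, 1 d.f.
    return math.ceil(b * pi * (1.0 - pi) / precision ** 2)

# four classes, 85% confidence, 7.5% precision, as stated in the text
n = multinomial_sample_size(k=4, confidence=0.85, precision=0.075)
```

Tripling this figure, as done above, simply tightens the achieved precision beyond the design target.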
The spatial distribution of uncertainty in the classification results and the change detection is illustrated in Figure 5. For the classification results, the range of probability entropy is from 0 to 2; for change detection, the range is from 0 to 4. As entropy varies from 0 to its maximum, uncertainty varies from absolutely certain
Figure 4. Change detection result by post-classification comparison.
Table 9. Collapsed change detection error matrix of post-classification comparison.

                                          Reference data
                              Unchanged   Changed   Sum in row
Classification   Unchanged    163         37        200
data             Changed      187         213       400
Sum in column                 350         250       600
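The accuracies quoted in the text follow directly from Table 9; a minimal computation, with the omission and commission errors expressed, as in the text, as a share of all samples:

```python
# Table 9: rows = classified (unchanged, changed), columns = reference
matrix = [[163, 37],
          [187, 213]]

total = sum(sum(row) for row in matrix)               # 600 samples
overall = (matrix[0][0] + matrix[1][1]) / total       # correct / total
omission = matrix[0][1] / total                       # changed missed as unchanged
commission = matrix[1][0] / total                     # unchanged flagged as changed
```

Substituting the soft-decision matrix of Table 13 into the same code reproduces the improved figures discussed later.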
to absolutely uncertain. The frequency distribution of uncertainty was further computed by dividing the range of entropy into 10 intervals and counting the number of pixels falling into each interval (Tables 10–12).
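The probability entropy used as the uncertainty measure is the Shannon entropy of the per-pixel class membership probabilities; with base-2 logarithms it spans 0 to 2 for four classes, and 0 to 4 for the propagated change detection uncertainty. The additive propagation model in the last line is our reading of the stated 0-4 range.

```python
import math

def probability_entropy(p):
    """Shannon entropy (bits) of a class-probability vector."""
    return -sum(q * math.log2(q) for q in p if q > 0)

certain = probability_entropy([1.0, 0.0, 0.0, 0.0])      # no uncertainty
uniform = probability_entropy([0.25, 0.25, 0.25, 0.25])  # maximum for 4 classes

# entropies of the two input classifications add, giving the 0-4 range
# used for the change detection uncertainty (our assumption)
propagated = uniform + uniform
```

A pixel classified with probabilities concentrated on one class thus contributes to the low-entropy intervals of Tables 10-12, while ambiguous pixels fall in the upper intervals.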
In Figure 5(a), we can see that the SPOT5 image has a higher classification certainty, especially for built-up areas; some bare land and harvested farmland have lower certainty because of their spectral similarity. The statistical results in Table 10 show that 92.96% of the pixels in the SPOT5 classified image are within the interval 0–1.0, which further indicates
Figure 5. Spatial distribution of uncertainty represented by probability entropy: (a) uncertainty of SPOT5 image classification; (b) uncertainty of RADARSAT-1 image classification; and (c) propagated change detection uncertainty.
Table 10. Frequency distribution of classification uncertainty of 2004 SPOT5 image.

Interval of entropy (D)   0–0.2   0.2–0.4   0.4–0.6   0.6–0.8   0.8–1.0   1.0–1.2   1.2–1.4   1.4–1.6   1.6–1.8   1.8–2.0
Pixel %                   46.71   1.03      2.31      27.57     15.34     6.15      0.89      0.0       0.0       0.0
Cumulative %              46.71   47.74     50.05     77.62     92.96     99.11     100.0     100.0     100.0     100.0
a higher classification certainty. For the RADARSAT-1 classified image, class-centre pixels have higher certainty than edge pixels, and uncertainty gradually increases from centre to edge; some white clusters were caused by classification error, e.g. a built-up area at the top left of the image was misclassified as water body. Statistical results in Table 11 show that 63.63% and 18.38% of the pixels fall into the intervals 0–0.4 and 1.0–1.4, respectively, which further indicates higher uncertainty at class edges and higher certainty at class centres in the RADARSAT-1 classification; compared with Table 10, only 81.62% of the pixels are within the interval 0–1.0, which indicates a lower certainty than the SPOT5 classification. Change detection uncertainty can be visualised in Figure 5(c). From Figure 5(c) and
Figure 6. Soft-decision change detection based on rules.
Table 11. Frequency distribution of classification uncertainty of 2005 RADARSAT-1 image.

Interval of entropy (D)   0–0.2   0.2–0.4   0.4–0.6   0.6–0.8   0.8–1.0   1.0–1.2   1.2–1.4   1.4–1.6   1.6–1.8   1.8–2.0
Pixel %                   40.26   23.37     14.49     2.47      1.03      10.53     7.85      0.0       0.0       0.0
Cumulative %              40.26   63.63     78.12     80.59     81.62     92.15     100.0     100.0     100.0     100.0
Table 12. Frequency distribution of propagated change detection uncertainty.

Interval of entropy (D)   0–0.4   0.4–0.8   0.8–1.2   1.2–1.6   1.6–2.0   2.0–2.4   2.4–2.8   2.8–3.2   3.2–3.6   3.6–4.0
Pixel %                   39.91   6.94      30.97     2.54      14.19     4.81      0.64      0.0       0.0       0.0
Cumulative %              39.91   46.85     77.82     80.36     94.55     99.36     100.0     100.0     100.0     100.0
Table 12, it can be seen that, because of error propagation, the change detection uncertainty is higher than that of either classification result; the main distribution interval widens from 0–1.0 to 0–2.0 (containing nearly 95% of the pixels). Taking the value range [0, 4] into consideration, the majority of the entropy values are located in the left-to-middle part of the range, which indicates that the change detection result still has an acceptable certainty level.
Soft-decision change detection is illustrated in Figure 6. By applying the rules in turn to the post-classification comparison results, part of the false changes caused by misclassification were eliminated. Statistics show that after executing rules 1–3, the detected changes decreased by 22.6%; after executing rules 4–6, they decreased by another 28.9%.
From Table 13, we can see that the soft-decision approach generates an improved change detection result: the omission error is 6.5% and the commission error is 15.3%; compared with post-classification comparison, the overall accuracy is improved from 62.7% to 78.2%, and the commission error is reduced by half.
The statistics of the area change of land cover types are given in Table 14. It can be seen from the table that from October 2004 to October 2005, the main land cover changes are the reduction of vegetation and the increase of built-up area and bare land. By further analysing the SPOT5 classified image acquired in 2004, we found that for the change from vegetation to built-up area, 20% of the vegetation was cropland and the rest grassland. The study area is located in the downstream region of the Haihe River. Because of its high annual mean relative humidity, plants grow very well, and bare land here usually represents land that has just been exploited as built-up area. The reduction of vegetation and the increase of built-up area and bare land reflect the fact and trend of urban expansion and the occupation of cropland, unused land, etc., which has happened or is happening in urban fringe areas.
Table 14. Area change of land cover types.

                             From (2004)
To (2005)        Vegetation   Built-up area   Bare land   Water body
Vegetation       –            0.0             0.0         0.39
Built-up area    149.0        –               7.5         4.8
Bare land        173.6        0.0             –           18.8
Water body       7.8          0.0             0.0         –

Note: Unit: hectare.
Table 13. Collapsed change detection error matrix of soft-decision change detection.

                                          Reference data
                              Unchanged   Changed   Sum in row
Classification   Unchanged    161         39        200
data             Changed      92          308       400
Sum in column                 253         347       600
5. Conclusions
In this article, the methods and results of fusing multi-temporal SAR and optical images for land cover change detection were presented. The results indicate that, for most land cover change trajectories, it is hard to locate the change by pixel-level image fusion, whereas decision-level image fusion can satisfy the needs well: through object-oriented image analysis and multi-textural analysis based on SVM, more accurate land cover information can be obtained from each sensor for a further joint decision; and compared with hard-decision change detection, the soft-decision approach effectively eliminates overestimated changes and improves the change detection accuracy.
Acknowledgements
This research was funded partially by the Open Research Fund Program of the Key Laboratory of Geomatics and Digital Technology, Shandong Province, China (SD040207), the National High Technology Research and Development Program of China (2009AA122003), the National Natural Science Foundation of China (40801178), the Research Fund for the Doctoral Program of ITC, and an NSERC scholarship.
References

Benz, U.C., et al., 2004. Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information. ISPRS Journal of Photogrammetry and Remote Sensing, 58 (3/4), 239–258.
Bo, Y. and Wang, J., 2003. Uncertainty in remote sensing. Beijing: Geological Publishing House.
Chaudhuri, B.B. and Sarkar, N., 1995. Texture segmentation using fractal dimension. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17 (1), 72–77.
Chavez, P.S., Berlin, G.L., and Sowers, L.B., 1982. Statistical methods for selecting Landsat MSS ratios. Journal of Applied Photographic Engineering, 8 (1), 23–30.
Clausi, D.A., 2000. Comparison and fusion of co-occurrence, Gabor and MRF texture features for classification of SAR sea-ice imagery. Atmosphere-Ocean, 39 (3), 183–194.
Clausi, D.A. and Yue, B., 2004. Comparing co-occurrence probabilities and Markov random fields for texture analysis. IEEE Transactions on Geoscience and Remote Sensing, 42 (1), 215–228.
Duda, R.O., Hart, P.E., and Stork, D.G., 2001. Pattern classification. 2nd ed. New York: Wiley.
eCognition 4.0 User Guide, 2002. Definiens Imaging GmbH, Germany.
ERDAS Field Guide, 2005. Leica Geosystems Geospatial Imaging, USA.
Foody, G.M., et al., 1992. Derivation and applications of probabilistic measures of class membership from the maximum likelihood classification. Photogrammetric Engineering and Remote Sensing, 58 (10), 1335–1341.
Franklin, S.E., 2001. Remote sensing for sustainable forest management. Boca Raton, FL: Lewis Publishers.
Fuller, R.M., Smith, G.M., and Deveraux, B.J., 2003. The characterization and measurement of land cover change through remote sensing: problems in operational applications? International Journal of Applied Earth Observation and Geoinformation, 4 (3), 243–253.
Gallego, F.J., 2004. Remote sensing and land cover area estimation. International Journal of Remote Sensing, 25 (15), 3019–3047.
Goodchild, M.F., Sun, G.Q., and Yang, S.R., 1992. Development and test of an error model for categorical data. International Journal of Geographical Information Science, 6 (2), 87–104.
Guo, H.D., 2000. Theories and applications of radar systems for earth observation. Beijing: Science Publisher.
Hall, D.L., 1992. Mathematical techniques in multisensor data fusion. Norwood: Artech House Inc.
Khorram, S., et al., 1999. Accuracy assessment of remote sensing-derived change detection. USA: American Society for Photogrammetry and Remote Sensing.
Maillard, P., 2003. Comparing texture analysis methods through classification. Photogrammetric Engineering and Remote Sensing, 69 (4), 357–367.
Pohl, C. and van Genderen, J.L., 1998. Multisensor image fusion in remote sensing: concepts, methods and applications. International Journal of Remote Sensing, 19 (5), 823–854.
Shi, W.Z. and Ehlers, M., 1996. Determining uncertainties and their propagation in dynamic change detection based on classified remotely-sensed images. International Journal of Remote Sensing, 17 (14), 2729–2741.
Soh, L.K. and Tsatsoulis, C., 1999. Texture analysis of SAR sea ice imagery using grey level co-occurrence matrices. IEEE Transactions on Geoscience and Remote Sensing, 37 (2), 780–794.
Tuceryan, M. and Jain, A.K., 1993. Handbook of pattern recognition and computer vision. Singapore: World Scientific.
Xu, L. and Krzyzak, A., 1992. Methods of combining multiple classifiers and their applications to handwriting recognition. IEEE Transactions on Systems, Man and Cybernetics, 22 (3), 418–435.