
HYPER-SPECTRAL IMAGE ANALYSIS WITH PARTIALLY-LATENT REGRESSION

Antoine Deleforge, Florence Forbes and Radu Horaud

INRIA Grenoble Rhône-Alpes, France

ABSTRACT

The analysis of hyper-spectral images is often needed to recover the physical properties of planets. To address this inverse problem, the use of learning methods has been considered, with the advantage that, once a relationship between physical parameters and spectra has been established through training, the learnt relationship can be used to estimate parameters from new images underpinned by the same physical model. Within this framework, we propose a partially-latent regression method which maps high-dimensional inputs (spectral images) onto low-dimensional responses (physical parameters). We introduce a novel regression method that combines a Gaussian mixture of locally-linear mappings with a partially-latent variable model. While the former makes high-dimensional regression tractable, the latter makes it possible to deal with physical parameters that cannot be observed or, more generally, with data contaminated by experimental artifacts that cannot be explained with noise models. The method is illustrated on images collected from the planet Mars.

Index Terms— Hyper-spectral images; Regression; Dimension reduction; Mixture models; Latent variable model.

1. INTRODUCTION

The analysis of hyper-spectral images often involves solving an inverse problem to deduce a number of physical parameter values from the observed spectra. This typically requires the estimation of a high-dimensional to low-dimensional mapping, which is challenging. Among others, statistical learning methods have been considered, with the advantage that, once a relationship between physical parameters and spectra has been established through training, the learnt relationship can subsequently be used to estimate parameters from new images underpinned by the same physical model [1]. To obtain training data, radiative transfer models have been developed that link the chemical composition, the granularity, or the physical state to the observed spectrum. They are generally used to simulate huge collections of spectra in order to perform the inversion of hyper-spectral images [2].

Support from the European Research Council (ERC) through the Advanced Grant VHIA (#340113) is greatly acknowledged.

Within this framework, we propose a novel regression method. Regression is a fully supervised method that uses input-response pairs of observed data to infer a mapping; this mapping is then used to predict a response value from an observed input. It is well known that high-dimensional regression is a difficult problem because of the very high number of parameters to be estimated. To address this problem we propose a probabilistic framework, first to learn the parameters of a low-to-high Gaussian mixture of locally-linear regressions, and second to derive the posterior conditional density of the response given an observed input and the estimated model parameters. We show that, by exchanging the roles of the input and response variables during training, high-dimensional regression becomes tractable and the conditional density of the response has a closed-form solution. Moreover, we propose to incorporate a latent component: while the high-dimensional variable remains fully observed, the low-dimensional variable is the concatenation of an observed vector and of an unobserved (latent) vector. The latent component enables us to deal with those physical parameters that cannot be observed during training, or to prevent the observed parameters from being contaminated by experimental artifacts that cannot be explained with noise models.

This model is particularly suited for hyper-spectral imaging, because the computing resources required by radiative transfer models to generate training data increase exponentially with the number of parameters. Thus, these models are generally restricted to a small number of parameters, e.g., the abundance and grain size of the main chemical components. Other parameters, such as those related to meteorological variability or the incidence angle of the spectrometer, are not explicitly modeled and measured. Our method can deal with such non-annotated effects by explicitly incorporating a latent part in the low-dimensional response variable. This allows some form of slack in the observations due to unknown effects.

Regression has been extensively studied in the literature. To deal with high-dimensional input data, most methods first reduce the dimensionality based on the response variables, before performing the actual regression [3–5]. Such two-step approaches cannot be conveniently expressed in terms of a single optimization problem. To estimate non-linear mappings, mixtures of locally linear models have been investigated [6–8], but not in the case of high-dimensional inputs and partially-observed responses. An alternative popular approach is to use kernel functions [4, 9–11], with the drawback that these functions cannot be appropriately chosen automatically, and that the learned mappings cannot be inverted. A more detailed description of the proposed approach and its relation to existing methods is available as a research report [12].

2. REGRESSION WITH PARTIALLY-LATENT RESPONSE VARIABLES

We describe a regression method that maps a low-dimensional variable $X$ onto a high-dimensional variable $Y$. We start by describing the GLLiM (Gaussian Locally Linear Mapping) model, in which both $X$ and $Y$ are fully observed. Then we introduce the hybrid GLLiM (hGLLiM) model that treats $X$ as a partially latent variable. Once the parameters of the hybrid GLLiM model are estimated, we provide a closed-form expression for the high-dimensional to low-dimensional mapping. GLLiM relies on a piecewise linear model in the following way. Let $\{x_n\}_{n=1}^{N} \subset \mathbb{R}^L$ and $\{y_n\}_{n=1}^{N} \subset \mathbb{R}^D$, with $L \ll D$, and let $(y, x)$ be a realization of $(Y, X)$ such that $y$ is the image of $x$ by one of $K$ affine transformations $\tau_k$, plus an error term. This is modeled by a missing variable $Z$ such that $Z = k$ if and only if $Y$ is the image of $X$ by $\tau_k$.

The following decomposition of the joint probability distribution will be used:
$$p(Y=y, X=x; \theta) = \sum_{k=1}^{K} p(Y=y \mid X=x, Z=k; \theta)\, p(X=x \mid Z=k; \theta)\, p(Z=k; \theta),$$
where $\theta$ denotes the vector of model parameters. The locally affine function that maps $X$ onto $Y$ is:

$$Y = \sum_{k=1}^{K} \mathbb{I}(Z=k)\,(A_k X + b_k + E_k) \qquad (1)$$

where $\mathbb{I}$ is the indicator function, matrix $A_k \in \mathbb{R}^{D \times L}$ and vector $b_k \in \mathbb{R}^D$ define the transformation $\tau_k$, and $E_k \in \mathbb{R}^D$ is an error term capturing both the observation noise in $\mathbb{R}^D$ and the reconstruction error due to the local affine approximation. Under the assumption that $E_k$ is a zero-mean Gaussian variable with covariance matrix $\Sigma_k \in \mathbb{R}^{D \times D}$ that does not depend on $X$, $Y$, and $Z$, we obtain:

$$p(Y=y \mid X=x, Z=k; \theta) = \mathcal{N}(y;\, A_k x + b_k, \Sigma_k). \qquad (2)$$

To complete the above hierarchical definition and enforce the affine transformations to be local, $X$ is assumed to follow a mixture of $K$ Gaussians defined by
$$p(X=x \mid Z=k; \theta) = \mathcal{N}(x;\, c_k, \Gamma_k), \qquad p(Z=k; \theta) = \pi_k,$$
where $c_k \in \mathbb{R}^L$, $\Gamma_k \in \mathbb{R}^{L \times L}$ and $\sum_{k=1}^{K} \pi_k = 1$. The parameters of this model are:
$$\theta = \{c_k, \Gamma_k, \pi_k, A_k, b_k, \Sigma_k\}_{k=1}^{K}. \qquad (3)$$
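To make the generative model (1)-(3) concrete, here is a minimal NumPy sketch (ours, not part of the paper or its toolbox) that samples triplets $(Z, X, Y)$ from a toy GLLiM model; all sizes and parameter values are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
K, L, D, N = 3, 2, 10, 500  # toy sizes; the Mars data use L = 5 and D = 184

# Toy parameters theta = {c_k, Gamma_k, pi_k, A_k, b_k, Sigma_k}, Eq. (3)
pi = np.full(K, 1.0 / K)                  # mixture weights, sum to 1
c = rng.normal(size=(K, L))               # Gaussian means in X-space
Gamma = np.stack([0.1 * np.eye(L)] * K)   # Gaussian covariances in X-space
A = rng.normal(size=(K, D, L))            # local affine maps
b = rng.normal(size=(K, D))
Sigma = np.stack([0.01 * np.eye(D)] * K)  # noise covariances

# Sample (Z, X, Y) following the hierarchical model (1)-(2)
Z = rng.choice(K, size=N, p=pi)
X = np.stack([rng.multivariate_normal(c[k], Gamma[k]) for k in Z])
E = np.stack([rng.multivariate_normal(np.zeros(D), Sigma[k]) for k in Z])
Y = np.stack([A[k] @ x for k, x in zip(Z, X)]) + b[Z] + E
```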

The model just described can be learned with standard EM inference methods if $X$ and $Y$ are both observed. The key idea in this paper is to treat $X$ as a partially-latent variable, namely
$$X = \begin{bmatrix} T \\ W \end{bmatrix},$$
where $T \in \mathbb{R}^{L_t}$ is observed and $W \in \mathbb{R}^{L_w}$ is latent ($L = L_t + L_w$). In hybrid GLLiM, parameter estimation uses the observed pairs $\{y_n, t_n\}_{n=1}^{N}$ while it must also be constrained by the presence of the latent variable $W$. This can be seen as a latent-variable augmentation of classical regression, where the observed realizations of $Y$ are affected by the unobserved variable $W$. It can also be viewed as a variant of dimensionality reduction, since the unobserved low-dimensional variable $W$ must be recovered from $\{(y_n, t_n)\}_{n=1}^{N}$. The decomposition of $X$ into observed and latent parts implies that some of the model parameters must be decomposed as well, namely $c_k$, $\Gamma_k$ and $A_k$. Assuming the independence of $T$ and $W$ given $Z$, we write:

$$c_k = \begin{bmatrix} c_k^t \\ c_k^w \end{bmatrix}, \qquad \Gamma_k = \begin{bmatrix} \Gamma_k^t & 0 \\ 0 & \Gamma_k^w \end{bmatrix}, \qquad A_k = \begin{bmatrix} A_k^t & A_k^w \end{bmatrix}.$$
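In code, this block decomposition is just concatenation; a small NumPy sketch under hypothetical shapes (the variable names are ours):

```python
import numpy as np

L_t, L_w, D = 3, 2, 184          # hypothetical dimensions, L = L_t + L_w
c_t, c_w = np.zeros(L_t), np.zeros(L_w)
Gamma_t, Gamma_w = np.eye(L_t), np.eye(L_w)
A_t, A_w = np.ones((D, L_t)), np.ones((D, L_w))

c_k = np.concatenate([c_t, c_w])                       # (L,)
Gamma_k = np.block([[Gamma_t, np.zeros((L_t, L_w))],
                    [np.zeros((L_w, L_t)), Gamma_w]])  # block-diagonal (L, L)
A_k = np.hstack([A_t, A_w])                            # (D, L)
```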

We devised two EM algorithms to estimate the parameters of the proposed model. The suggested algorithms are based on a data augmentation strategy that consists of augmenting the observed variables with the unobserved ones, in order to facilitate the subsequent maximum-likelihood search over the parameters. There are two sets of missing variables, $Z_{1:N} = \{Z_n\}_{n=1}^{N}$ and $W_{1:N} = \{W_n\}_{n=1}^{N}$, associated with the training data set $(y, t)_{1:N} = \{y_n, t_n\}_{n=1}^{N}$, given the number $K$ of linear components and the latent dimension $L_w$. Two augmentation schemes arise naturally. The first scheme, referred to as general hybrid GLLiM-EM or general-hGLLiM, consists of augmenting the observed data with both variables $(Z, W)_{1:N}$, while the second scheme, referred to as marginal-hGLLiM, consists of integrating out the continuous variables $W_{1:N}$ prior to data augmentation with the discrete variables $Z_{1:N}$. A particularly interesting feature of general-hGLLiM is that both the E-step and the M-step can be computed in closed form for various constraints on the noise covariances $\{\Sigma_k\}_{k=1}^{K}$, including "equal for all $k$", isotropic, and diagonal constraints. On the other hand, marginal-hGLLiM only has closed-form steps in the isotropic case, but is easier to initialize. Therefore, the latter is used to initialize the former in practice. Full details on the analytical steps of the resulting hGLLiM algorithm are given in [12], and a Matlab toolbox implementing it is available online (see footnote 1).
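The closed-form E- and M-steps of hGLLiM are detailed in [12]. As a simplified illustration only, the sketch below computes the E-step responsibilities $p(Z_n = k \mid x_n, y_n; \theta)$ for the fully-observed GLLiM special case (no latent $W$); this is our own code, not the toolbox implementation:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def gllim_responsibilities(X, Y, pi, c, Gamma, A, b, Sigma):
    """E-step of fully-observed GLLiM: posterior p(Z=k | x_n, y_n; theta)."""
    N, K = X.shape[0], pi.shape[0]
    D = Y.shape[1]
    log_r = np.empty((N, K))
    for k in range(K):
        resid = Y - (X @ A[k].T + b[k])        # y_n - (A_k x_n + b_k)
        log_r[:, k] = (np.log(pi[k])
                       + mvn.logpdf(X, mean=c[k], cov=Gamma[k])
                       + mvn.logpdf(resid, mean=np.zeros(D), cov=Sigma[k]))
    log_r -= log_r.max(axis=1, keepdims=True)  # stabilize before exponentiating
    r = np.exp(log_r)
    return r / r.sum(axis=1, keepdims=True)
```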

1. https://team.inria.fr/perception/gllim_toolbox/

Once the parameter vector $\theta$ has been estimated by the algorithm, one can derive the following conditional density:

$$p(X=x \mid Y=y; \theta^*) = \sum_{k=1}^{K} \frac{\pi_k^*\, \mathcal{N}(y;\, c_k^*, \Gamma_k^*)}{\sum_{j=1}^{K} \pi_j^*\, \mathcal{N}(y;\, c_j^*, \Gamma_j^*)}\; \mathcal{N}(x;\, A_k^* y + b_k^*, \Sigma_k^*) \qquad (4)$$

which is a Gaussian mixture with $K$ components. Notice that the parameters associated with this mixture,
$$\theta^* = \{c_k^*, \Gamma_k^*, \pi_k^*, A_k^*, b_k^*, \Sigma_k^*\}_{k=1}^{K}, \qquad (5)$$

are obtained analytically from the learned parameters (3) using the following formulae:
$$c_k^* = A_k c_k + b_k, \qquad \Gamma_k^* = \Sigma_k + A_k \Gamma_k A_k^\top, \qquad \pi_k^* = \pi_k,$$
$$A_k^* = \Sigma_k^* A_k^\top \Sigma_k^{-1}, \qquad b_k^* = \Sigma_k^* (\Gamma_k^{-1} c_k - A_k^\top \Sigma_k^{-1} b_k),$$
$$\Sigma_k^* = (\Gamma_k^{-1} + A_k^\top \Sigma_k^{-1} A_k)^{-1}. \qquad (6)$$

To predict an output one can use the expectation of (4):
$$\mathbb{E}[X \mid y; \theta^*] = \sum_{k=1}^{K} \frac{\pi_k^*\, \mathcal{N}(y;\, c_k^*, \Gamma_k^*)}{\sum_{j=1}^{K} \pi_j^*\, \mathcal{N}(y;\, c_j^*, \Gamma_j^*)}\; (A_k^* y + b_k^*). \qquad (7)$$
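As an illustration, equations (6) and (7) translate directly into NumPy; the following sketch is our own code under the paper's formulas, with hypothetical function names:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def inverse_parameters(pi, c, Gamma, A, b, Sigma):
    """Eq. (6): convert the learned low-to-high parameters theta
    into the high-to-low parameters theta*."""
    K = pi.shape[0]
    c_s, G_s, A_s, b_s, S_s = [], [], [], [], []
    for k in range(K):
        Gamma_inv = np.linalg.inv(Gamma[k])
        Sigma_inv = np.linalg.inv(Sigma[k])
        S = np.linalg.inv(Gamma_inv + A[k].T @ Sigma_inv @ A[k])        # Sigma*_k
        c_s.append(A[k] @ c[k] + b[k])                                  # c*_k
        G_s.append(Sigma[k] + A[k] @ Gamma[k] @ A[k].T)                 # Gamma*_k
        A_s.append(S @ A[k].T @ Sigma_inv)                              # A*_k
        b_s.append(S @ (Gamma_inv @ c[k] - A[k].T @ Sigma_inv @ b[k]))  # b*_k
        S_s.append(S)
    return (pi, np.stack(c_s), np.stack(G_s),
            np.stack(A_s), np.stack(b_s), np.stack(S_s))

def predict_mean(y, pi_s, c_s, G_s, A_s, b_s):
    """Eq. (7): posterior mean E[X | y; theta*]. For large D, working with
    logpdf and a log-sum-exp would be numerically safer."""
    K = pi_s.shape[0]
    w = np.array([pi_s[k] * mvn.pdf(y, mean=c_s[k], cov=G_s[k]) for k in range(K)])
    w /= w.sum()
    return sum(w[k] * (A_s[k] @ y + b_s[k]) for k in range(K))
```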

3. RETRIEVING MARS PHYSICAL PROPERTIES FROM HYPER-SPECTRAL IMAGES

Visible and near-infrared imaging spectroscopy is a key remote sensing technique used to study and monitor planets. It records the visible and infrared light reflected from the planet in a given wavelength range and produces cubes of data where each observed surface location is associated with a spectrum. Physical properties of a planet's surface, such as chemical composition, granularity, and texture, are some of the most important parameters that characterize the morphology of the spectra. In the case of Mars, radiative transfer models have been developed to numerically evaluate the connection between these parameters and observable spectra. Such models make it possible to simulate spectra from a given set of parameter values [2]. In practice, the goal is to scan the surface of Mars from orbit in order to observe gas and dust in the atmosphere and to seek signs of specific materials such as silicates, carbonates, or ice at the surface. We are therefore interested in solving the associated inverse problem, namely to infer physical parameter values from the observed spectra. Since this inverse problem cannot generally be solved analytically, the use of optimization or statistical methods has been investigated, e.g., [1]. In particular, supervised statistical learning has been considered, with the advantage that, once a relationship between physical parameters and spectra has been established through training, the learned relationship can then be used for very large datasets and for all new images underpinned by the same physical model.

Within this category of methods, we investigate the potential of the proposed hybrid GLLiM model using a dataset of hyper-spectral images collected by the OMEGA imaging spectrometer [13] on board the Mars Express spacecraft. To this end, a database of synthetic spectra with their associated physical parameter values was generated using a radiative transfer model. This database is composed of 15,407 spectra associated with five real parameter values, namely, (i) proportion of water ice, (ii) proportion of CO2 ice, (iii) proportion of dust, (iv) grain size of water ice, and (v) grain size of CO2 ice. Each spectrum is made of 184 wavelengths. Hybrid GLLiM is used, first to learn a low-to-high dimensional regression function between physical parameters and spectra from the database, and second to estimate the unknown physical parameters corresponding to a new spectrum using the learned function. Since no ground truth is available for Mars, the synthetic database also serves as a test dataset to evaluate the accuracy of the predicted parameter values. In order to fully illustrate the potential of hybrid GLLiM, we deliberately ignore two of the five parameters in the database and consider them as latent variables. During training, we excluded the observed values of the proportion of water ice and the grain size of CO2 ice. A previous study [1] revealed that these two parameters are sensitive to the same wavelengths, in comparison with the proportion of dust, and are suspected to mix with the other two parameters in the synthetic transfer model. Therefore, not only are they harder to estimate, but they seem to affect the estimation of the two other parameters. Moreover, we observed that if these two parameters are treated as observed variables during the learning stage, they tend to degrade the estimation of the other three parameters, which are of particular interest.

The hGLLiM algorithm was compared to JGMM (joint Gaussian mixture model) [7], SIR (sliced inverse regression) [3], RVM (multivariate relevance vector machine) [11] and MLE (mixture of linear experts) [6]. SIR is used with one (SIR-1) or two (SIR-2) principal axes for dimensionality reduction, 20 slices (the number of slices is known to have very little influence on the results), and polynomial regression of order three (higher orders did not show significant improvements in our experiments). SIR quantizes the low-dimensional data $X$ into slices or clusters, which in turn induces a quantization of the $Y$-space. Each $Y$-slice (all points $y_n$ that map to the same $X$-slice) is then replaced with its mean, and PCA is carried out on these means; a toy sketch of this procedure is given below. The resulting dimensionality reduction is thus informed by $X$ values through the preliminary slicing. RVM may be viewed as a multivariate probabilistic formulation of support vector regression [14]. Like all kernel methods, it critically depends on the choice of a kernel function. Using the authors' freely available code (see footnote 2), we ran preliminary tests to determine an optimal kernel choice for each dataset considered.

2. http://www.mvrvm.com/Multivariate Relevance Vector
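As a toy illustration of the SIR procedure just described, the following NumPy sketch (ours, slicing along a single response dimension for simplicity) averages $Y$ within each slice and runs PCA on the slice means:

```python
import numpy as np

def sir_directions(X, Y, n_slices=20, n_components=2):
    """Toy SIR sketch: slice on the first response dimension,
    average Y within each slice, then PCA on the slice means."""
    order = np.argsort(X[:, 0])                # sort samples by the response
    slices = np.array_split(order, n_slices)   # equal-count slices
    means = np.stack([Y[idx].mean(axis=0) for idx in slices])
    means -= means.mean(axis=0)                # center before PCA
    _, _, Vt = np.linalg.svd(means, full_matrices=False)  # PCA via SVD
    return Vt[:n_components]                   # principal axes in Y-space
```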


Table 1. Normalized root mean squared error (NRMSE) for Mars surface physical properties recovered from hyper-spectral images, using synthetic data and different methods.

Method      | Proportion of CO2 ice | Proportion of dust | Grain size of water ice
JGMM        | 0.83 ± 1.61           | 0.62 ± 1.00        | 0.79 ± 1.09
SIR-1       | 1.27 ± 2.09           | 1.03 ± 1.71        | 0.70 ± 0.94
SIR-2       | 0.96 ± 1.72           | 0.87 ± 1.45        | 0.63 ± 0.88
RVM         | 0.52 ± 0.99           | 0.40 ± 0.64        | 0.48 ± 0.64
MLE         | 0.54 ± 1.00           | 0.42 ± 0.70        | 0.61 ± 0.92
hGLLiM-1    | 0.36 ± 0.70           | 0.28 ± 0.49        | 0.45 ± 0.75
hGLLiM-2*†  | 0.34 ± 0.63           | 0.25 ± 0.44        | 0.39 ± 0.71
hGLLiM-3    | 0.35 ± 0.66           | 0.25 ± 0.44        | 0.39 ± 0.66
hGLLiM-4    | 0.38 ± 0.71           | 0.28 ± 0.49        | 0.38 ± 0.65
hGLLiM-5    | 0.43 ± 0.81           | 0.32 ± 0.56        | 0.41 ± 0.67
hGLLiM-20   | 0.51 ± 0.94           | 0.38 ± 0.65        | 0.47 ± 0.71
hGLLiM-BIC  | 0.34 ± 0.63           | 0.25 ± 0.44        | 0.39 ± 0.71

We tested 14 kernel types with 10 different scales ranging from 1 to 30, hence 140 kernels for each dataset in total.

An objective evaluation was done by cross-validation. We selected 10,000 training couples at random from the training set, tested on the 5,407 remaining spectra, and repeated this 20 times. For all algorithms, the training data were normalized to have zero mean and unit variance using scaling and translating factors. These factors were then applied to the test data and to the estimated output to obtain the final estimates. This technique was found to noticeably improve the results of all methods. We used K = 50 for MLE, hGLLiM and JGMM. MLE and JGMM were constrained with equal, diagonal covariance matrices, as these yield the best results. For RVM, the best of the 140 kernels was used: a third-degree polynomial kernel with scale 6 showed the best results using cross-validation on a subset of the database.

As a quality measure of the estimated parameters, we computed the normalized root mean squared error, or NRMSE (see footnote 3), which quantifies the difference between the estimated and real parameter values. NRMSE is normalized, enabling direct comparison between parameters of very different ranges: the closer to zero, the more accurate the predicted values. Table 1 shows the NRMSE values obtained for the three parameters considered here. The ground-truth latent variable dimension is $L_w^* = 2$ and, accordingly, the empirically best dimension for hGLLiM was $L_w^\dagger = 2$. hGLLiM-2 outperformed all the other methods on this task; more precisely, its error is 36% lower than that of the second best-performing method, RVM, closely followed by MLE. No significant difference was observed between hGLLiM-2 and hGLLiM-3. Note that due to the computation of the $D \times D$ kernel matrix, the computational and memory costs of RVM for training were about 10 times higher than those of hGLLiM, using MATLAB implementations. We also tested the selection of the latent dimension ($L_w$) based on BIC [12].

3. NRMSE $= \sqrt{\sum_{m=1}^{M} (\hat{t}_m - t_m)^2 \,/\, \sum_{m=1}^{M} (t_m - \bar{t})^2}$, with $\bar{t} = M^{-1} \sum_{m=1}^{M} t_m$.
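For reference, the normalization scheme and the NRMSE of footnote 3 correspond to the following minimal NumPy helpers (ours, for illustration only):

```python
import numpy as np

def standardize(train, test):
    """Zero-mean, unit-variance scaling fitted on training data, reused on test data."""
    mu, sd = train.mean(axis=0), train.std(axis=0)
    return (train - mu) / sd, (test - mu) / sd, (mu, sd)

def nrmse(t_hat, t):
    """Normalized RMSE (footnote 3): scale-free, closer to zero is better."""
    t_hat, t = np.asarray(t_hat, float), np.asarray(t, float)
    return np.sqrt(np.sum((t_hat - t) ** 2) / np.sum((t - t.mean()) ** 2))
```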

For each training set, the hGLLiM-BIC method minimized BIC over latent dimensions in the range $0 \le L_w \le 20$, and used the corresponding model to perform the regression. Interestingly, hGLLiM-BIC performed very well on these large training sets ($N = 10{,}000$): it correctly selected $L_w = 2$ for all 20 training sets (the BIC selection could differ with the training set), yielding the same results as those obtained with hGLLiM-2.
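A hedged sketch of this BIC-based selection, assuming a hypothetical fit_hgllim training routine (not the actual toolbox API) that exposes the maximized log-likelihood and the number of free parameters of the fitted model:

```python
import numpy as np

def select_latent_dim(Y, T, K, fit_hgllim, max_Lw=20):
    """Pick L_w in [0, max_Lw] minimizing BIC = -2 log L + n_params log N."""
    N = Y.shape[0]
    best_Lw, best_bic = None, np.inf
    for Lw in range(max_Lw + 1):
        model = fit_hgllim(Y, T, K=K, Lw=Lw)  # hypothetical trainer
        bic = -2.0 * model.log_likelihood + model.n_params * np.log(N)
        if bic < best_bic:
            best_Lw, best_bic = Lw, bic
    return best_Lw
```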

Finally, we used an adequately selected subset of the synthetic database, e.g., [1], to train the algorithms, and tested them on real data made of observed spectra. In particular, we focus on a dataset of the South polar cap of Mars. Since no ground truth is currently available for the physical properties of the South pole regions of Mars, we propose a qualitative evaluation using hGLLiM-2 and the three best-performing methods among those tested, namely RVM, MLE and JGMM.

We used these four methods to extract physical parameters from two hyper-spectral images that correspond, approximately, to the same area viewed from two different viewpoints (orbit 41 and orbit 61). Since we are looking for proportions between 0 and 1, values smaller than 0 or higher than 1 are not acceptable and were therefore set to the nearer of the two bounds. As can be seen in Fig. 1, hGLLiM-2 estimates proportion maps with similar characteristics for the two viewpoints, which suggests a good consistency of the method. Such consistency is not observed with the other three tested methods. In addition, RVM and MLE estimate a much higher number of values falling outside the interval [0, 1]. Moreover, hGLLiM-2 is the only method showing less dust at the center of the South polar cap and higher concentrations of dust at the boundaries of the CO2 ice, which matches expected results from planetology [15]. Finally, note that the proportions of CO2 ice and dust clearly appear complementary using hGLLiM-2, while this complementarity is less obvious with the other methods.


[Figure 1 shows two panels, (a) proportion of dust and (b) proportion of CO2 ice, each with maps produced by hGLLiM-2, RVM, MLE and JGMM.]

Fig. 1. Proportions obtained with our method (hGLLiM-2) and three other methods. The data correspond to hyper-spectral images grabbed from two different viewpoints of the South polar cap of Mars. Left: orbit 41; right: orbit 61. White areas correspond to unexamined regions, where the synthetic model does not apply.

4. CONCLUSIONS

We proposed a new high-dimensional regression method allowing for partially-latent responses. This novel approach outperformed four state-of-the-art regression methods in hyper-spectral image analysis. A promising extension would be to take into account dependencies between adjacent pixels in images to enforce smoothness; this could be done using Markov random fields. More complex noise models could also be investigated via Student distributions (e.g., [8]) to accommodate outliers and provide more robust estimation.

REFERENCES

[1] C. Bernard-Michel, S. Douté, M. Fauvel, L. Gardes, and S. Girard, "Retrieval of Mars surface physical properties from OMEGA hyperspectral images using regularized sliced inverse regression," Journal of Geophysical Research: Planets, vol. 114, no. E6, 2009.

[2] S. Douté, E. Deforas, F. Schmidt, R. Oliva, and B. Schmitt, "A comprehensive numerical package for the modeling of Mars hyperspectral images," in Lunar and Planetary Science XXXVIII, March 2007.

[3] K. C. Li, "Sliced inverse regression for dimension reduction," Journal of the American Statistical Association, vol. 86, no. 414, pp. 316–327, 1991.

[4] H. M. Wu, "Kernel sliced inverse regression with applications to classification," Journal of Computational and Graphical Statistics, vol. 17, no. 3, pp. 590–610, 2008.

[5] K. P. Adragni and R. D. Cook, "Sufficient dimension reduction and prediction in regression," Philosophical Transactions of the Royal Society A, vol. 367, no. 1906, pp. 4385–4405, 2009.

[6] L. Xu, M. I. Jordan, and G. E. Hinton, "An alternative model for mixtures of experts," in Advances in Neural Information Processing Systems, December 1995, pp. 633–640, MIT Press, Cambridge, MA, USA.

[7] Y. Qiao and N. Minematsu, "Mixture of probabilistic linear regressions: A unified view of GMM-based mapping techniques," in IEEE International Conference on Acoustics, Speech, and Signal Processing, April 2009, pp. 3913–3916.

[8] S. Ingrassia, S. C. Minotti, and G. Vittadini, "Local statistical modeling via a cluster-weighted approach with elliptical distributions," Journal of Classification, vol. 29, no. 3, pp. 363–401, 2012.

[9] M. E. Tipping, "Sparse Bayesian learning and the relevance vector machine," The Journal of Machine Learning Research, vol. 1, pp. 211–244, 2001.

[10] N. Lawrence, "Probabilistic non-linear principal component analysis with Gaussian process latent variable models," The Journal of Machine Learning Research, vol. 6, pp. 1783–1816, 2005.

[11] A. Thayananthan, R. Navaratnam, B. Stenger, P. Torr, and R. Cipolla, "Multivariate relevance vector machines for tracking," in European Conference on Computer Vision, 2006, pp. 124–138, Springer.

[12] A. Deleforge, F. Forbes, and R. Horaud, "High-dimensional regression with Gaussian mixtures and partially-latent response variables," Statistics and Computing, 2014.

[13] J.-P. Bibring et al., "OMEGA: Observatoire pour la Minéralogie, l'Eau, les Glaces et l'Activité," in Mars Express: The Scientific Payload, 2004, vol. 1240, pp. 37–49.

[14] A. J. Smola and B. Schölkopf, "A tutorial on support vector regression," Statistics and Computing, vol. 14, no. 3, pp. 199–222, 2004.

[15] S. Douté et al., "Nature and composition of the icy terrains of the south pole of Mars from MEX OMEGA observations," in Lunar and Planetary Science XXXVI, March 2005.