
Enhanced Spatial Priors for Segmentation of Magnetic Resonance Imagery

Tina Kapur 1, W. Eric L. Grimson 1, Ron Kikinis 2, William M. Wells 1,2

corresponding author: [email protected]

Abstract

A framework for probabilistic segmentation of Magnetic Resonance (MR) images is proposed which utilizes three types of models: intensity models to capture the graylevel appearance of a structure, relative-spatial models which describe the spatial relationships between structures in a subject-specific reference frame, and shape models to describe the shape of structures in a subject-independent reference frame. A vast literature exists on intensity as well as shape models, and the main contribution of this work is the development of two relative-spatial models. A discrete vector-valued Markov Random Field (MRF) model, whose parameters are derived from training data, is used to capture the piecewise homogeneity (white matter is likely to occur next to white matter) and the neighborwise compatibility (white matter is unlikely to occur next to skin) of different tissues. The continuous Mean Field solution to the MRF is recovered using the Expectation-Maximization algorithm, and is a probabilistic segmentation of the image. The second model is a conditional spatial distribution, also created using training data, on specific structures (such as cartilage, or brain tissue) conditioned on the geometry of other structures (such as trabecular bone, or ventricles). The motivation is to bootstrap the segmentation process using spatial relationships to structures that image well using MR and hence are segmented easily. Results are presented for the segmentation of white matter, gray matter, fluid, and fat in Gradient Echo MR images of the brain, and for the segmentation of trabecular bone in T2-weighted MR images of the knee.

1 Introduction

The automatic segmentation of anatomical structure from graylevel imagery such as MRI or CT will likely benefit from the exploitation of three different kinds of knowledge: an intensity model that describes graylevel appearance of the structure (e.g. fluid appears bright in T2-weighted MRI), a relative spatial model that describes the spatial layout of the structure in a subject-specific reference frame (e.g. femoral cartilage is attached to the subject's femur), and a shape model that describes the shape of the structure in a subject-independent reference frame (e.g. the brain-stem is tube-like).

Intensity Models: The simplest intensity model – a zeroth order model – is implemented using the graylevel-thresholding operation. It is used, for example, in segmenting skin in MRI []. A second-order intensity model, the local variance of graylevels, is effective in capturing the appearance of trabecular bone in clinical MRI of the knee [12]. Gaussian intensity models, parametrized by their means and variances, are often used to classify white matter and gray matter in brain MRI. Richer models for graylevel appearance were proposed in [20], but have not yet been used with medical images.

1 Massachusetts Institute of Technology, Artificial Intelligence Laboratory, Cambridge MA
2 Department of Radiology, Harvard Medical School and Brigham and Women's Hospital, Boston MA

Relative Spatial Models: Perhaps the simplest relative-spatial model used in the segmentation of medical images is that of connectivity (or lack thereof) between structures. The brain is often modelled as being disconnected from the skin [4]. A richer model uses the average normalized distance between the brain and the skin in the segmentation of brain after segmenting skin [3]. Cortical gray matter is modelled as a crumpled sheet of uniform thickness around white matter, and its segmentation bootstrapped using segmented white matter [18]. The average-brain of [6] provides a spatial relationship between brain tissue and the landmarks used to normalize the brain (the AC-PC line, and the mid-sagittal plane).

Shape Models: The simplest shape model – and one that is sufficient for a fair number of anatomical structures – is that of a convex object. This model is easily encoded using operations of “opening” and “closing” from mathematical morphology, and is often used to remove thin, small structures (such as muscle-fibers, vessels, nerves) that are attached to the structure of interest (such as skin, bone) []. A different flavor of shape models is derived using the “snake” formulation [14, 5] and used to encode the smoothness of object boundaries. Convexity as well as smoothness-of-boundary are weak constraints on shape, and do not use statistics gathered over populations 1. Stronger constraints on the global shape of the boundary of an object can be encoded using a probability distribution on its fourier or other coefficients over a population [16]. Additional local constraints on shape can be represented using the modes of variation (i.e. principal component analysis) of landmarks across a population [7, 17]. It bears mentioning that the difference between shape models and relative-spatial models is subtle. By definition, a relative-spatial model describes the spatial relationship between two or more structures, while a shape model captures the allowed variations for a given structure. There are some shape models, however, that encode and utilize relationships between different anatomical structures [15].

In this work, a framework is presented in which the three types of models can coexist. The main contribution is the development of two relative-spatial models. A Markov Random Field (MRF) model, whose parameters are derived from training data, is used to capture the piecewise homogeneity (white matter is likely to occur next to white matter) and the neighborwise compatibility (white matter is unlikely to occur next to skin) of different tissues.

The second model is a conditional spatial distribution, also created using training data, on specific structures (such as cartilage, or brain tissue) conditioned on the geometry of other structures (such as trabecular bone, or ventricles). The motivation is to represent global, anatomy-specific information such as “femoral cartilage is attached to the base of the femur”, or that “cortical gray matter surrounds white matter”, and to bootstrap the segmentation process using spatial relationships to structures that image well using MR and hence are segmented easily. These conditional distributions associate with each spatial coordinate in the image a probability distribution over the different tissue classes (such as “cartilage” or “not cartilage”) when the spatial location of a previously segmented structure (such as bone) is known. A methodology is presented for creating these anatomy-specific conditional spatial distributions, and demonstrated on two different parts of the anatomy: the brain and the knee.

The paper begins with some background on MRI segmentation in Section 2, which also sets up the notation used in later sections. The proposed framework and the relative-spatial models are described in Sections 3 and 4. Following that, Section 5 shows some results for the segmentation of white matter, gray matter, fluid/air, and skin/fat in Gradient Echo brain MRI, and for the segmentation of trabecular bone in T2-weighted knee MRI. Section 6 presents discussion as well as plans for future extensions.

1 Though, to use either morphology or snakes effectively, the user must learn the optimal parameter settings over a sufficient number of images.

2 Background on MRI Segmentation

The observed MRI signal can be modeled as a product of the true signal that is generated by the underlying anatomy, and a slowly varying, non-linear gain artifact due to the imaging equipment. Using this assumption, an iterative, supervised, Expectation-Maximization style segmentation algorithm was described in [19] that treats the underlying tissue classes as hidden variables and alternates between estimating those classes (E-step) and the maximally probable gain field (M-step). Key components of this algorithm are summarized below.
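As a small numerical illustration of this assumption (a sketch of ours, not from the paper), taking logarithms turns the multiplicative gain into an additive bias, which is what the algorithm below exploits:

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = rng.uniform(50.0, 200.0, size=(4, 4))  # hypothetical tissue intensities
gain = 1.0 + 0.05 * rng.standard_normal((4, 4))      # slowly varying multiplicative gain
observed = true_signal * gain

# In log space the artifact becomes additive: log(observed) = log(true) + log(gain)
bias = np.log(observed) - np.log(true_signal)
assert np.allclose(bias, np.log(gain))
```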

Intensity data is log-transformed, thus transforming the multiplicative gain field to an additive bias field. The observed log intensity Yij at each pixel is modeled as a normal distribution, independent of all other pixels:

p(Yij | Γij = k, βij) = N(Yij − βij; µk, σk) ,  (1)

where
N(x; µ, σ) is the Gaussian distribution with mean µ and variance σ²,
Yij is the observed log intensity at pixel location (i, j),
Γij is the tissue class corresponding to intensity Yij,
µk is the mean intensity for tissue class k,
σk is the standard deviation in intensity of tissue class k,
βij is the bias field at pixel location (i, j).

The method used a prior probability distribution on the tissue labels Γ that is spatially stationary and is given by

P(Γ) = ∏i P(Γi) .  (2)

The bias field β is modeled as a multi-dimensional zero mean Gaussian random variable, to characterize its spatial smoothness.

The E-step of the EM implementation computes the posterior tissue class probabilities when the bias field is known:

Wijk = P(Yij | Γij = k; βij) P(Γij = k) / ∑m P(Yij | Γij = m; βij) P(Γij = m) .  (3)

The M-step computes the value of the bias field β that maximizes the average likelihood of observation:

β = FR ,  (4)

where

Rij = ∑k Wijk (Yij − µk) / σk² ,  (5)

and F is a linear operator that can be approximated by smoothing filters. This step is equivalent to a MAP estimator of the bias field when the tissue probabilities W are known. Detailed derivations of these steps can be found in [19].
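The alternation between the two steps can be sketched as follows on a one-dimensional strip of log intensities. This is an illustrative implementation, not the authors' code: the moving-average filter standing in for F, and the normalization of R by the posterior precision so that the bias carries intensity units, are our simplifying choices.

```python
import numpy as np

def em_segment(Y, mu, sigma, prior, n_iter=10, width=41):
    """Sketch of the EM loop of [19] on a 1-D strip of log intensities.

    Y : (n,) observed log intensities; mu, sigma, prior : (K,) per-class
    Gaussian means, standard deviations, and stationary prior probabilities.
    """
    kernel = np.ones(width) / width               # stand-in for the operator F
    beta = np.zeros_like(Y)                       # additive bias field estimate
    for _ in range(n_iter):
        # E-step (Eq. 3): posterior tissue class weights given the bias field
        resid = Y[:, None] - beta[:, None] - mu[None, :]
        lik = np.exp(-0.5 * (resid / sigma) ** 2) / sigma
        W = lik * prior
        W /= W.sum(axis=1, keepdims=True)
        # M-step (Eqs. 4-5): precision-weighted mean residual, then smoothing
        R = (W * (Y[:, None] - mu[None, :]) / sigma**2).sum(axis=1)
        precision = (W / sigma**2).sum(axis=1)
        beta = np.convolve(R / precision, kernel, mode="same")
    return W, beta
```

On a synthetic strip with two well-separated classes and a slowly varying bias, the posterior weights W recover the class layout while β tracks the smooth component of the residual.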


3 Relative-Spatial Model I: An MRF Prior

A contribution of this work is in the reformulation of the prior distribution on the tissue labels Γ (from Eq. 2) as a Markov Random Field 2 to represent the piecewise homogeneity and the neighborwise compatibility of different tissues. The parameters of the MRF are obtained from manually labelled training data, and its Mean Field solution is recovered using the Expectation-Maximization framework described in the previous section.

Some notation: S = {Sij | 1 ≤ i ≤ m, 1 ≤ j ≤ n} is the lattice on which the MRF is defined, and each site of this lattice – referred to as either Sij or simply ij – corresponds to the pixel location (i, j) in the image. N = {Nij | 1 ≤ i ≤ m, 1 ≤ j ≤ n} defines the neighborhood system for the MRF, where Nij refers to the four neighbors of pixel ij that share an edge with it, i.e. Nij = {Si,j−1, Si,j+1, Si−1,j, Si+1,j}. The tissue labels Γ = {Γij | Sij ∈ S} are modeled as an MRF with the neighborhood system N on the lattice S. Γij is a discrete-valued random vector drawn from the set {[100...0]T, [010...0]T, ..., [000...1]T}, and assigning the value [0...1...0]T (with a 1 in the kth position) to Γij is equivalent to assigning the kth tissue class to the pixel ij in the image. Γ satisfies the Markov condition, given by P(Γij | S \ Sij) = P(Γij | Nij), ∀ij, which means that the value of each random variable Γij in the field depends only on its four neighbors. This is a reasonable assumption for a large class of images, and especially for medical images.

The Hammersley-Clifford theorem establishes the Markov-Gibbs equivalence and states that the probability of a particular configuration γ of an MRF Γ can be computed using the Gibbs probability of its clique potentials over N:

P(Γ = γ) = (1/Z) exp(−∑c Vc(γ))  (6)

Here Vc is a clique potential which describes the prior probability of a particular realization of the elements of the clique c.

Observe that the spatially stationary prior on the labelled image Γ, given by Eq. 2, may be interpreted as an MRF on the image lattice with zeroth order cliques, i.e. there is no interaction between sites of the lattice.

We impose local spatial coherence on the labelling Γ by using two-pixel, or first-order, cliques. Clique potentials are computed using an Ising-like model 3 derived from training data, i.e. the prior probability of tissue class k and tissue class l occurring horizontally adjacent to each other is computed from manually labelled images. Thus, the prior probability on the labelling, Pnew(Γ), is no longer spatially stationary; each label interacts with the labels in its neighborhood system.

Computing the field configuration with the maximum Gibbs probability is computationally intractable, so we use a Mean Field approximation to the general Markov model.

We approximate the values of the field Γ at the neighboring sites by their statistical means, and hence rewrite a Mean Field approximation to Pnew(Γ), Pmf(Γ), as a product of single site probabilities Pmf(Γij). 4

2 MRFs have been used in computer vision to model a variety of spatial priors, most notably those of local and global smoothness [1, 10, ?].

3 The Ising model for phase-transitions in ferromagnets simulates a finite number of spins arranged on a two-dimensional lattice. The spins interact only with their nearest neighbors.

4 Mean Field approximation has been used for surface reconstruction in [9]. A different tractable MRF model for brain segmentation, using an Iterated-Conditional-Modes solver, has been explored in [11].


Pmf(Γ) = ∏ij Pmf(Γij, Nij) ,  (7)

The single site probability Pmf(Γij) is written as a product of the old stationary prior P(Γij) and the probability of each clique involving pixel ij:

Pmf(Γij) = (1/Z) P(Γij) .∗ Ph−(Γ̄i,j−1) .∗ Ph+(Γ̄i,j+1) .∗ Pv−(Γ̄i−1,j) .∗ Pv+(Γ̄i+1,j)  (8)

where
Γ̄ij is the continuous MF approximation to Γij,
.∗ represents a component-wise multiplication of vectors,
Ph−(g) is the prior probability of horizontal co-occurrence (east) of g and the other tissues,
Ph+(g) is the prior probability of horizontal co-occurrence (west) of g and the other tissues,
Pv−(g) is the prior probability of vertical co-occurrence (north) of g and the other tissues,
Pv+(g) is the prior probability of vertical co-occurrence (south) of g and the other tissues.

The prior probabilities of co-occurrence are computed using manually labelled training data. Specifically,

Ph−(g) ≡ Λh− g
Ph+(g) ≡ Λh+ g
Pv−(g) ≡ Λv− g
Pv+(g) ≡ Λv+ g

where Λh− is a k × k matrix, k being the number of tissue classes in the training data, whose mnth element λh−,mn gives the prior probability of tissue class m and n occurring as a horizontal pair in the image, with the left pixel in the pair being tissue class n and the right one being m. The other three Λ's are defined in a similar fashion.
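As an illustration of how the Λ matrices might be estimated from labelled training data and applied in Eq. 8 (a sketch under our own assumptions: we take Λh+ and Λv+ to be the transposes of Λh− and Λv−, and all names are illustrative):

```python
import numpy as np

def cooccurrence_matrices(labels, K):
    """Estimate horizontal/vertical co-occurrence matrices from one labelled
    training image. labels: (m, n) integer tissue classes in [0, K)."""
    Lh = np.zeros((K, K))  # Lh[m, n]: left pixel has class n, right pixel class m
    Lv = np.zeros((K, K))  # Lv[m, n]: top pixel has class n, bottom pixel class m
    for a, b in zip(labels[:, :-1].ravel(), labels[:, 1:].ravel()):
        Lh[b, a] += 1
    for a, b in zip(labels[:-1, :].ravel(), labels[1:, :].ravel()):
        Lv[b, a] += 1
    return Lh / Lh.sum(), Lv / Lv.sum()

def mf_prior(P_stat, west, east, north, south, Lh, Lv):
    """Eq. 8 at one site: component-wise product of the stationary prior and
    one co-occurrence term per neighbor. Each neighbor argument is the (K,)
    mean-field label estimate at that neighbor."""
    p = P_stat * (Lh @ west) * (Lh.T @ east) * (Lv @ north) * (Lv.T @ south)
    return p / p.sum()   # the 1/Z normalization
```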

Intuitively, Eq. 8 modulates the stationary prior at each site by how compatible each candidate tissue class is with the current mean field estimates at the four neighboring sites: a class that frequently co-occurred with the neighbors' likely classes in the training data has its probability raised, while a class that rarely did has its probability suppressed. It is through these co-occurrence terms that the prior enforces piecewise homogeneity and neighborwise compatibility.

4 Relative-Spatial Model II: Conditional Spatial Prior

The motivation for the work described in this section is to represent and utilize in segmentation “common-sense” global spatial relationships such as “white matter is closer to ventricles, while gray matter is closer to the skin”, and “femoral cartilage lies at the base of the femur”.

This method of conditional spatial prior construction was illustrated in [13] to segment femoral cartilage after the femur had been segmented in an image. Specifically, the relationship between the femur and femoral cartilage was represented as a joint probability distribution on the distances and normals between the surfaces of the two structures. This probability distribution was estimated by measuring, for each cartilage point in the manually labelled training data, the distance to the closest point on the surface of the femur, and the local normal to the femur surface at that closest point. The estimated probability distribution was represented using a two-dimensional histogram. Given a novel image in which only the femur had been segmented, Bayes rule was used to combine the spatial prior (the two-dimensional histogram on distances and normals) with a Gaussian measurement model for intensity of cartilage, to predict the probability of cartilage at each point in the image. Hysteresis thresholding was used for classification of cartilage.

The general goal is to build a prior spatial distribution for a given structure, conditioned on the segmentation of another. In order to do this, a compact representation of the relationship between the structures involved (such as the distance-normal representation above of the femur-cartilage relationship) is determined, and then the probability distribution associated with the parameters of the representation (such as the joint histogram above) is measured from training data. Next, we show how to use these general principles to construct a spatial prior for brain tissue conditioned on easily segmented (i.e. by thresholding) structures such as skin and ventricles.

Constructing Spatial Prior on Brain Tissue Conditioned on Ventricles and Skin

We observe that the spatial relationship between the white matter and the skin and the ventricles is well described by the distance (ds) between white matter and the skin, as well as the distance (dv) between white matter and the surface of the ventricles. Thus we describe this relationship by the class conditional probability density function:

P(dsi, dvi | xi ∈ WM)  (9)

where
xi are the spatial coordinates of the ith data voxel,
WM is the set of all voxels belonging to white matter,
S is the set of all voxels belonging to the skin,
V is the set of all voxels belonging to the ventricles,
dsi is short for dsi(S), the distance from xi to the inside surface of the skin,
dvi is short for dvi(V), the distance from xi to the outside surface of the ventricles.

A non-parametric estimate for this joint density function is constructed from examples in which the skin, the ventricles, and white matter have been manually labelled by experts. The training procedure is implemented using the following four steps applied sequentially to each image It in the training set:

1. Compute the Chamfer [2] distance dsi from each point in the image It to the inside surface of the skin.

2. Compute the Chamfer distance dvi from each point in the image It to the outside surface of the ventricles.

3. For each white matter point in It, look up two numbers: the distance dsi to the closest skin point (using the chamfer map computed in step 1) and the distance dvi to the closest ventricle point (using the chamfer map computed in step 2).

4. Histogram the values dsi and dvi jointly. If It is the last image in the training set, normalize the histogram to obtain an empirical estimate for the joint density of dsi and dvi for white matter.
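The four steps above can be sketched as follows. This is an illustrative implementation: a simple two-pass city-block distance transform stands in for the Chamfer distance of [2], and the (skin, ventricles, wm) boolean-mask layout for training items is our own assumption.

```python
import numpy as np

def chamfer(mask):
    """Two-pass city-block distance transform to the True pixels of `mask`
    (a simplified stand-in for the Chamfer distance of [2])."""
    m, n = mask.shape
    d = np.where(mask, 0, m + n).astype(float)
    for i in range(m):                       # forward pass
        for j in range(n):
            if i > 0: d[i, j] = min(d[i, j], d[i - 1, j] + 1)
            if j > 0: d[i, j] = min(d[i, j], d[i, j - 1] + 1)
    for i in range(m - 1, -1, -1):           # backward pass
        for j in range(n - 1, -1, -1):
            if i < m - 1: d[i, j] = min(d[i, j], d[i + 1, j] + 1)
            if j < n - 1: d[i, j] = min(d[i, j], d[i, j + 1] + 1)
    return d

def train_joint_histogram(images, n_bins=32, d_max=64.0):
    """Steps 1-4: empirical joint density of (distance-to-skin,
    distance-to-ventricles) over white-matter voxels."""
    edges = np.linspace(0.0, d_max, n_bins + 1)
    hist = np.zeros((n_bins, n_bins))
    for skin, ventricles, wm in images:
        ds = chamfer(skin)                   # step 1: distance map to skin
        dv = chamfer(ventricles)             # step 2: distance map to ventricles
        # steps 3-4: look up both distances at white-matter voxels, histogram
        h, _, _ = np.histogram2d(ds[wm], dv[wm], bins=[edges, edges])
        hist += h
    return hist / hist.sum()                 # normalized joint density
```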

We construct P(dsi(S), dvi(V) | xi ∈ GM) – an estimate for the pdf describing the relationship between gray matter (GM) and the skin and the ventricles – in a similar fashion.

Note that a histogram has been used to estimate the density from sample points. Other methods, such as Parzen Windowing for density estimation (discussed in [8]), could be used effectively as well. Note also that this model is a combination of heuristics and training from examples. The heuristic used is that distances to the skin and ventricles are pertinent parameters in describing the relative spatial location of brain tissue. Training data is used to estimate probability distributions on these parameters. We believe that information-theoretic schemes could be devised for automatically deducing the relationships from examples as well.

Bayes rule allows us to express the posterior probability that a voxel should be classified as white matter based on observations of its intensity and spatial relation to the skin and the ventricles, P(xi ∈ WM | dsi(S), dvi(V), Ii), as a product of the prior probability that a given voxel belongs to white matter, P(xi ∈ WM), and the class conditional density P(dsi(S), dvi(V), Ii | xi ∈ WM), as follows:

P(xi ∈ WM | dsi, dvi, Ii) = P(dsi, dvi, Ii | xi ∈ WM) P(xi ∈ WM) / P(dsi, dvi, Ii)  (10)

where xi, dsi, dvi, WM, S and V represent the same quantities as in Equation 9, and Ii is the intensity at xi.

This can be rewritten, assuming independence between the intensity at a voxel and its spatial relationship to skin and ventricles, as:

P(xi ∈ WM | dsi, dvi, Ii) = P(dsi, dvi | xi ∈ WM) P(Ii | xi ∈ WM) P(xi ∈ WM) / P(dsi, dvi, Ii)  (11)

The first term in the numerator on the right hand side is the class conditional density for the model parameters, and is estimated using the method described above. The second term in the numerator is a Gaussian intensity model for the tissue class, obtained from samples of white matter intensity. The third term in the numerator, P(xi ∈ WM), is the prior probability that a voxel belongs to white matter. This is computed as the ratio of the white matter volume to the total volume of the head in a segmented scan. The denominator is a normalization factor.
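Putting the three numerator terms together might look as follows (a sketch with illustrative names; we assume the joint histogram and its bin edges come from the training stage above, and we return the numerator of Eq. 11, leaving the common denominator to a later normalization):

```python
import numpy as np

def wm_posterior(ds, dv, I, P_dsdv, edges, mu_wm, sigma_wm, p_wm):
    """Unnormalized posterior (numerator of Eq. 11) that each voxel is white
    matter. P_dsdv: trained joint histogram over (ds, dv) with bin `edges`;
    mu_wm, sigma_wm: Gaussian intensity model; p_wm: volume-ratio prior."""
    # spatial term P(ds, dv | WM): look up the trained joint histogram
    i = np.clip(np.digitize(ds, edges) - 1, 0, P_dsdv.shape[0] - 1)
    j = np.clip(np.digitize(dv, edges) - 1, 0, P_dsdv.shape[1] - 1)
    spatial = P_dsdv[i, j]
    # intensity term P(I | WM): Gaussian density
    intensity = np.exp(-0.5 * ((I - mu_wm) / sigma_wm) ** 2) / (
        sigma_wm * np.sqrt(2.0 * np.pi))
    return spatial * intensity * p_wm
```

A voxel whose distances fall in a high-probability bin of the trained histogram, and whose intensity is near the white matter mean, receives a proportionally larger posterior.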

Using Conditional Spatial Prior on Brain Tissue

This spatial prior can be used in conjunction with the MRF prior by direct replacement of the spatially stationary prior on tissue class:

Ptotal(Γij = [1000]T) = (1/Z) P(Γij = [1000]T | dsij, dvij, Iij) .∗ Ph−(Γ̄i,j−1) .∗ Ph+(Γ̄i,j+1) .∗ Pv−(Γ̄i−1,j) .∗ Pv+(Γ̄i+1,j)  (12)

where Γ̄ij, .∗, Ph−(g), Ph+(g), Pv−(g), Pv+(g) represent the same quantities as in Eq. 8.

5 Results

5.1 Gradient Echo Brain MRI with MF

5.2 Gradient Echo Brain MRI with MF and Spatial Prior

5.3 T2 Weighted Knee MRI

6 Future Work

The formalism for the proposed method extends naturally to three dimensions, and we plan to implement a distributed version (threaded, for multiprocessor Ultra workstations) of the algorithm in three dimensions in the near future. The conditional spatial priors currently train and test on different two-dimensional slices of a three-dimensional data set, which will be replaced by training on multiple three-dimensional data sets.

To measure the efficacy of the proposed method, we plan to validate the results of the three-dimensional implementation against manual segmentation, and against the results of Adaptive Segmentation with a stationary prior.


7 Acknowledgements

The authors would like to thank Alexandra Chabrerie, Polina Golland, Yoshio Hirayasu, Martha Shenton, and Simon Warfield for contributing data to this paper, and Polina Golland for the software used to generate surface-renderings in this paper.


Figure 1: Left: Standard MAP classifier, EMSeg, EMSeg+MF, Middle: EM Weights, Right: EM+MF


Figure 2: Left: Standard MAP classifier, EMSeg, EMSeg+MF+Shape, Middle: EM Weights, Right: EM+MF+SP Weights


Figure 3: Left: knee reconstruction view 1, Right: knee reconstruction view 2


References

[1] J. Besag. On the statistical analysis of dirty pictures. RoyalStat, B-48(3):259–302, 1986.

[2] G. Borgefors. Distance transformations in digital images. Computer Vision, Graphics, and Image Processing, 34:344–371, 1986.

[3] M.E. Brummer, R.M. Mersereau, R.L. Eisner, and R.R.J. Lewine. Automatic detection of brain contours in MRI data sets. IEEE Transactions on Medical Imaging, 12(2):153–166, June 1993.

[4] H. Cline, W. Lorensen, R. Kikinis, and F. Jolesz. Three-Dimensional Segmentation of MR Images of the Head Using Probability and Connectivity. JCAT, 14(6):1037–1045, 1990.

[5] L.D. Cohen and I. Cohen. Deformable models for 3d medical images using finite elements and balloons. In CVPR92, pages 592–598, 1992.

[6] D. Collins, T. Peters, W. Dai, and A. Evans. Model based segmentation of individual brain structures from mri data. In R. Robb, editor, Proceedings of the First Conference on Visualization in Biomedical Computing, pages 10–23. SPIE, 1992.

[7] T.F. Cootes, A. Hill, C.J. Taylor, and J. Haslam. Use of active shape models for locating structure in medical images. IVC, 12(6):355–365, July 1994.

[8] R.O. Duda and P.E. Hart. Pattern Classification and Scene Analysis. John Wiley and Sons, 1973.

[9] D. Geiger and F. Girosi. Parallel and deterministic algorithms from mrfs: Surface reconstruction. PAMI, 13(5):401–412, May 1991.

[10] S. Geman and D. Geman. Stochastic relaxation, gibbs distributions, and the bayesian restoration of images. PAMI, 6(6):721–741, November 1984.

[11] K. Held, E. Rota Kopps, B. Krause, W. Wells, R. Kikinis, and H. Muller-Gartner. Markov random field segmentation of brain mr images. IEEE Transactions on Medical Imaging, 16:878–887, 1998.

[12] T. Kapur, P.A. Beardsley, and S.F. Gibson. Segmentation of bone in knee mri using adaptive region growing. MERL Technical Report, 1997.

[13] T. Kapur, P.A. Beardsley, S.F. Gibson, W.E.L. Grimson, and W.M. Wells III. Model based segmentation of clinical knee mri. In IEEE International Workshop on Model Based 3D Image Analysis, 1998.

[14] M. Kass, A.P. Witkin, and D. Terzopoulos. Snakes: Active contour models. IJCV, 1(4):321–331, January 1988.

[15] S. Solloway, C. Hutchinson, J. Waterton, and C.J. Taylor. The use of active shape models for making thickness measurements of articular cartilage from mr images. MRM, 37(11):943–952, November 1997.

[16] L.H. Staib and J.S. Duncan. Boundary finding with parametrically deformable models. PAMI, 14(11):1061–1075, November 1992.

[17] G. Szekely, A. Kelemen, C. Brechbuehler, and G. Gerig. Segmentation of 3d objects from mri volume data using constrained elastic deformations of flexible fourier surface models. In CVRMed95, pages XX–YY, 1995.

[18] P. Teo, G. Sapiro, and B. Wandell. Segmenting cortical gray matter for functional mri visualization. In ICCV98, pages 292–297, 1998.

[19] W.M. Wells III, R. Kikinis, W.E.L. Grimson, and F. Jolesz. Adaptive segmentation of mri data. IEEE Transactions on Medical Imaging, 1996.

[20] S.C. Zhu, Y. Wu, and D. Mumford. Frame: Filters, random fields and maximum entropy: Towards the unified theory for texture modeling. In CVPR96, pages 696–693, 1996.
