
A Cooperative Framework for Segmentation of MRI Brain Scans

Laurence Germond (a), Michel Dojat (b), C. Taylor (c), C. Garbay (a)

(a) Laboratoire TIMC-IMAG, Institut Bonniot, Faculté de Médecine, Domaine de la Merci, 38706 La Tronche Cedex, France
(b) Institut National de la Santé et de la Recherche Médicale, U438 - RMN Bioclinique, Centre Hospitalier Universitaire - Pavillon B, BP 217, 38043 Grenoble Cedex 9, France
(c) Department of Medical Biophysics, University of Manchester, Stopford Building, Oxford Road, Manchester M13 9PT, England

Address for correspondence: M. Dojat

INSERM U438 - RMN Bioclinique

Centre Hospitalier Universitaire - Pavillon B, BP 217, 38043 Grenoble Cedex 9, FRANCE

phone: 33 4 76 76 57 48

fax: 33 4 76 76 58 96

email: [email protected]


Abstract:

Automatic segmentation of MRI brain scans is a complex task for two main reasons: the large variability of human brain anatomy, which limits the use of general knowledge, and the artifacts inherent to MRI acquisition, which are difficult to process. To tackle these difficulties we propose to mix, in a cooperative framework, several types of information and knowledge provided and used by complementary individual systems: presently, a multi-agent system, a deformable model and an edge detector. The outcome is a cooperative segmentation performed by a set of region and edge agents constrained automatically and dynamically by the specific gray levels of the image under consideration, by statistical models of the brain structures and by general knowledge about MRI brain scans. Interactions between the individual systems follow three modes of cooperation: integrative, augmentative and confrontational cooperation, combined during the three steps of the segmentation process, namely the specialization of the seeded-region-growing agents, the fusion of heterogeneous information and the retroaction over slices. The proposed cooperative framework allows the segmentation process to adapt dynamically to the specific characteristics of each MRI brain scan. Its evaluation using realistic brain phantoms is reported.

Keywords: Multi-agent system, Cerebral Cortex, Active Shape Model.


1. INTRODUCTION

The development of reliable segmentation methods is a central prerequisite to exploit in depth the information contained in Magnetic Resonance Imaging (MRI) brain scans. In particular, the functional organization of the cortical areas, as assessed by functional brain imaging, can only be visualized and understood with a precise delineation of the cortical ribbon. Automated procedures for accurate cortex delineation rely on the precise segmentation of the brain tissues. The automatic segmentation of MR images is a complex task for two main reasons: the inter-subject variability of human brain anatomy, which restricts the use of general knowledge, and image artifacts inherent to the acquisition process that are difficult to correct, such as partial volume effects and non-uniformity of the gray-level intensities for a given tissue, essentially due to inhomogeneities of the excitation pulse (radio-frequency field). As an example, the non-uniformity of the gray levels for gray matter (GM), within and between images, is shown in Figure 1. The importance of these gray-level variations motivates the use of an adaptive segmentation process.


Figure 1. Variations of the mean gray level values for gray matter tissue.

This figure shows the variations of the mean gray level values for contiguous samples of gray matter (300 mm2) selected alongside the brain boundary as obtained via the deformable model. Each curve corresponds to one subject. Three different acquisition sequences were used, corresponding to the top, middle and bottom curves. The selected slices were all located at the AC-PC plane. The curve corresponding to the canonical image used throughout the paper (see Fig. 5) is the second one from the bottom. The X-axis corresponds to the curvilinear coordinate along the deformable model boundary; the Y-axis corresponds to the mean gray level values.

Many approaches have been proposed for MRI segmentation (for a review see [5, 32]). We classify them into data-driven approaches and model-based approaches. Although generally used independently, we consider the two approaches as complementary, and propose to integrate them in a cooperative framework to benefit from the advantages and specificities of each of them. In order to design such a framework, we introduce three modes of cooperation [12], integrative, augmentative and confrontational, between the three systems we have considered: a multi-agent system, a statistical deformable model and an edge detector. The multi-agent system and the edge detector we use are representative of data-driven approaches, and the statistical deformable model of model-based approaches. Each system represents a specific source of information about the image contents. The modes of cooperation combine (integrative cooperation), distribute (augmentative cooperation) and oppose (confrontational cooperation) different solutions to a given problem. Cooperation is used at each step of the segmentation process, namely: specialization of the seeded-region-growing agents, fusion of heterogeneous information and retroaction over slices.

• The specialization step, based on integrative and augmentative cooperation, customizes two sets of specialized agents that successively segment gray matter and white matter (WM). In addition to general knowledge about brain anatomy and brain scans, the statistical 2D deformable model provides valuable information on the brain boundary to position the GM agents, which eventually allows the localization of the WM agents.

• The fusion step relies on augmentative and confrontational cooperation. It refines the detection of the brain boundary from both the region classification obtained via the multi-agent system and the edges provided by the edge detector. The refinement process is based on several specific agents working concurrently along the brain boundary.

• The retroaction step uses integrative and confrontational cooperation. It allows the improvement of the current segmentation and the transmission of information to initialize the segmentation of contiguous slices. Although we use a 2D deformable model, this step achieves a 2.5D segmentation.

In the remainder of the paper, we briefly present in Section 2 the state of the art in MRI brain scan segmentation. We detail, in Section 3, the different systems currently present in our framework. Section 4 describes the complete segmentation process to highlight the three modes of cooperation we propose. Section 5 is devoted to the evaluation of the system performance using phantoms provided by the Montreal Neurological Institute (MNI). Then, in Section 6, we discuss several aspects of our work on the segmentation of MRI brain scans and point out work under development.

2. STATE OF THE ART

We separate the numerous approaches proposed in the literature into two main categories, depending on whether they rely on models (model-based approaches) or mainly on the gray levels present in the images (data-driven approaches). We note that, according to this categorization, region-based classification techniques tend to use local models (data-driven approaches) and implicit or explicit knowledge, whilst edge-based detection methods rely on global models and implicit knowledge.

2.1. DATA DRIVEN APPROACHES

Classification processes, such as Bayesian classification or Markov random fields, are the main representatives of data-driven approaches. They have been widely used for region segmentation and interpretation of cerebral tissues [11, 14, 17, 23, 31]. These approaches can be used to provide volume measurements of the brain tissues, pixel labeling and also the search for areas of interest. The main difficulties encountered with these approaches come from the artifacts inherent to the image acquisition procedure. Specific preprocessing techniques have been proposed [22, 26] to cope with the non-uniformity of the gray-level intensities for a given tissue, which is a critical point in these approaches. Moreover, a preprocessing step is also often required to isolate the brain and lower the number of classes to detect. Due to the local and systematic nature of the treatments used, these approaches are easily extendable to 3D.

2.2. MODEL-BASED APPROACHES

Modeling is mainly related to the introduction of a priori information or knowledge about edges and brain structures. Models provide a robust localization of the structures of interest without requiring a preprocessing step. Among the set of model-based approaches, we can quote those based on snakes [4, 9, 10, 21, 24] and those based on the statistical processing of a training set of images [7, 27, 29]. Models use global constraints that limit their deformation capability and thus their ability to describe local deformations. As a consequence, some structures present in brain images, highly variable in shape, appearance and topology, are difficult to model with sufficient precision to be useful. The design of 3D models is a challenging task, since the majority of currently available models are built in 2D.

2.3. COOPERATIVE APPROACHES

Faced with the complexity of MRI brain scan segmentation, several approaches have been proposed [15, 28, 30] that combine several autonomous modules, each responsible for a part of the task. Although the resulting method is more robust than its individual components, the cooperation between the modules remains embryonic: each sub-task is performed sequentially and its result is used to feed the following sub-task. Following [2], we propose to use the concept of cooperation more fully, thus taking advantage of the complementarity of different systems for the generation of constraints and information exploited by each system. We have studied three modes of cooperation to facilitate the integration of heterogeneous sources of information, i.e. anatomical information provided by model-based approaches and self-contained image information provided by data-driven approaches. We assume that a framework based on the multi-agent paradigm is well adapted to operationalize these modes of cooperation.

3. THE INDIVIDUAL SYSTEMS

Three individual systems, which generate and use different types of information and knowledge, are currently used in our framework (Figure 2).

• The multi-agent system, which segments MRI scans, is composed of autonomous seeded-region-growing and edge-detection entities called agents. It has a pivotal role in our framework, the other systems playing the role of information providers via our cooperation modes. It processes local gray-level information through a priori domain knowledge and a statistical model whose parameters are acquired at run time.

• The deformable model provides spatial and topological information based on an off-line learned statistical model adaptable to the image to segment.

• The edge detector provides edges based on gray-level intensity information and a priori domain knowledge.


Figure 2. A global view of the framework and information flow.

The three systems used in our framework are represented by rectangles. Dashed rectangles show the results provided. Each triangle indicates a step in the global segmentation process (specialization: integrative & augmentative cooperation; fusion: augmentative & confrontational cooperation; retroaction: integrative & confrontational cooperation). The arrows indicate the information flow and the dashed arrows the introduction of the domain knowledge Tc1, Tc2, Cc1 and Cc2 (see Section 3.1 for details).

Several types of external information or domain knowledge are required to

correctly segment MRI brain scans. Following [20], we explicitly mention the

knowledge used by the multi-agent system and the edge detector to perform

their task. We assume that similar knowledge is implicit in numerous brain

scan segmentation systems.

3.1. COMMONLY USED DOMAIN KNOWLEDGE FOR MRI BRAIN SCAN SEGMENTATION

We classify the knowledge used according to two types of criteria, namely topology criteria and classification criteria.

• Topology criterion 1 (Tc1): WM ⊂ GM, i.e. white matter is surrounded by gray matter.

• Topology criterion 2 (Tc2): If dst(p, Model) ≤ 2 Then p ∈ GM ∪ CSF.
GM can be modeled as a ribbon approximately 2 mm wide. Thus, if a pixel p (1 mm2 with the resolution used) is less than two pixels away from the model boundary, it belongs to either GM or cerebro-spinal fluid (CSF).

• Classification criterion 1 (Cc1): E/M{Matter} ⇒ {sup(GM), mean(WM)}, where Matter = {GM, WM}.
We consider that the Estimation/Maximization (E/M) algorithm on three classes is sufficient to calculate the upper gray-level value of GM, i.e. sup(GM), and the mean of WM, i.e. mean(WM).

• Classification criterion 2 (Cc2): If p ∈ GM Then I(p) < mean(WM); If p ∈ WM Then I(p) > sup(GM),
where I(p) is the gray level of pixel p in the image. This criterion depends on the image acquisition parameters and is valid for the T1-weighted acquisition sequence we use.
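As an illustration of how Cc1 and Cc2 could be operationalized, the following Python sketch (not the authors' implementation, which is in C++) fits a three-class Gaussian mixture to the brain gray levels and derives the two thresholds. The scikit-learn dependency and the "mean + 2σ" definition of sup(GM) are assumptions made for this sketch only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def estimate_class_statistics(brain_pixels, rng_seed=0):
    """Fit a 3-class Gaussian mixture (CSF, GM, WM) to the gray levels and
    derive the quantities used by criteria Cc1 and Cc2.

    brain_pixels: 1-D array of gray levels inside the brain mask.
    Returns (sup_gm, mean_wm). The 'mean + 2*std' definition of sup(GM)
    is an assumption; the paper does not spell it out.
    """
    x = np.asarray(brain_pixels, dtype=float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=3, random_state=rng_seed).fit(x)
    # Sort the components by mean: for T1-weighted scans, CSF < GM < WM.
    order = np.argsort(gmm.means_.ravel())
    means = gmm.means_.ravel()[order]
    stds = np.sqrt(gmm.covariances_.ravel()[order])
    mean_gm, std_gm = means[1], stds[1]
    mean_wm = means[2]
    sup_gm = mean_gm + 2.0 * std_gm          # assumed upper bound of GM
    return sup_gm, mean_wm

def consistent_with_cc2(intensity, hypothesis, sup_gm, mean_wm):
    """Check criterion Cc2 for a pixel hypothesized to be GM or WM."""
    if hypothesis == "GM":
        return intensity < mean_wm
    if hypothesis == "WM":
        return intensity > sup_gm
    raise ValueError("hypothesis must be 'GM' or 'WM'")
```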

3.2. THE MULTI-AGENT SYSTEM

Presently, two classes of agents have been defined: region agents, specialized for GM or WM segmentation, and edge agents, specialized for brain boundary detection (Figure 3). The agents are autonomous and simple entities implemented as C++ objects, which communicate through a shared environment and are executed concurrently under the control of a scheduler. The RegionAgent class defines the initial localization in the image (seed) and the behavior (parametersOfSpecialization) of the pixel-growing agents. The EdgeAgent class requires the specification of the bounds of the edge (bounds) and of the parameters of the cost function (costFunction). Each agent is locally specialized and initialized at runtime (see Section 4).

Figure 3. Agent hierarchy.

Three agent classes compose the hierarchy, linked by an is-a relation (arrows): Agent (identificationNumber, allocationTime), RegionAgent (seed, pixelCandidates, parametersOfSpecialization) and EdgeAgent (bounds, costFunction). The instance variables of each class are indicated.
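The paper's agents are C++ objects; the Python dataclasses below are only a minimal sketch of the hierarchy of Figure 3, with attribute names transposed from the figure and the parameters of specialization represented as a local mean and standard deviation.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple

@dataclass
class Agent:
    """Base class: every agent carries an identifier and a scheduling time."""
    identification_number: int
    allocation_time: float = 0.0

@dataclass
class RegionAgent(Agent):
    """Seeded-region-growing agent, specialized for GM or WM."""
    seed: Tuple[int, int] = (0, 0)                    # initial localization in the image
    pixel_candidates: List[Tuple[int, int]] = field(default_factory=list)
    mu: float = 0.0                                   # parameters of specialization:
    sigma: float = 1.0                                # local mean and standard deviation

@dataclass
class EdgeAgent(Agent):
    """Edge agent searching for a best path between two boundary extremities."""
    bounds: Tuple[Tuple[int, int], Tuple[int, int]] = ((0, 0), (0, 0))
    cost_function: Optional[Callable[[Tuple[int, int]], float]] = None
```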

3.3. THE DEFORMABLE MODEL

This system is based on the method detailed in [7] and consists of a deformable model of the brain envelope. Searching for the model instance in the image allows the automatic detection of the brain boundary and does not require any preprocessing. A typical search for the brain envelope is shown in Figure 4. The model was constructed from a set of 19 scans in the AC-PC plane; 105 landmarks were manually placed to delineate the brain contour. The result provided by the model allows the derivation of spatial and topological information about the localization and local characteristics of the brain tissues. This information is then used by the multi-agent system. Note, however, that the cortical boundary is only roughly detected; a precise delineation of the brain sulci remains necessary.
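For readers unfamiliar with active shape models [7], the sketch below builds a simple point-distribution model from landmark contours (a mean shape plus principal modes of variation). It is a generic illustration, not the authors' model; Procrustes alignment of the 19 training contours is assumed to have been done beforehand.

```python
import numpy as np

def build_point_distribution_model(training_shapes, n_modes=8):
    """Build a point-distribution model from landmark contours.

    training_shapes: array of shape (n_scans, n_landmarks, 2); the paper
    uses 19 scans with 105 manually placed landmarks. Returns the mean
    shape, the first n_modes eigen-deformations and their eigenvalues.
    """
    shapes = np.asarray(training_shapes, dtype=float)
    n, k, _ = shapes.shape
    x = shapes.reshape(n, 2 * k)                 # each contour as one vector
    mean_shape = x.mean(axis=0)
    cov = np.cov(x - mean_shape, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1][:n_modes]   # keep the largest modes
    return mean_shape.reshape(k, 2), eigvec[:, order], eigval[order]

def instantiate_shape(mean_shape, modes, b):
    """Generate a plausible brain contour: mean + sum_i b_i * mode_i."""
    k = mean_shape.shape[0]
    x = mean_shape.reshape(-1) + modes @ np.asarray(b, dtype=float)
    return x.reshape(k, 2)
```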


Figure 4. Automatic detection of the brain envelope.

The figure indicates the final position of the model after deformation to fit the characteristics of a new image not present in the initial training set.

3.4. THE EDGE DETECTOR

The edge detector is a self-contained system detailed in [3]. It has been chosen for its ability to provide a precise and robust localization of the boundaries for all the edges in a given image. Its principle is to consider edge pixels as distinct from region pixels. As a consequence, the image is magnified to double resolution so that region pixels can easily be separated from edge pixels. The edge detection operator applied to the image is based on a B-spline interpolation of the signal. It works according to a principle of hysteresis thresholding controlled by three parameters (smoothing, high threshold and low threshold). These parameters are fixed manually to visualize fine structures; empirically, the values (2, 4 and 8) are frequently used with our collection of MRI brain scans. This detector also implicitly uses the knowledge criteria described in Section 3.1. An example of a source image and the corresponding result provided by the edge detector is presented in Figure 5. The same source image is used as a running example for the rest of the paper to illustrate all the steps of our system.
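The detector of [3] relies on recursive B-spline filters that are not reproduced here; the sketch below only illustrates the hysteresis-thresholding principle on a generic gradient magnitude. The mapping of the (2, 4, 8) values onto (smoothing, low, high) is an assumption of this sketch and may differ from their exact meaning in [3].

```python
import numpy as np
from scipy import ndimage

def hysteresis_edges(image, sigma=2.0, low=4.0, high=8.0):
    """Generic hysteresis thresholding sketch (not Chehikian's detector [3]).

    The image is smoothed, a gradient magnitude is computed, then pixels
    above `high` seed the edges, which are extended through connected
    pixels above `low`.
    """
    smoothed = ndimage.gaussian_filter(np.asarray(image, dtype=float), sigma)
    gx = ndimage.sobel(smoothed, axis=1)
    gy = ndimage.sobel(smoothed, axis=0)
    magnitude = np.hypot(gx, gy)
    strong = magnitude >= high
    weak = magnitude >= low
    # Keep the weak connected components that contain at least one strong pixel.
    labels, n = ndimage.label(weak)
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True
    keep[0] = False
    return keep[labels]
```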

Figure 5. Edge detection

Left: the image in which edges have to be detected. This image is used as a running example for the rest of the paper to illustrate the main aspects of the global segmentation process.

Right: The edges obtained by the edge detector.

4. COMPLETE SEGMENTATION PROCESS

In our framework, cooperation is the central mechanism to achieve the segmentation. The three modes of cooperation we introduce are detailed below by observing the complete segmentation process at work.

4.1. SPECIALIZATION OF REGION AGENTS

The use of a multi-agent system for MRI brain scan segmentation is a way to provide local adaptivity to gray-level inhomogeneities, because each agent can be specialized depending on its position and local environment. The main difficulty is the automatic determination of each agent's specialization [20]. This is achieved through integrative cooperation between the deformable model and the region agents.


4.1.1. DETERMINATION OF THE AGENT POSITION

The position and number of the GM agents are automatically constrained by the detection of the brain boundary by the deformable model. The principle is to place an agent every n1 pixels along the brain boundary and to move it towards the inside of the brain, in a direction normal to the local boundary, over a distance of n2 pixels. In our current application, n1 and n2 are equal to 5 and 2 respectively and, based on the domain knowledge Tc2 (see §3.1), we assume that the selected pixels are in GM. The position of the WM agents is defined after GM segmentation. This allows the definition of an area of interest located inside the brain boundary detected by the deformable model and not in previously segmented GM (domain knowledge Tc1, §3.1). In this area, we have adopted a systematic principle of positioning: one agent every n3 pixels along both directions of the image; n3 is set to 10 in the current application. For the GM agents as well as for the WM agents, criteria Cc1 and Cc2 (see §3.1) allow the validity of each selected agent position to be checked; unreliable agents are suppressed. Following the augmentative cooperation, the region segmentation task is distributed to the set of GM and WM agents. Each agent, locally adapted to a sub-part of the image, performs its partial segmentation task as well as possible. Due to their concurrent launching, each agent is constrained at run time by the agents working in its neighborhood. The pixel classification is based on a first-come first-served principle.
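A minimal Python sketch of this seed-placement strategy is given below, for illustration only. It assumes the deformable model provides boundary points together with inward-pointing unit normals, and it omits the Cc1/Cc2 validity checks that suppress unreliable agents.

```python
import numpy as np

def place_gm_seeds(boundary_points, inward_normals, n1=5, n2=2):
    """Place GM seeds every n1 points along the model boundary, moved
    n2 pixels inwards along the local normal (n1=5, n2=2 as in the paper).

    boundary_points, inward_normals: arrays of shape (N, 2); the normals
    are assumed normalized and pointing into the brain.
    """
    pts = np.asarray(boundary_points, dtype=float)[::n1]
    nrm = np.asarray(inward_normals, dtype=float)[::n1]
    return np.round(pts + n2 * nrm).astype(int)

def place_wm_seeds(brain_mask, gm_mask, n3=10):
    """Place WM seeds on a regular n3 x n3 grid inside the brain,
    excluding pixels already classified as GM (criterion Tc1)."""
    seeds = []
    rows, cols = brain_mask.shape
    for r in range(0, rows, n3):
        for c in range(0, cols, n3):
            if brain_mask[r, c] and not gm_mask[r, c]:
                seeds.append((r, c))
    return seeds
```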

4.1.2. DETERMINATION OF THE AGENT BEHAVIOR

The behavior of a seeded-region-growing agent is determined by two parameters (µi, σi). The pixels candidate for aggregation to the region under construction are selected at the current region border in 4-connectivity and must satisfy Equation (1), where I(p) represents the gray level of pixel p and α is a fixed coefficient, currently set to 1.5:

µi − α·σi ≤ I(p) ≤ µi + α·σi   (1)

In order to calculate (µi, σi) automatically, samples of tissue are selected and processed following criterion Sc1:

• Sampling criterion 1 (Sc1): size(Window{GM}) = 100 × 3 pixels

For each gray-matter agent, we select a band of 100 pixels in length and 3 pixels in width (i.e. 300 mm2) to perform a local E/M on the gray levels of both CSF and GM (domain knowledge Tc2, §3.1). The mean and the variance of the Gaussian curve with the highest mean (domain knowledge Cc2, §3.1) are used to define the behavior of the considered agent.

For the white-matter agents, criterion Sc2 is used:

• Sampling criterion 2 (Sc2): size(Window{WM}) = 9 × 9 pixels

For each white-matter agent, we select a square sample of 9 × 9 pixels (i.e. 81 mm2) to calculate the local mean and variance of the white matter. Only non-segmented pixels are considered, and the mean and variance of the resulting sample are used to define the behavior of the corresponding WM agent.
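The sketch below illustrates the growth behavior of a single region agent under the acceptance test of Equation (1); the first-come first-served claim on pixels is modeled with a shared "free" mask. This is an illustrative re-implementation in Python, not the authors' C++ code.

```python
import numpy as np
from collections import deque

def grow_region(image, seed, mu, sigma, alpha=1.5, free_mask=None):
    """Seeded region growing under Eq. (1):
    mu - alpha*sigma <= I(p) <= mu + alpha*sigma, in 4-connectivity.

    (mu, sigma) are the agent's local statistics estimated from its tissue
    sample (Sc1 for GM, Sc2 for WM); alpha = 1.5 as in the paper.
    free_mask marks pixels not yet claimed by another agent.
    """
    img = np.asarray(image, dtype=float)
    if free_mask is None:
        free_mask = np.ones(img.shape, dtype=bool)
    region = np.zeros(img.shape, dtype=bool)
    queue = deque([tuple(seed)])
    low, high = mu - alpha * sigma, mu + alpha * sigma
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < img.shape[0] and 0 <= c < img.shape[1]):
            continue
        if region[r, c] or not free_mask[r, c]:
            continue
        if not (low <= img[r, c] <= high):
            continue
        region[r, c] = True
        free_mask[r, c] = False            # claim the pixel for this agent
        queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return region
```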

4.2. FUSION OF INFORMATION

Because it relies on global constraints, the boundary determined by the deformable model is not accurate enough to describe variable structures such as convolutions or sulcal openings. Consequently, the delineation of the GM/CSF border performed by the multi-agent system can also be partly erroneous. The goal of this step is to build a more precise brain contour based on augmentative and confrontational cooperation between the multi-agent system and the edge detector. A new class of edge agents is introduced for this purpose, using complementary information about regions, provided by the multi-agent system, and about edges, provided by the edge detector.

4.2.1. EDGE AGENTS

The definition of the edge agents has been guided by two requirements:

• conservation of the topology of the boundary detected by the deformable model (a closed boundary);

• confrontation of the information acquired in the previous stages by the two independent information sources.

To meet these requirements, a dynamic programming approach (the A* algorithm) is used to define the behavior of this agent class. According to this behavior, the edge agents search for the best path between two extremities placed on the brain boundary as assessed by the deformable model, which guarantees topology conservation. The cost function drives the search for an optimal path between the two extremities.
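A minimal sketch of such a best-path search is shown below; it uses A* with a null heuristic (equivalent to uniform-cost search) over an 8-connected grid, since the paper does not specify the heuristic. The cost map is assumed to be the fused cost of Equation (2) described in Section 4.2.3.

```python
import heapq
import numpy as np

def best_path(cost_map, start, goal):
    """Lowest-cost 8-connected path between two boundary extremities.
    A* with a null heuristic, i.e. a uniform-cost (Dijkstra-like) search."""
    cost = np.asarray(cost_map, dtype=float)
    rows, cols = cost.shape
    start, goal = tuple(start), tuple(goal)
    counter = 0                                  # tie-breaker for the heap
    frontier = [(0.0, counter, start, None)]
    came_from, best_g = {}, {start: 0.0}
    while frontier:
        g, _, node, parent = heapq.heappop(frontier)
        if node in came_from:                    # already expanded at lower cost
            continue
        came_from[node] = parent
        if node == goal:
            break
        r, c = node
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols):
                    continue
                ng = g + cost[nr, nc]            # cost of entering the neighbor
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    counter += 1
                    heapq.heappush(frontier, (ng, counter, (nr, nc), node))
    if goal not in came_from:
        return []                                # goal unreachable
    path, node = [], goal
    while node is not None:                      # walk back to the start
        path.append(node)
        node = came_from[node]
    return path[::-1]
```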

4.2.2. COHERENCE OF THE INFORMATION SOURCES

To be confronted, the two information sources have to be coherent. The coherence is approximately satisfied, as shown in Figure 6, where the edges are mainly located at the region borders. However, it can be observed that the regions at the GM/CSF interface do not always reach the corresponding edges. This is due to the constraints applied to the region growing by the boundary of the deformable model. Again, this justifies the need for local corrections applied to the boundary.

Figure 6. Superposition of the region segmentation provided by the multi-agent system with the edge information provided by the edge detector.

The edge detector and the multi-agent system provide binary information. Because a direct matching would be too restrictive, we have chosen to combine this information in a fuzzy way. The fusion is performed on distance maps corresponding, for regions, to the distance from each pixel to the detected GM/CSF interface and, for edges, to the distance from each pixel to the detected edges. Chamfer distances (3,4) are computed with 3×3 masks. Typical distance maps are presented in Figure 7.
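The (3,4) chamfer transform can be sketched as the classical two-pass algorithm below; this is a textbook version given for illustration, not the authors' implementation.

```python
import numpy as np

def chamfer_distance_3_4(feature_mask):
    """Two-pass (3,4) chamfer distance transform with 3x3 masks.

    feature_mask: boolean image, True on the feature pixels (detected edges,
    or the GM/CSF interface). Returns an integer map approximating 3x the
    Euclidean distance to the nearest feature pixel.
    """
    mask = np.asarray(feature_mask, dtype=bool)
    big = 10 ** 6
    d = np.where(mask, 0, big).astype(np.int64)
    rows, cols = d.shape
    # Forward pass: propagate distances from top-left to bottom-right.
    for r in range(rows):
        for c in range(cols):
            for dr, dc, w in ((-1, -1, 4), (-1, 0, 3), (-1, 1, 4), (0, -1, 3)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    d[r, c] = min(d[r, c], d[nr, nc] + w)
    # Backward pass: propagate distances from bottom-right to top-left.
    for r in range(rows - 1, -1, -1):
        for c in range(cols - 1, -1, -1):
            for dr, dc, w in ((1, 1, 4), (1, 0, 3), (1, -1, 4), (0, 1, 3)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    d[r, c] = min(d[r, c], d[nr, nc] + w)
    return d
```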


Figure 7. Distance maps.

Left: based on the information provided by the edge detector, the distance from each pixel to the detected edges.

Right: based on the region classification provided by the multi-agent system, the distance from each pixel to the interface between gray matter and the cerebro-spinal fluid.

4.2.3. DETERMINATION OF THE COST FUNCTION

In the context of the fusion of fuzzy information, we have looked for an operator that takes advantage of the coherence and complementarity existing between the information sources. Among the operators proposed in [1], we have selected one whose behavior does not vary with the values to be combined, qualified as "Context Independent Constant Behavior", and that produces a compromise between the information sources. Because the sources we fuse do not have the same precision (the result of the edge detector is more precise in terms of localization than the result of the region segmentation), we introduce a weighted sum of the distance values. Such an operator has been proposed in [13] in a similar context.

The operator is given in Equation (2), where f_i(p) represents the cost associated with pixel p, (β, γ) are the weights, I_Interface(p) is the distance to the GM/CSF interface based on the region segmentation, and I_Edge(p) is the distance to the edges provided by the edge detector:

f_i(p) = (β · I_Interface(p) + γ · I_Edge(p)) / (β + γ)   (2)

In practice, the weights have been set to (1, 3) to favor the edge detector.
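Expressed as code, the fusion operator of Equation (2) reduces to a weighted, normalized sum of the two distance maps; the default weights in the sketch below follow the (1, 3) setting quoted above. The resulting map can then serve as the cost image searched by the edge agents.

```python
import numpy as np

def fused_cost(dist_to_interface, dist_to_edges, beta=1.0, gamma=3.0):
    """Eq. (2): f(p) = (beta*I_Interface(p) + gamma*I_Edge(p)) / (beta + gamma).

    dist_to_interface: distance map to the GM/CSF interface (region source).
    dist_to_edges: distance map to the detected edges (edge source).
    """
    dist_to_interface = np.asarray(dist_to_interface, dtype=float)
    dist_to_edges = np.asarray(dist_to_edges, dtype=float)
    return (beta * dist_to_interface + gamma * dist_to_edges) / (beta + gamma)
```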

Figure 8 shows the result obtained after the introduction of the edge agents

guided by our fusion operator.

Figure 8. Brain boundary detection as performed by the edge agents driven by the fusion of fuzzy information.

4.3. RETROACTION OVER SLICES

The requirement of topology conservation described in Section 4.2 makes it possible to conceive a global retroaction on the segmentation process. After the fusion step, the contour detected by the edge agents can be used as a replacement for the initial boundary detected by the deformable model. Two levels of retroaction can in fact be envisaged:

• Retroaction on the same slice. In the presence of a brain geometrically very different from those present in the initial training set, the model can be placed inside the real boundary. In such a case, some GM pixels are missed. The retroaction on the same slice detects those pixels to refine the initial contour. This step uses confrontational cooperation: the initial model-based information is replaced by the newly computed one, judged more accurate.

• Retroaction on an adjacent slice in the image volume. Currently, we do not have a 3D statistical deformable model. To overcome this intrinsic 2D limitation, we exploit the capacity of our system to propagate information to adjacent slices in the image volume. Following the detection of the brain boundary by the edge agents at a given level in the image volume, and after at least one retroaction on the same slice, we can transmit the delineated brain contour to an adjacent slice. This brain contour is then used to start the segmentation process for this new slice, instead of the one provided by the deformable model, which was built from a set of slices at another plane level. The hypothesis is that the brain geometry is almost constant across adjacent slices. A similar principle of information transmission has been proposed in [15] to successively initialize snakes through an image volume. This step uses integrative cooperation: newly computed information is transmitted to constrain the next steps of the global segmentation process. A schematic sketch of this 2.5D propagation is given below.
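The sketch only outlines the control flow of this 2.5D strategy. The function segment_slice is hypothetical and stands for the whole per-slice pipeline (specialization, region growing, fusion); the single-refinement default is also an assumption.

```python
def segment_volume(slices, deformable_model_contour, segment_slice, n_refinements=1):
    """2.5D segmentation by retroaction (schematic sketch only).

    segment_slice(image, initial_contour) is assumed to run the whole
    per-slice pipeline and to return the refined brain contour found by
    the edge agents. The model contour initializes the first slice; each
    refined contour is re-used on the same slice, then propagated to the
    adjacent slice instead of the deformable model result.
    """
    contour = deformable_model_contour
    contours = []
    for image in slices:
        for _ in range(n_refinements + 1):
            contour = segment_slice(image, contour)   # retroaction on the same slice
        contours.append(contour)                       # propagate to the adjacent slice
    return contours
```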

5. EVALUATION

Because it is impossible to establish ground truth with real images, we have used, to perform an accurate evaluation of our framework, the realistic digital brain phantom provided by the Montreal Neurological Institute (http://www.bic.mni.mcgill.ca/brainweb), which is widely used for validation purposes [6].

5.1. VALIDATION OF REGION SEGMENTATION

Following [25], we have used three coefficients to quantify the quality of our segmentation. These coefficients are based on the classic notions of True Positives (TP), False Positives (FP), True Negatives (TN) and False Negatives (FN), and are described below; a short computation sketch follows the list.

• Building Detection Percentage: BDP = TP / (TP + FN). It represents the proportion of pixels correctly classified; the ideal value is 1.

• Branching Factor: BF = FP / TP. It quantifies the over-detection; the ideal value is 0.

• Quality Percentage: QP = 100 · TP / (TP + FP + FN). It represents the overall quality of the system; the ideal value is 100.
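A minimal sketch of these coefficients, computed from a binary segmentation and the phantom's ground-truth mask for one tissue class, is given below.

```python
import numpy as np

def confusion_counts(segmented, ground_truth):
    """Count TP, FP and FN between a binary segmentation and the phantom's
    ground-truth mask for one tissue class."""
    seg = np.asarray(segmented, dtype=bool)
    ref = np.asarray(ground_truth, dtype=bool)
    tp = int(np.sum(seg & ref))
    fp = int(np.sum(seg & ~ref))
    fn = int(np.sum(~seg & ref))
    return tp, fp, fn

def segmentation_scores(tp, fp, fn):
    """Coefficients used in the evaluation (after [25]):
    BDP = TP/(TP+FN), BF = FP/TP, QP = 100*TP/(TP+FP+FN)."""
    bdp = tp / (tp + fn)
    bf = fp / tp
    qp = 100.0 * tp / (tp + fp + fn)
    return bdp, bf, qp
```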

Table 1 indicates the results obtained for six slices with fusion of information

and without retroaction. Table 2 indicates for the same slices the results

obtained using retroaction.

Table 1: Results of the segmentation for 6 adjacent slices without retroaction

Slice   BF GM    BDP GM   BF WM    BDP WM   QP (%)
1       0.11     0.96     0.01     0.92     96.3
2       0.12     0.97     0.01     0.92     96.43
3       0.10     0.96     0.02     0.94     96.64
4       0.11     0.96     0.02     0.93     96.4
5       0.12     0.96     0.02     0.94     96.23
6       0.11     0.95     0.02     0.94     96.17
Mean    0.11     0.96     0.016    0.93     96.36
SD      ±0.007   ±0.006   ±0.005   ±0.01    ±0.17

Abbreviations: GM = gray matter, WM = white matter, BF = branching factor, BDP = building detection percentage, QP = quality percentage, SD = standard deviation.

Table 2: Results of the segmentation for 6 adjacent slices using retroaction starting from slice 1

Slice   BF GM    BDP GM   BF WM    BDP WM   QP (%)
1       0.12     0.96     0.01     0.93     96.3
2       0.12     0.96     0.01     0.92     96.33
3       0.11     0.95     0.01     0.93     96.25
4       0.11     0.95     0.02     0.93     96.24
5       0.10     0.95     0.02     0.94     96.19
6       0.11     0.94     0.02     0.94     96.94
Mean    0.11     0.95     0.015    0.93     96.37
SD      ±0.007   ±0.007   ±0.005   ±0.007   ±0.28

Abbreviations: GM = gray matter, WM = white matter, BF = branching factor, BDP = building detection percentage, QP = quality percentage, SD = standard deviation.

5.2. VALIDATION OF EDGE DETECTION

The index, expressed in %, used to validate our edge detection is: true edge pixels / (true edge pixels + false edge pixels). The true edge pixels were obtained from the digital brain phantom. Table 3 indicates the values of this index when the segmentation was performed without fusion (deformable model only) and with fusion, with (β, γ) equal to (3, 1).


Table 3: Evaluation of the contour adequacy for four images

Two methods are compared: the model-based detection and the fusion-based detection with (β, γ) = (3, 1). The index is true edge pixels / (true edge pixels + false edge pixels).

Image   Model only      Fusion A* (3, 1)
1       0.73 ± 0.02     0.88 ± 0.06
2       0.73 ± 0.02     0.9 ± 0.01
3       0.70 ± 0.03     0.82 ± 0.02
4       0.6 ± 0.06      0.71 ± 0.04

6. DISCUSSION

In the context of MRI brain scan segmentation, we propose a framework to operationalize cooperation between several heterogeneous sources of information. Cooperation is often restricted 1) to the sequential ordering of specialized techniques, each responsible for a part of the segmentation process [15, 28, 30], or 2) to the exchange of low-level information contained in the images (gray level, gradient or variance) [19]. Centered on a multi-agent system, our framework introduces three modes of cooperation: integrative, augmentative and confrontational. These modes allow the mixing of heterogeneous information (model-based or data-driven) to dynamically constrain the pixel-growing agents. The results obtained (see Tables 1 and 2) are encouraging, with a mean quality percentage of 96%, and demonstrate the interest of the approach. The branching factor indicates that GM is over-segmented. Because GM is segmented first and no reclassification of pixels has been introduced, WM is consequently under-segmented. During the retroaction process, the transmission of information is reliable (see Table 2). However, due to the efficiency of the 2D statistical deformable model, the interest of the retroaction is not demonstrated here.

The design of this model requires considerable user intervention for the manual placement of landmarks on the brain contours that constitute the training set. Its extension to 3D raises theoretical problems. Techniques based on morphological operations, less powerful but sufficient for a rough isolation of the brain contour, could advantageously replace the deformable model. In this case, retroaction could be an efficient technique to refine the initial contour. The fusion operator, which favors the non-interpreted information provided by the edge detector, improves the edge detection compared to the use of the deformable model only (see Table 3). This directly demonstrates the benefit of the confrontational mode of cooperation. In preliminary experiments, GM and WM agents were launched in the image without topological constraints and poor results were obtained. The integrative mode of cooperation between the deformable model and the multi-agent system was central to the improvement of the segmentation process. Finally, for the augmentative mode of cooperation, there appears to be an optimal, relatively stable number of agents beyond which no further gain is obtained.

Clearly, each individual component currently integrated in our framework can be improved. The choice of the Estimation/Maximization algorithm for the calculation of the behavior of each agent is not fully satisfactory. Thanks to the modular construction of the framework, more sophisticated statistical methods could easily be introduced and tested.

The behavior of the edge agents is based on the A* algorithm to maintain a closed-boundary topology during the retroaction process. Contrary to the deformable model, these agents are not constrained by global deformations and are thus more capable of delineating brain convolutions. No rigidity constraints are applied, as is the case with snakes, energy-minimizing elastic curves that are efficient for localizing edges [16]. However, their extension to 3D is problematic. Approaches for 3D brain structure modeling [18, 24] could be explored.

Currently, agents communicate via shared memory. To reinforce the cooperation, direct synchronous or asynchronous communication between the agents has to be introduced. Interactions between agents are useful to dynamically control the segmentation process and to take into account the partial volume effects inherent to MRI acquisition. The segmentation of gray matter and white matter could then be performed concurrently, instead of sequentially, lowering over- and under-segmentation by dynamic confrontation of the agents' results. Note that, contrary to our approach, some authors [8, 28] choose, essentially for topological reasons and because of the greater homogeneity of WM compared to GM, to segment WM first and then start the GM segmentation from the surface of the WM.

To conclude, the next step of our work is the extension of the framework to 3D segmentation in a fully distributed environment.


ACKNOWLEDGMENTS: Laurence Germond has a PhD grant from the

French Ministry of Research and Technology. Financial support from the

European Commission (ALLIANCE project) is gratefully acknowledged. We

wish to thank Professor Alain Chehikian for his stimulating advice and for

providing us with his software for edge detection.

7. REFERENCES

[1] Bloch I. Information combination operators for data fusion: A comparative

review with classification, IEEE Trans. Systems, Man, Cybernetics. 1996;26:52-

67.

[2] Bozma I and Duncan JS. A game theoretic approach to integration of

modules, IEEE Trans. Patt. Anal. Mach. Intell. 1994;16:1074-1086.

[3] Chehikian A. Filtres récursifs pour l'estimation du gradient et de la

détection des contours par interpolation de splines, T.S. 1997;14:29-42.

[4] Chiou G and Hwang JN. A neural network-based stochastic active contour

model (nns-snake) for contour finding of distinct features, Im. Vis. Comput.

1995;4:1407-1416.

[5] Clarke LP, Velthuisen RP, Camacho MA, Heine JJ, Vaidyanathan M, Hall

LO, Thatcher RW and Silibiger ML. MRI segmentation: Methods and

applications, M.R.I. 1995;13:343-368.

[6] Collins D, Zijdenbos A, Kollokian V, Sled J, Kabani N, Holmes C and Evans

A. Design and construction of a realistic digital brain phantom, IEEE Trans.

Med. Imag. 1998;17:463-468.


[7] Cootes TF, Hill A, Taylor CJ and Haslam J. The use of active shape models

for locating structures in medical images, Im. Vis. Comput. 1995;12:335-365.

[8] Dale A, Fischl B and Sereno M. Cortical surface-based analysis I:

segmentation and surface reconstruction, NeuroImage. 1999;9:179-194.

[9] Davatzikos C and Prince JL. An active contour model for mapping the

cortex, IEEE Trans. Med. Imag. 1995;14:65-80.

[10] Duta N and Sonka M. Segmentation and interpretation of MR brain images,

IEEE Trans. Med. Imag. 1998;17:1049-1062.

[11] Held K, Kopps ER, Krause BJ, Wells WM, Kikinis R and Muller-Gartner

HW. Markov random field segmentation of brain MR images, IEEE Trans.

Med. Imag. 1997;16:878-886.

[12] Hoc J. Supervision et contrôle de processus: La cognition en situation

dynamique. PUF, Grenoble: 1996.

[13] Huet-Guillemot F, Fusion d'images segmentées et interprétées. Application

aux images aériennes, PhD Thesis, Cergy Pontoise (Fr), 1999.

[14] Jaggi C, Segmentation par méthode markovienne de l'encéphale humain

par résonance magnétique, PhD Thesis, Caen University, 1998.

[15] Kapur T, Eric W, Grimson L, Wells WM and Kikinis R. Segmentation of

the brain tissue from magnetic resonance images, Med. Im. Anal. 1996;1:109-

127.

[16] Kass M, Witkin A and Terzopoulos D. Snakes: Active contour models, Int.

J. Comput. Vision. 1988;Jan:321-331.


[17] Laidlaw DH, Fleischer KW and Barr AH. Partial-volume bayesian

classification of material mixtures in MR volume data using voxel histograms,

IEEE Trans. Med. Imag. 1998;17:74-86.

[18] Le Goualher G, Procyk E, Collins D, Venugopal R, Barillot C and Evans A.

Automated extraction and variability analysis of sulcal neuroanatomy, IEEE

Trans. Med. Imag. 1999;18:206-217.

[19] Lecornu L, Roux C and Jacq J. Suivi bilatéral des structures vasculaires

dans les images d'angiographie et reconstruction 3D des branches principales

du réseau vasculaire coronarien à partir de deux projections angiographiques,

T.S. 1998;15:395-410.

[20] Liu J and Tang YY. Adaptative image segmentation with distributed

behavior-based agents, IEEE Trans. Patt. Anal. Mach. Intell. 1999;21:544-551.

[21] McInerney T and Terzopoulos D. Deformable models in medical images

analysis: a survey, Med. Im. Anal. 1996;1:91-108.

[22] Meyer CR, Bland PH and Pipe J. Retrospective correction of intensity

inhomogenities in MRI, IEEE Trans. Med. Imag. 1995;14:36-41.

[23] Rajapakse J, Giedd J and Rapoport J. Statistical approach to segmentation

of single-channel cerebral MR images, IEEE Trans. Med. Imag. 1997;16:176-186.

[24] Sandor S and Leahy R. Surface-based labeling of cortical anatomy using a

deformable atlas, IEEE Trans. Med. Imag. 1997;16:41-54.


[25] Shufelt JA. Performance evaluation and analysis of monocular building

extraction from aerial imagery, IEEE Trans. Patt. Anal. Mach. Intell.

1999;21:311-326.

[26] Sled JG, Zujdenbos AP and Evans A. A non parametric method for

automatic correction of non uniformity in MRI data, IEEE Trans. Med. Imag.

1998;17:87-97.

[27] Székely G, Brechbuler C, Kelemen A and Gerig G. Segmentation of 2d and

3d objects from MRI volumes using constrained elastic deformations of flexible

Fourier contour and surface models, Med. Im. Anal. 1996;1:19-34.

[28] Teo PC, Sapiro G and Wandell BA. Creating connected representations of

cortical gray matter for functional MRI visualization, IEEE Trans. Med. Imag.

1997;16:852-863.

[29] Vaillant M and Davatzikos C. Finding parametric representation of the

cortical sulci using active contour models, Med. Im. Anal. 1996;1:295-315.

[30] Warfield S, Dengler J, Zaers J, Guttmann CR, Wells WMr, Ettinger GJ,

Hiller J and Kikinis R. Automatic identification of gray matter structures from

MRI to improve the segmentation of white matter lesions, J Image Guid. Surg.

1995;1:326-38.

[31] Wells WL, Grimson L, Kikinis R and Jolesz FA. Adaptative segmentation

of MRI data, IEEE Trans. Med. Imag. 1996;15:429-442.

[32] Zijdenbos AP and Dawant BM. Brain segmentation and white matter

lesion detection in MR images, Crit. Rev. Bio. Eng. 1994;22:401-465.
