



Clustering on Image Boundary Regions for Deformable Model Segmentation

Joshua Stough, Stephen M. Pizer, Edward L. Chaney, Manjari Rao
Medical Image Display & Analysis Group, University of North Carolina at Chapel Hill

2. Problem: Boundary Region Variability

3. Our Approach: Build a Locally Varying Template

1. M-reps and Image Match

4. Results

5. Future Research, Current Additions

Goal: A robust image template capturing the dominant intensity pattern at each object boundary position in a set of training images.

• Surrounding anatomical objects affect local cross-boundary intensity profiles (fig. 3).

• An object's consistent anatomical context implies local profile types, e.g., grey-to-light, grey-to-dark, and grey-dark-grey (figs. 4, 5).

• Object-based coordinates, via m-reps, provide explicit correspondence between deformations of the same model (fig. 2).

• Target image I is a dense collection of local cross-boundary profiles (fig. 1, 4). The template is the collection of expected profiles.

• Image match is approximated by the normalized correlation of I with the template. Under simplifying assumptions, maximizing this correlation also maximizes log p(I | m) (equivalently, minimizes -log p(I | m)), the log-probability of the target image given the model m.
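As a rough illustration of the match term above, normalized correlation of a sampled cross-boundary profile with a template profile can be computed directly. This is a minimal NumPy sketch, not the poster's implementation; the function names are hypothetical:

```python
import numpy as np

def normalized_correlation(profile, template):
    """Normalized correlation between a cross-boundary intensity
    profile and a template profile (1-D arrays of equal length)."""
    p = profile - profile.mean()
    t = template - template.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(t)
    return float(p @ t / denom) if denom else 0.0

def image_match(profiles, templates):
    """Mean normalized correlation over all boundary positions;
    `profiles` and `templates` are (n_positions, n_samples) arrays."""
    return float(np.mean([normalized_correlation(p, t)
                          for p, t in zip(profiles, templates)]))

# A grey-to-light step matches itself perfectly and anti-matches
# its mirror image.
step = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
print(normalized_correlation(step, step))        # ~ 1.0
print(normalized_correlation(step, step[::-1]))  # ~ -1.0
```

Because each profile is mean-centered and scaled by its norm, the score depends only on the intensity pattern's shape, not on its offset or contrast.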

Train the Profile Types

Given: a set of expected profile types (analytic or intuitive initial profiles) and the normalized profiles from the training images.

1. Cluster the training profiles: bin each profile with the type it has the highest normalized correlation with (fig. 7).

2. Set each type to the average of its bin.

3. Repeat steps 1 and 2 until few profiles change bins between iterations (fig. 6).

Build the Template from the Profile Types

Given: the trained profile types and the training profiles.

• For each point on the m-rep surface:

• Consider the family of training profiles at that point (fig. 8). Record the normalized correlation of each profile to the types.

• Sum the responses for each type. Select for that point the type with the highest total score (fig. 10).

• Set the template to be the ordered collection of selected types (fig. 11, 12).

• Human kidney

• Left and right trained separately.

• 52 CT scans for training, 12 for testing.

• Improved average surface distance relative to expert segmentations in 31 of 48 possible comparisons versus Gaussian derivative template (fig. 13).

• Improvement in segmentation automation.
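The evaluation metric named above, average surface distance to the expert segmentation, can be illustrated with a brute-force point-set version. This is a sketch only; actual evaluations typically use densely sampled boundary meshes or distance transforms:

```python
import numpy as np

def avg_surface_distance(a, b):
    """Symmetric average surface distance between two surfaces, each
    sampled as an (n, 3) array of boundary points (brute force)."""
    # Pairwise distances; for every point on one surface take the
    # distance to the nearest point on the other, averaged both ways.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Two parallel unit squares 1 mm apart are 1 mm from each other.
plane = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 0.]])
print(avg_surface_distance(plane, plane + [0., 0., 1.]))  # 1.0
```

Averaging in both directions keeps the measure symmetric, so neither the automatic nor the expert surface is privileged.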

Paper: Proceedings, 2004 IEEE International Symposium on Biomedical Imaging (ISBI '04).

Acknowledgments: Conceptual, algorithmic, and code contributions from Tom Fletcher, Gregg Tracton, and Graham Gash. This work was funded by NIH-NIBIB P01 EB002779.

• Per-profile typical intensity reward (fig. 14).

• Target image normalization based on training data.

• Mixture model to improve generality, both correlative and covariance based (fig. 15).

• Improved geometric correspondence through minimizing intensity variance.

• Multiple intensity scales, blurring along the surface.

• Application to multi-object models (fig. 16).

Fig. 1: Profile sample positions

Fig. 3: What intensities lie outside the kidney? (axial and coronal CT slices)

Fig. 2: Atom grid providing object-based coordinates, and the resulting surface

Fig. 4: Cross-boundary profiles (running from inside to outside) from a right kidney CT, with no particular order of the points

Fig. 8: Profiles at a single point over many training cases

Fig. 10: Responses by case for the fig. 8 profiles, yielding the light-to-dark (green) type

Fig. 9: The three selectable profile types (see fig. 6).

Fig. 11: Human kidneys colored according to the profile type selected at each point. Note the grey-to-light type where the liver often abuts.

Fig. 12: Kidney profile choice relative to CT slice. Green: grey-to-dark, red: grey-to-light, blue: grey-dark-grey.

Fig. 6: Cluster centers (means) at iterations 0, 1, and 2

Fig. 7: The training profiles, grouped by cluster. The means ± 2σ are highlighted.

Fig. 13: Segmentation results vs. template. Dark: Gaussian derivative; light: clustered

Fig. 14: Histogram of profile averages, over all positions and kidneys

Fig. 15: Profiles vs. the first two principal components

Fig. 16: Multi-object male pelvis model in CT: bladder, pubic bone, prostate, and rectum


Fig. 5: Intensity profile variability