


IMAGE LABELING BY ENERGY MINIMIZATION WITH APPEARANCE AND SHAPE PRIORS

By

Asem Mohamed Ahmed Ali
B.Sc. 1999, M.Sc. 2002, EE, Assiut University

A Dissertation
Submitted to the Faculty of the
Graduate School of the University of Louisville
in Partial Fulfillment of the Requirements
for the Degree of

Doctor of Philosophy

Department of Electrical and Computer Engineering
University of Louisville
Louisville, Kentucky

May 2008


IMAGE LABELING BY ENERGY MINIMIZATION WITH APPEARANCE AND SHAPE PRIORS

By

Asem Mohamed Ahmed Ali
B.Sc. 1999, M.Sc. 2002, EE, Assiut University

A Dissertation Approved on

by the Following Reading and Examination Committee:

Aly A. Farag, Ph.D., Dissertation Director

Thomas L. Starr, Ph.D.

Tamer Inanc, Ph.D.

Mohamed N. Ahmed, Ph.D.

Prasanna Sahoo, Ph.D.

Georgy L. Gimel'farb, Ph.D.


DEDICATION

To the greatest women in my life, my mother, my wife, and my daughter.

& to the soul of my father.


ACKNOWLEDGMENTS

First of all, my deepest thanks are due to Almighty God, the merciful, the compassionate, for uncountable gifts given to me.

I would like to express my deepest gratitude to my advisor, Prof. Aly A. Farag, for giving me the opportunity to be a member of his research group, for his continuous encouragement, and for his support over the course of this work. He provided a very rich working environment with many opportunities to develop new ideas, work on promising applications, gain experience in diverse areas, and meet well-known people in the field.

I would like to thank Prof. Thomas L. Starr for giving me the opportunity to be a member of his research group on the face recognition project, and for useful discussions.

I would like to thank Prof. Georgy Gimel'farb for useful discussions and assistance. He has never hesitated to share his experience in the fields of Markov-Gibbs random fields, image processing, and computer vision. He has been a good teacher.

I would like to thank Dr. Tamer Inanc, Dr. Mohamed Ahmed, Prof. Prasanna Sahoo, and Dr. Moumen Ahmed for agreeing to be on my dissertation committee, for the useful consultation, and for the fruitful discussions.

I would like to thank Dr. Ayman El-Baz for useful discussions and for his assistance in publishing in respected conferences. He has never hesitated to share his experience in Markov-Gibbs random fields and image processing. He has been a good teacher and a great friend.

I would like to thank all the members of the Computer Vision and Image Processing Laboratory at the University of Louisville, both past and present. Special thanks to Mr. Mike Miller for his continuous dedication to help and for his support in hard times. Also, I would like to thank Mr. Chuck Sites for his valuable technical help during the work on the lab projects.

Very special thanks to my family for their encouragement and support, without which this dissertation and research would not have been possible. My deepest gratitude to my mother, my sisters, my brother, and my lovable daughter Menah. Finally, words cannot describe how indebted I am to my mother and my wife for the pain, suffering, and sacrifices they endured during the journey of this study.

ABSTRACT

IMAGE LABELING BY ENERGY MINIMIZATION WITH APPEARANCE AND SHAPE PRIORS

Asem Mohamed Ahmed Ali

April 14, 2008

This work addresses the modeling and analysis of images, in particular labeling problems applied to image segmentation and restoration. The objective of this work is to develop accurate mathematical models that combine image appearance (i.e., pixel intensities and spatial interaction between the pixels) and shape information in order to describe objects of interest in the images. The intensity model estimates the marginal density for each class in the image under consideration. A new unsupervised technique based on maximizing a derived joint likelihood function is proposed to model these marginal densities by Gaussian distributions. The estimate is refined by adding sign-alternating Gaussian components using the modified expectation-maximization algorithm [1]. Spatial interaction, which describes the relation between pixels in each class, is modeled using a Markov-Gibbs random field (MGRF) with a Potts prior. The Gibbs potential is chosen to be asymmetric, which offers more chances to guarantee that the Gibbs energy function is submodular, so that it can be minimized using a standard graph cuts approach in polynomial time. Unlike conventional approaches, the parameter of the proposed model is estimated analytically. The estimates are derived in line with the maximum likelihood approach by Gimel'farb [2]. Statistical results highlight the robustness of the proposed analytical estimation approach over conventional methods. Finally, the shape variations between an object and its candidates are modeled using a new probabilistic model based on a Poisson distribution.
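The intensity model above ultimately approximates each class's marginal density with Gaussian components fitted by expectation maximization. As a minimal sketch of that idea only, the following fits a plain two-component 1D Gaussian mixture with standard EM; it does not implement the joint-likelihood initialization or the sign-alternating refinement described above, and all data and names here are illustrative.

```python
import math
import random

def em_gaussian_mixture(data, n_iter=200):
    """Standard EM for a two-component 1D Gaussian mixture (illustrative only)."""
    # crude initialization from the data spread
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sample
        resp = []
        for x in data:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, data)) / nk)
    return pi, mu, var

# synthetic bimodal data: two classes with hypothetical means 0 and 6
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(500)] + \
       [random.gauss(6.0, 1.0) for _ in range(500)]
pi, mu, var = em_gaussian_mixture(data)
```

On such well-separated data the recovered means land near the true modes, which is the role the marginal-density estimate plays as an initial labeling.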

The proposed models can be used to boost the performance of known pixel labeling techniques. In this connection, one of the frameworks proposed in this dissertation is an unsupervised maximum-a-posteriori (MAP) framework for labeling N-dimensional (N-D) grayscale images. In this framework, the input image and its labeling are modeled by a conventional joint Markov-Gibbs random field (MGRF) of independent N-D signals and locally interdependent pixel labels. To produce a good initial labeling, or pre-labeled image, each empirical marginal distribution of signals is closely approximated by the proposed intensity model. Then, the standard graph cuts approach based on large iterative α-expansion moves in the label space is used to refine the initial labeling under the MGRF model with analytically estimated potentials. Experimental results on synthetic and real grayscale multimodal images show that, without optimizing any tuning parameters, the proposed approach is fast, robust to noise, and accurate compared with state-of-the-art algorithms.
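The graph-cuts refinement step is easiest to see in the binary case, where a labeling energy with unary data terms plus a Potts smoothness term is minimized exactly by a single min-cut. The sketch below is a rough stand-in for the framework above: it builds the usual two-terminal graph for a toy 1D signal with hypothetical quadratic data costs and solves it with a plain Edmonds-Karp max-flow, not the N-D α-expansion machinery used in the dissertation.

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max-flow; capacity is a dict-of-dicts residual graph (mutated)."""
    flow = 0
    while True:
        # BFS for a shortest augmenting path from s to t
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in capacity[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # push the bottleneck capacity along the path, updating residuals
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(capacity[u][v] for u, v in path)
        for u, v in path:
            capacity[u][v] -= b
            capacity[v].setdefault(u, 0)
            capacity[v][u] += b
        flow += b

def binary_label(signal, mu0, mu1, lam):
    """Binary MAP labeling: quadratic data terms + Potts smoothness, via min-cut."""
    n, S, T = len(signal), 'S', 'T'
    cap = {S: {}, T: {}}
    for i in range(n):
        cap[i] = {}
    for i, x in enumerate(signal):
        cap[S][i] = float((x - mu1) ** 2)   # t-link cut when pixel i takes label 1
        cap[i][T] = float((x - mu0) ** 2)   # t-link cut when pixel i takes label 0
    for i in range(n - 1):                  # Potts n-links, cut when labels differ
        cap[i][i + 1] = lam
        cap[i + 1][i] = lam
    max_flow(cap, S, T)
    # pixels still reachable from S in the residual graph take label 0
    reach, q = {S}, deque([S])
    while q:
        u = q.popleft()
        for v, c in cap[u].items():
            if c > 0 and v not in reach:
                reach.add(v)
                q.append(v)
    return [0 if i in reach else 1 for i in range(n)]

labels = binary_label([0.1, 0.0, 0.2, 0.9, 1.1, 1.0], mu0=0.0, mu1=1.0, lam=0.3)
```

The smoothness weight `lam` plays the role of the Potts potential; here it is a made-up constant, whereas the framework above estimates such potentials analytically.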

Pairwise MRF models are popular in computer vision because efficient and successful solvers exist for them. However, pairwise MRFs cannot capture the rich statistics that can be modeled with higher-order MRFs. Using higher-order cliques could improve the image model, but the optimization algorithms for these models have too high a time complexity to be practical. This dissertation proposes an efficient transformation that reduces higher-order energies with binary variables to quadratic ones. Therefore, the well-established approaches that have been successfully used to solve pairwise energies can also be used to solve such higher-order ones. The use of the proposed approach is demonstrated on the segmentation of color images, where it shows encouraging results, and the proposed framework can be used to solve many other computer vision problems.
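One concrete, well-known instance of reducing a higher-order binary term to a quadratic one (the classical negative-coefficient reduction, not necessarily the transformation proposed here) replaces a cubic term a·x1·x2·x3 with a < 0 by a minimization over one auxiliary binary variable w of the quadratic expression a·w·(x1 + x2 + x3 − 2). The brute-force check below verifies the identity over all binary assignments.

```python
from itertools import product

def cubic_term(a, x):
    """Third-order term a * x1 * x2 * x3 over binary variables x."""
    return a * x[0] * x[1] * x[2]

def reduced_quadratic(a, x, w):
    """Quadratic-in-(x, w) substitute for a * x1 * x2 * x3, valid when a < 0."""
    return a * w * (x[0] + x[1] + x[2] - 2)

a = -3.0  # any negative coefficient works
for x in product((0, 1), repeat=3):
    # minimizing out the auxiliary variable w recovers the cubic term exactly
    assert cubic_term(a, x) == min(reduced_quadratic(a, x, w) for w in (0, 1))
```

Since the substitute is quadratic in the original and auxiliary variables, the resulting energy can be handed to any pairwise solver, which is the point of such reductions.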

In order to account for image non-homogeneities outside the domain of uniform spatial interaction assumed by the MGRF model, a new shape prior is proposed. The prior is learned from a set of training shapes by estimating the variations of the shapes. This process uses a new probabilistic distance model such that the marginal distributions of an object and its background are each approximated with a linear combination of a Poisson distribution and sign-alternating Gaussians. First, an initial image is aligned with the training set using this distance model. Then, a new energy function is built by combining the above object and background appearance models with the probabilistic shape model. The optimal labeling is obtained using min-cut techniques to approximate the global minimum of the energy function. Experiments show that the use of the shape prior considerably improves the accuracy of graph cuts based image segmentation.
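As a toy sketch of the Poisson ingredient of the distance model (the actual prior combines a Poisson distribution with sign-alternating Gaussians and is learned from aligned training shapes), the snippet below fits a Poisson pmf to hypothetical integer pixel distances by maximum likelihood, whose rate estimate is simply the sample mean.

```python
import math

def poisson_pmf(k, lam):
    """P(K = k) for a Poisson distribution with rate lam."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

def fit_poisson(samples):
    """The maximum-likelihood rate for i.i.d. Poisson samples is the sample mean."""
    return sum(samples) / len(samples)

# hypothetical distances (in pixels) from object points to a template boundary
distances = [0, 1, 1, 2, 2, 2, 3, 3, 4, 2]
lam = fit_poisson(distances)
density = [poisson_pmf(k, lam) for k in range(6)]
```

The fitted pmf can then serve as one component of a marginal distribution over distances, with further components refining it, in the spirit of the model described above.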


TABLE OF CONTENTS

DEDICATION . . . iii
ACKNOWLEDGMENTS . . . iv
ABSTRACT . . . vi
TABLE OF CONTENTS . . . ix
LIST OF TABLES . . . xiii
LIST OF FIGURES . . . xiv
List of Algorithms . . . xx

CHAPTER

I. INTRODUCTION . . . 1

   A. Image Modeling . . . 1
      1. Intensity Model . . . 3
      2. Spatial Interaction Model . . . 5
         a. Gaussian random fields . . . 5
         b. Fractal Model . . . 6
         c. Fourier transform . . . 6
         d. Markov Random Field . . . 6
      3. Shape Model . . . 8
   B. Image Labeling . . . 9
   C. Why This Work Is Needed . . . 10
   D. Dissertation Contributions . . . 10
   E. Document Layout . . . 11

II. MARKOV-GIBBS RANDOM FIELD AND LABELING PROBLEM . . . 12

   A. Markov-Gibbs Random Field (MGRF) . . . 12
      1. Gibbs Random Fields . . . 14
      2. Markov Random Fields . . . 16
   B. MGRF Models . . . 17
      1. Auto-Models . . . 17
   C. MGRF-Based Image Labeling . . . 18
      1. Maximum-A-Posteriori Estimation . . . 18
   D. Energy Minimization Techniques . . . 20
      1. Simulated Annealing (SA) . . . 20
      2. Iterated Conditional Modes (ICM) . . . 21
      3. Max-Product Belief Propagation (BP) . . . 21
      4. Tree-Reweighted Message Passing (TRW) . . . 22
   E. Graph Cuts . . . 22
      1. Graphs . . . 23
      2. Min-Cut/Max-Flow problems . . . 23
      3. Expansion Moves Algorithm . . . 25
   F. Extended Roof Duality . . . 25
      1. Roof Duality . . . 26
      2. Probing Method . . . 27

III. MGRF PARAMETERS ESTIMATION . . . 29

   A. Related Works . . . 30
      1. Coding Estimation . . . 30
      2. Least Square Error method (LSQR) . . . 31
      3. Parameter Estimation Using Co-occurrence Probability . . . 31
      4. Analytical method for Potts model . . . 32
      5. Others . . . 32
   B. The Proposed Approach For Parameter Estimation . . . 33
      1. Pairwise Clique Potential Estimation . . . 34
      2. Triple Clique Potential Estimation . . . 37
   C. Experiments . . . 37
   D. Conclusions . . . 43

IV. A NOVEL UNSUPERVISED GRAPH CUTS APPROACH FOR N-D MULTIMODAL IMAGE LABELING . . . 46

   A. Introduction . . . 46
   B. The Conditional Image Model . . . 49
      1. Dominant Modes Estimation . . . 51
   C. Graph Cuts-based Optimal Labeling . . . 59
   D. Experiments and Discussion . . . 61
      1. Ground Truth Experiments . . . 63
      2. Real Images and Applications . . . 65
         a. Lung Segmentation . . . 70
   E. Conclusions . . . 75

V. OPTIMIZING BINARY MRFs WITH HIGHER ORDER CLIQUES . . . 78

   A. Introduction . . . 78
   B. Preliminaries . . . 80
   C. Polynomial Forms of Clique Potentials . . . 81
      1. Cliques Of Size Two . . . 81
      2. Cliques Of Size k . . . 82
   D. Energy Reduction - the Proposed Approach . . . 85
      1. Efficient Implementation . . . 90
   E. Experimental Results . . . 92
   F. Conclusions . . . 99

VI. A NOVEL SHAPE REPRESENTATION AND APPLICATION FOR IMAGE SEGMENTATION . . . 100

   A. Introduction . . . 100
   B. Proposed Segmentation Framework . . . 102
      1. Shape Model Construction . . . 102
      2. Distance Probabilistic Model . . . 105
      3. Graph Cuts-based Optimal Segmentation . . . 105
   C. Experiments . . . 107
   D. Validation . . . 109
   E. Conclusions . . . 112

VII. STEREO MATCHING-BASED HUMAN FACES RECONSTRUCTION . . . 114

   A. Introduction . . . 114
   B. Stereo Matching . . . 117
      1. Human Faces Reconstruction . . . 119

VIII. CONCLUSION AND FUTURE WORK . . . 123

   A. Directions for Future Research . . . 124

REFERENCES . . . 126

APPENDIX

I. NOMENCLATURE . . . 132

CURRICULUM VITAE . . . 135

A. CONTACT INFORMATION . . . 135


B. RESEARCH INTERESTS . . . . . . . . . . . . . . . . . . . . . . . . . 135

C. EDUCATION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

D. HONORS AND AWARDS . . . . . . . . . . . . . . . . . . . . . . . . 135

E. PUBLICATIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

F. REVIEWER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

G. SOFTWARE PROGRAMMING . . . . . . . . . . . . . . . . . . . . . . 136

H. BIOGRAPHY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

I. LANGUAGES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

J. MEMBERSHIP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137


LIST OF TABLES

TABLE PAGE

1. Accuracy of the proposed parameter estimation method for binary images of size 128×128 . . . 38

2. Accuracy of the proposed parameter estimation method for images of size 64×64 . . . 39

3. Accuracy of the proposed parameter estimation method for images of size 128×128 . . . 40

4. Accuracy of the proposed parameter estimation method for images of size 256×256 . . . 41

5. Accuracy of the proposed parameter estimation method for images of size 512×512 . . . 42

6. Graph edge weights . . . 61

7. 1D statistical results . . . 63

8. Accuracy and time performance of the proposed segmentation approach on 7 data sets in comparison to ICM and IT; average volume 256×256×77 . . . 75

9. Graph edge weights . . . 107

10. Accuracy of the proposed segmentation approach on 33 slices in comparison to graph cuts without shape and the threshold technique . . . 112

11. Stereo setup parameters . . . 120


LIST OF FIGURES

FIGURE PAGE

1. Different image types: (a) a single 2D video frame of a real 3D scene, (b) a remotely sensed image of the Earth's surface, (c) a 2D slice of a 3D computed tomography image, (d) a magnetic resonance image, (e) an ultrasound image, and (f) an X-ray image . . . 2

2. A binary image of a Dalmatian dog in a background of leaves. Observers combine the intensity and interaction information of the input image with the notion of what a Dalmatian looks like to correctly recognize the dog. Courtesy of Cremers [3] . . . 8

3. Two examples of the first order neighbors for p and q . . . 13

4. The neighborhood systems up to fifth order for a pixel p . . . 14

5. The clique types of the second order neighborhood and their different potential parameters . . . 15

6. An example of an undirected graph: the image's pixels (a-i) are the graph's nodes. n-links are constructed for a 4-neighborhood system; t-links connect pixels with the terminals . . . 24

7. Examples of cuts on a graph. (a), (b), and (c) are valid cuts. (d) is an invalid cut; it does not separate the terminals since there exists a path s, a, d, e, h, t. (e) is an invalid cut; it has a subset (a, d), (b, e), (c, f) that gives a valid cut . . . 24

8. The probing method: (a) the output of roof duality with unlabeled sites; (b) and (c) the outputs of roof duality after fixing site p to 0 and 1, respectively. It can be concluded that fq is always one, so its optimal label is 1, and fr follows fp. Therefore, sites q and r can be eliminated by letting f∗q = 1 and f∗r = f∗p . . . 27

9. Besag's scheme for coding sites: (a) first order model and (b) second order model . . . 30

10. Different types of potential function . . . 33


11. Samples of synthesized binary images of size 128×128 . . . 38

12. Samples of synthesized images of size 64×64 . . . 39

13. Samples of synthesized images of size 128×128 . . . 40

14. Samples of synthesized images of size 256×256 . . . 41

15. Samples of synthesized images of size 512×512 . . . 42

16. Results of the proposed method for estimating anisotropic Potts model parameters for images of size 128×128: (a) [25.6(0.05) 0.003(0.08) 0.001(0.08) 0.002(0.08)], (b) [0.003(0.09) 25.6(0.05) 0.004(0.09) 0.001(0.09)], (c) [0.001(0.08) 0.001(0.09) 0.005(0.09) 25.9(0.05)], and (d) [0.01(0.09) 0.01(0.09) 25.9(0.05) 0.01(0.09)] . . . 44

17. Samples of synthesized images with 32 colors and high order cliques: (a) sample realizations generated with γ = 5 and (b) sample realizations generated with γ = 10 . . . 45

18. User seeds for a kidney slice of CE-MR angiography. Courtesy of Boykov and Jolly [4] . . . 50

19. The effect of the user input on the final labeled image. Courtesy of Boykov and Funka-Lea [5] . . . 50

20. pAIC result for a bimodal synthetic image: (a) empirical and estimated densities, and the 2 Gaussian components; (b) the log-likelihood (maximum at 2) . . . 54

21. pAIC result for a 3-modal synthetic image: (a) empirical and estimated densities, and the 3 Gaussian components; (b) the log-likelihood (maximum at 3) . . . 54

22. pAIC result for a 4-modal synthetic image: (a) empirical and estimated densities, and the 4 Gaussian components; (b) the log-likelihood (maximum at 4) . . . 55

23. pAIC result for a 5-modal synthetic image: (a) empirical and estimated densities, and the 5 Gaussian components; (b) the log-likelihood (maximum at 5) . . . 55


24. Non-Gaussian 3-class result: (a) and (b) show the output of the pAIC-EM algorithm, (c) shows the normalized absolute error between the empirical and estimated densities, (d) shows the dominant component generated by pAIC-EM and the refining components, positive and negative, generated by the modified EM algorithm, (e) shows the empirical and estimated densities, and (f) shows the marginal densities with the best thresholds . . . 56

25. Result for a CT lung slice: (a) the CT slice, (b) and (c) the output of the pAIC-EM algorithm, (d) the dominant component generated by pAIC-EM and the refining components, positive and negative, generated by the modified EM algorithm, (e) the empirical and estimated densities, and (f) the marginal densities with the best threshold . . . 57

26. Result for a 3-class MRA slice: (a) the MRA slice, (b) and (c) the output of the pAIC-EM algorithm, (d) the dominant component generated by pAIC-EM and the refining components, positive and negative, generated by the modified EM algorithm, (e) the empirical and estimated densities, and (f) the marginal densities with the best thresholds . . . 58

27. Proposed algorithm flowchart . . . 60

28. Effect of choosing γ: (a) original image, (b) noisy output with γ = 0.05, (c) over-smoothed output with γ = 5 . . . 62

29. The change of the relative error with γ; the proposed adaptive analytical approach values are shown as red asterisks . . . 62

30. 1D image labeling: (a) the original signal is distorted by WGN, (b) the distorted signal, (c) the MLE output, (d) the proposed algorithm output . . . 64

31. Segmentation results: (a) the gray level image with SNR = 1 dB, (b) proposed approach segmentation; (c) the gray level image with SNR = 5 dB, (d) proposed approach segmentation (errors shown in red) . . . 66

32. Segmentation results for the 3-modal image: (a) EDISON output, (b) NCUTS output; for the 5-modal image: (c) EDISON output, (d) NCUTS output (errors shown in red) . . . 67

33. The change of the misclassification error with the SNR for 3-modal and 5-modal images . . . 68

34. The change of the misclassification error with the SNR for 3-modal and 5-modal images . . . 68


35. 4-modal image segmentation results: (b) ε = 0.16%, τ = 5 sec; (c) ε = 0.71%, τ = 13 sec (errors shown in red) . . . 69

36. Example of a graph used in volume labeling. Note: terminals should be connected to all voxels, but for illustration purposes this was not done . . . 69

37. Slices from the synthetic volume . . . 69

38. Synthetic volume segmentation results: (a) 3D segmentation, (b) 2D segmentation (errors shown in green) . . . 70

39. Real image segmentation results of (a) the proposed algorithm, (b) EDISON (hs=15, hc=9, M=5000), and (c) NCUT (nsg=2 and the default parameters). (Courtesy of Shi and Malik [6]) . . . 71

40. Starfish segmentation results of (a) the proposed algorithm, (b) EDISON (hs=2, hc=10, M=500), (c) NCUT (nsg=2, of=0.04), (d) the proposed algorithm, (e) EDISON (hs=7, hc=6.6, M=5000), and (f) NCUT (nsg=2, of=0.04, th=0.03, eem=5, et=0.1) . . . 71

41. Kidney segmentation results of (a) the proposed algorithm ε=1.7%, (b) EDISON (hs=5, hc=6.1, M=9000) ε=1.1%, (c) NCUT (nsg=2, of=0.01, th=0.04, et=0.01) ε=1.1%, (d) the proposed algorithm ε=0.1%, (e) EDISON (hs=7, hc=6.6, M=9000) ε=0.01%, and (f) NCUT (nsg=2 and the default parameters) ε=0.6% . . . 72

42. Lung phantom segmentation: (a) the phantom gray scale image, (b) the proposed algorithm output, (c) ICM output, (d) IT output (misclassified pixels shown in red) . . . 73

43. Segmentation results of the proposed algorithm (1st column), ICM (2nd column), and IT (last column) (misclassified pixels shown in red) . . . 74

44. Proposed algorithm's results. Left: segmented lung volumes (errors shown in green). Right: samples from the volumes' segmented slices (errors shown in red). Segmentation errors: (a) 2.08%, (b) 2.21%, (c) 2.17%, and (d) 1.95% . . . 76

45. Examples of segmented lung slices that have nodules (bounded by yellow circles). Left and middle: the IT and ICM approaches misclassified these parts as chest tissues (errors shown in red). Right: the proposed algorithm correctly classified them as lung . . . 77


46. Part of an image lattice for a 2nd order neighborhood system and cliques of size three . . . 91

47. Starfish segmentation results: (a) the pairwise cliques results and (b) the higher order cliques results . . . 93

48. Starfish with occlusions segmentation results: (a,c) the pairwise cliques results and (b,d) the higher order cliques results . . . 94

49. More starfish with occlusions segmentation results: (a,c) the pairwise cliques results and (b,d) the higher order cliques results . . . 95

50. More segmentation results: (a,c) the pairwise cliques results (numbers in the images refer to regions with inhomogeneities) and (b,d) the higher order cliques results . . . 96

51. More segmentation results: (a,c,e) the pairwise cliques results (numbers in the images refer to regions with inhomogeneities) and (b,d,f) the higher order cliques results . . . 97

52. More segmentation results for partially occluded objects: (a,c) the pairwise cliques results (numbers in the images refer to regions with artificial occlusion) and (b,d) the higher order cliques results . . . 98

53. Samples of kidney training data images: (a) original, (b) segmented, (c) aligned, (d) binary . . . 103

54. (a) The labeled image, (b) the iso-contours . . . 104

55. (a,b) Empirical densities and the estimated Poisson distributions; (c,d) components of the distance probabilistic models; (e,f) final estimated densities . . . 106

56. Gray level probabilistic model for the given image: (a) original image, (b) aligned image, (c) initial density estimation, (d) LCG components, (e) final density estimation, (f) marginal densities. Segmented kidney: (g) results of gray level threshold 102.6%, (h) results of graph cuts without shape constraints 41.9%, (i) proposed approach results 2.5% . . . 108

57. More segmentation results: (a) original images, (b) results of graph cuts without shape constraints, (c) proposed approach results . . . 110

58. More segmentation results: (a) original images, (b) results of graph cuts without shape constraints, (c) proposed approach results . . . 111


59. Kidney phantom: (a) the phantom, (b) results of graph cuts without shape constraints, (c) proposed approach results . . . 113

60. Image matching results. Left: relative positions before matching. Right: matching results. . . . 116

61. General stereo pair setup. The relation between the depth and the disparity . . . 117

62. Rectified stereo pair setup. The depth is inversely proportional to the disparity . . . 118

63. Example of the depth map: (a) one of the image pair and (b) the corresponding depth map. . . . 118

64. The system setup . . . 120

65. Reconstruction results. (a) The stereo pair, (b) left: the depth map, and right: the reconstructed shape. . . . 121

66. More reconstruction results. Left: one of the stereo pair. Middle: frontal view of the reconstructed shape. Right: side view of the reconstructed face . . . 122


List of Algorithms

ALGORITHM . . . PAGE
1 Iterated Conditional Modes (ICM) [7] . . . 21
2 α-Expansion Move Algorithm [8] . . . 25
3 Extended Roof Duality Algorithm (Probing Method) [9] . . . 28
4 Gibbs Sampler Algorithm [10] . . . 38
5 pAIC-EM Algorithm . . . 53
6 Generating Coefficients of Size k Clique's Energy Algorithm . . . 83
7 Transform to Quadratic . . . 88


CHAPTER I

INTRODUCTION

An image is a graphic representation of a scene in two- or three-dimensional space. An image is stored as a raster data set of integer values that represent the intensity of reflected light, heat, or another range of values on the electromagnetic spectrum. Common examples include photographs, which are digitizations of two-dimensional projections of three-dimensional scenes, remotely sensed images (e.g., satellite data), and scanned data (see Fig. 1 (a,b)). Other examples include medical images, where image intensities represent radiation absorption in X-ray imaging and Computed Tomography (CT), acoustic pressure in ultrasound, or radio-frequency (RF) signal amplitude in Magnetic Resonance Imaging (MRI) (see Fig. 1 (c-f)). Images may be acquired in a continuous domain or in a discrete space. For two-dimensional discrete images, the location of each measurement is an “image element” called a pixel (in three-dimensional space, it is called a voxel), and each object or class in the image is represented by a group of pixels.

A. Image Modeling

The goal of image modeling is to quantitatively specify the visual characteristics of an image with a few parameters, so as to capture natural constraints and general assumptions about the physical world and the imaging process [11]. Stochastic approaches, particularly random field models, have proved useful in modeling real images, which vary widely from one to another. These models have been used in image processing algorithms such as segmentation, enhancement, and restoration [12]. Random field models can represent prior information about an image so that the powerful Bayesian decision theory can be applied to solve these problems.

FIGURE 1 – Different image types: (a) a single 2D video frame of a real 3D scene, (b) a remotely sensed image of the Earth's surface, (c) a 2D slice of a 3D computed tomography image, (d) a magnetic resonance image, (e) an ultrasound image, and (f) an X-ray image.

Objects of interest in images are characterized by geometric shapes and visual appearance, although it is very difficult to define these notions formally. In this dissertation, the visual appearance is characterized by the marginal probability distribution of pixel or voxel intensities and by the spatial interaction between pixels or voxels in each object. The shape is characterized by typical object boundaries in the images. One of the main objectives of this dissertation is to develop more accurate mathematical models of these characteristics than the known models. This section presents an overview of existing image modeling approaches.

1. Intensity Model

This model estimates the marginal density for each class in the given image from the mixed normalized histogram of gray-level occurrences within that image. Density estimation has been heavily studied under two primary umbrellas: parametric and nonparametric methods. Nonparametric methods take the strong stance of letting the data represent themselves; methods such as the Parzen window [13] achieve good estimates for any input distribution as more data are observed. However, these methods have many parameters that need to be tuned [14]. One of the core methods on which nonparametric density estimation approaches are based is the k-nearest neighbors (k-NN) method. These approaches calculate the probability of a sample by combining the memorized responses for the k nearest neighbors of that sample in the training data. In such estimators (e.g., the Parzen density estimator [13]), the amount of computation is directly related to the number of training samples. To reduce this computation, Fukunaga and Hayes [15] extracted a representative subset from the training data, chosen so that the Parzen density estimate generated with the reduced subset is very close to the one generated with the full data set, in the sense of an entropy measure of similarity between the two estimates. Silverman [16] proposed a kernel density estimator using the Fast Fourier Transform (FFT): the density is estimated with a univariate Parzen window on a regular grid, exploiting the property that the FFT of the density estimate is the product of the FFTs of the kernel function and the binned data. However, this algorithm cannot be used in the general case of density estimation.
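Silverman's FFT shortcut can be sketched concretely. The snippet below is a simplified illustration in NumPy (the function name, the Gaussian kernel, and the parameter choices are ours, not Silverman's exact formulation): the samples are binned onto a regular grid, and the Parzen estimate is obtained by multiplying the FFT of the kernel with the FFT of the binned data.

```python
import numpy as np

def kde_fft(samples, grid_min, grid_max, n_bins=512, bandwidth=0.3):
    """Univariate Parzen-window density estimate via FFT-based convolution.

    Simplified sketch of a grid-based estimator: bin the data, then convolve
    the bin counts with a Gaussian kernel in the Fourier domain.
    """
    # Bin the samples onto a regular grid.
    counts, edges = np.histogram(samples, bins=n_bins, range=(grid_min, grid_max))
    step = edges[1] - edges[0]
    centers = edges[:-1] + step / 2

    # Gaussian kernel sampled on the same grid, centered for circular convolution.
    offsets = (np.arange(n_bins) - n_bins // 2) * step
    kernel = np.exp(-0.5 * (offsets / bandwidth) ** 2)
    kernel /= kernel.sum()

    # FFT of the estimate = FFT(kernel) * FFT(binned data).
    spectrum = np.fft.fft(np.fft.ifftshift(kernel)) * np.fft.fft(counts)
    density = np.maximum(np.real(np.fft.ifft(spectrum)), 0.0)
    density /= density.sum() * step  # normalize to integrate to 1
    return centers, density
```

Because the convolution is circular, mass near one grid boundary wraps to the other; a production implementation would zero-pad the grid.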

To reduce the number of kernel evaluations, Jeon and Landgrebe [17] proposed a simple branch-and-bound procedure applied to Parzen density estimation, and Girolami and He [18] proposed a Parzen window-based density estimator that employs condensed data samples. The advantage of nonparametric methods is their flexibility: they can fit almost any data well, and no prior knowledge is required. However, they often have a high computational cost, and there is no opportunity to incorporate prior knowledge when it is available.
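The k-NN idea described above can be made concrete with a minimal 1D sketch (illustrative only; the function name and the choice of k are ours):

```python
import numpy as np

def knn_density_1d(x, samples, k=25):
    """1D k-nearest-neighbor density estimate: p(x) ~ k / (n * volume),
    where the 'volume' is the length 2*r_k of the interval reaching the
    k-th nearest training sample. Illustrative sketch only."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    dists = np.abs(samples - x)               # distances to all training samples
    r_k = np.partition(dists, k - 1)[k - 1]   # k-th smallest distance
    return k / (n * 2.0 * r_k)
```

The estimate is large where the k-th nearest training sample is close and small where it is far, which is exactly the k/(n · volume) rule.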

On the other hand, parametric methods are useful when the underlying distribution is known in advance or is simple enough to be modeled by a simple distribution function or a mixture of such functions. A parametric model is very compact (low memory and CPU usage), since only a few parameters need to be fitted. Parameters of a mixture are typically estimated using the Expectation-Maximization (EM) algorithm, which converges to the maximum likelihood estimates of the mixture weights (the prior probabilities of the mixture components) and the parameters of each component [19]. Since Laird et al. [20] extended the EM algorithm to estimate parameters from an incomplete data set, EM has become a popular approach to density estimation, and many versions of EM have been introduced [19]. Recently, Farag et al. [1, 21] used a Linear Combination of Gaussians (LCG) with positive and negative components to estimate the marginal density of each class in a given image, and developed a modified EM algorithm to estimate the parameters of this LCG model.
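For reference, the classical EM recursion for an ordinary two-component 1D Gaussian mixture (all-positive weights) can be sketched as follows. Note this is the standard textbook algorithm, not the modified EM for sign-alternating LCG components developed in [1, 21]; the initialization and function name are our choices.

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """Classical EM for a two-component 1D Gaussian mixture (all-positive
    weights). A sketch of the standard algorithm only."""
    x = np.asarray(x, dtype=float)
    mu = np.quantile(x, [0.25, 0.75])   # crude but deterministic initialization
    var = np.full(2, x.var())
    w = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sample.
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var
```

On a bimodal sample, the recovered means converge to the two mode centers and the weights sum to one by construction.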

The limitation of all the aforementioned approaches is that they depend on samples of training data to estimate the marginal density for each class in the given image. In this dissertation, the major objective is to present an unsupervised approach that estimates the marginal density for each class from the mixed normalized histogram of gray-level occurrences. Similar to the work of Farag et al. [1, 21], the marginal intensity distribution is modeled by a linear combination of Gaussians (LCG) with sign-alternating components, whose parameters are estimated using the modified EM algorithm. In that algorithm, however, the number of classes and the initial parameters of their dominant modes are set manually. In this dissertation, both the number of classes and the initial mixture parameters are estimated from the given empirical distribution using a new technique described in section IV.B.1.

2. Spatial Interaction Model

Spatial interaction helps in eliminating possible ambiguities, correcting errors, and recovering missing information in the image labeling problem. Spatial interaction models describe the relation between the pixels of an image mathematically. To do this, the image is treated as a realization of a stochastic process on a random field: a joint distribution imposed on a set of random variables representing pixel intensities, which introduces statistical dependence in a spatially meaningful way. Random field models provide a good tool for blending information about local spatial interaction into a global framework. The literature is rich with models for describing the spatial interaction of image pixels, each with its own representation of the relationship between local sites in the random field. This section gives a brief review of the popular models; many random field models are discussed, and a taxonomy of them provided, in Dubes and Jain's work [11].

a. Gaussian random fields Gaussian random fields are special models that take advantage of the mathematical properties of the Gaussian distribution. The Gaussian model requires all interactions between pixels to be Gaussian distributed, and it allows only multiple pairwise interactions. The most popular Gaussian models in image analysis are the Simultaneous Auto-Regressive (SAR) and Auto-Regressive Moving Average models [11].

b. Fractal Model Fractals are useful in modeling images of natural surfaces (e.g., clouds, leaves, and rivers) that have a statistical quality of roughness and self-similarity at different scales. For a review of fractal models, see [22]. Pentland [23] used fractal models for image segmentation, and Garding [24] used them for texture segmentation.

c. Fourier transform This model is used in the classification of textured images. The Fourier transform mimics the human visual system by extracting the different frequency components and analyzing the image in the frequency domain. To segment a variety of natural and synthetic textures, Coggins and Jain [25] used a set of frequency- and orientation-selective filters in a multiband filtering approach. Smith [26] used a set of band-pass filters followed by zero-crossing detection to generate a tree classifier of textures.

d. Markov Random Field The most commonly used models in image analysis are Markov Random Field (MRF) models. These models capture the local spatial textural information in an image by assuming that each pixel's intensity depends on the intensities of the neighboring pixels. MRF models are among the most successful models used to represent visual contextual information in labeling problems [27]. Moreover, MRF models that have exponential priors belong to the class of Gibbs models: while an MRF is defined in terms of local properties, a Gibbs Random Field (GRF) describes the global properties of an image in terms of the joint distribution of intensities for all pixels [28]. This class of MRFs with exponential priors, known as the Markov-Gibbs Random Field (MGRF), has been extensively used in image modeling [11, 28]. Gimel'farb [29] proposed an MGRF model that takes into account multiple pairwise pixel interactions. This model is used for images (textures) that are spatially uniform; such an image is classified by its gray-level difference histogram. The author proposed an algorithm based on the maximum likelihood approach to estimate the model parameters and claims that, compared to other models, the parameters of this model are larger in number but simpler to estimate. Zhu et al. [30] proposed the Filters, Random Fields, and Maximum Entropy (FRAME) model, which integrates filtering theory and MRF modeling using the maximum entropy principle. To model a set of texture images, they extracted texture features by applying a set of filters to the observed images; the marginal distributions of the images are then estimated from the histograms of the filtered images, and a distribution for the texture is fitted from these marginal distributions using a maximum entropy-based method. The authors claim that this model is more descriptive than conventional MRF models; however, it is computationally very expensive. Although the MGRF is a very good tool for modeling an image, estimating its parameters is still a challenge. Several methods [2, 28, 31–33] have been proposed in the literature to estimate these parameters; some of them are discussed in chapter III.

Farag et al. [1] proposed an analytical approach to estimate the parameter of a specific MGRF model (the homogeneous isotropic Potts model governing symmetric pairwise co-occurrences of labels). In this dissertation, a similar analytical approach is proposed to estimate the parameter of another specific MGRF model (the homogeneous isotropic Potts model governing asymmetric pairwise co-occurrences of labels). This is discussed in detail in chapter III.

An MGRF model is specified by its set of clique potentials. Most of the aforementioned approaches focus on unary and pairwise cliques; however, this representation cannot model the rich statistics of natural scenes [34]. Such statistics can be modeled using higher-order clique potentials, but because of the computational expense of the optimization algorithms for higher-order MGRF models, their use has been quite limited. This dissertation proposes a new efficient algorithm that transforms a higher-order binary energy into a quadratic one, so that it can be used in practice. This work is presented in chapter V.
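The flavor of such reductions can be seen in the classical substitution for a cubic term with a non-positive coefficient (this is the well-known textbook reduction, not the new algorithm of chapter V): for binary x, y, z and a ≤ 0, the term a·xyz equals the minimum over an auxiliary binary variable w of a·w·(x + y + z − 2), which is quadratic once w is added. A brute-force check:

```python
from itertools import product

def reduce_neg_cubic(a):
    """For a <= 0, the classical substitution replaces the cubic term
    a*x*y*z with the quadratic-in-(x,y,z,w) term a*w*(x + y + z - 2),
    minimized over an auxiliary binary variable w. Returns a function of
    (x, y, z) giving that minimum."""
    assert a <= 0
    return lambda x, y, z: min(a * w * (x + y + z - 2) for w in (0, 1))

# Brute-force check: the reduction reproduces a*x*y*z on all 8 binary inputs.
a = -3.0
quad = reduce_neg_cubic(a)
for x, y, z in product((0, 1), repeat=3):
    assert quad(x, y, z) == a * x * y * z
```

Positive-coefficient terms need a different substitution, which is one reason general higher-order reductions are harder than this special case suggests.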


FIGURE 2 – A binary image of a Dalmatian dog in a background of leaves. Observers combine the intensity and interaction information of the input image with the notion of what a Dalmatian looks like to correctly recognize the dog. Courtesy of Cremers [3].

3. Shape Model

In many cases, an image carries misleading information for the intensity and interaction models, yet the human brain still captures the visual characteristics of the given image. An example is shown in Fig. 2: a Dalmatian dog in an environment of fallen leaves and grass. Due to coarse-graining and binarization, the dog cannot be distinguished from the background based on texture alone; however, human observers combine the intensity and interaction information of the input image with the notion of what a Dalmatian looks like to correctly recognize the dog. Thus, a shape model provides useful information that compensates for missing low-level information in cases such as poor image resolution, diffuse boundaries, noise, or occlusion.

The literature contains many shape modeling approaches, such as the one proposed by Leventon et al. [35]. This approach combines shape and deformable models by attracting the level set function to the likely shapes from a training set specified by Principal Component Analysis (PCA). To make the shape guide the segmentation process, Chen et al. [36] defined an energy functional that minimizes a Euclidean distance between a given point and its shape prior. Huang et al. [37] combine registration with segmentation in an energy minimization problem: the evolving curve is registered iteratively with a shape model using level sets, and a certain function is minimized to estimate the transformation parameters. In [38], shapes are represented with a linear combination of 2D distance maps, where the weight estimates maximize the distance between the mean gray values inside and outside the shape. In Paragios's work [39], a shape prior and its variance obtained from training data are used to define a Gaussian distribution, which is then used in the external energy component of a level sets framework. One of the main limitations of these approaches is that they do not model the shape variations and so cannot handle shapes with large deformations. In this dissertation, a new shape model is proposed to overcome this limitation, as shown in chapter VI.

B. Image Labeling

The image labeling problem is specified in terms of a set of sites (e.g., image pixels or segments) and a set of labels (e.g., pixel colors or texture types). The objective of labeling algorithms is to assign the true label to each site. This problem can be formulated in a Bayesian framework using Markov random fields [27], where the task is to find the Maximum-A-Posteriori (MAP) estimate of the underlying quantity. This dissertation focuses on image pixels as sites and gray levels as labels; the problem is discussed in detail in chapter II. MAP-based methods choose the labeled image that maximizes the posterior probability of the labeling given the observed image. Many optimization techniques proposed in the literature use stochastic models to solve the labeling problem; a review of some of these approaches is given in Sec. II.D.


C. Why This Work Is Needed

The objective of this work is to use these models (intensity, spatial interaction, and shape) to boost the performance of existing image labeling approaches. Image labeling serves as a formulation for diverse computer vision and image processing applications, such as image segmentation, image restoration, image matching, and stereo. Image segmentation, in particular, is an important preliminary step in many real-world problems such as computer-aided diagnosis, object recognition, and shape analysis. Thus, one of the application sections of this dissertation is dedicated to image segmentation.

D. Dissertation Contributions

This work addresses image modeling and image analysis, especially the labeling problem for gray-scale and color images. The objective is to find accurate mathematical models (intensity, spatial interaction, and shape) that describe all available information in the image. The main contributions of this dissertation can be categorized as follows:

- Intensity Model:

1. The number of classes in the given multimodal image is determined by a new technique based on maximizing a new joint likelihood function.

- Spatial Interaction Model:

1. A new analytical approach is presented to estimate the parameter of a specific MGRF model (the homogeneous isotropic Potts model governing asymmetric pairwise co-occurrences of labels).

2. A new efficient algorithm is proposed that transforms a higher-order energy into a quadratic one, so that it can be used in practice.

- Shape Model:

1. The shape variations are estimated using a new probabilistic model.

- Algorithms:

1. Unlike previous graph cuts based segmentation techniques, no user interaction is needed; instead, a new unsupervised MAP-based labeling framework for N-D multimodal gray-scale images is proposed.

E. Document Layout

This document is presented in eight chapters. The following remarks summarize the scope of each chapter.

Chapter II discusses the formulation of the image labeling problem in terms of the model used to represent the image and the technique used to find the Maximum-A-Posteriori (MAP) estimate.

Chapter III proposes an analytical method to estimate the parameter of the homogeneous isotropic Potts model for an asymmetric Gibbs potential function.

Chapter IV presents a novel unsupervised graph cuts approach for N-D multimodal image labeling (image segmentation and image restoration).

Chapter V proposes an efficient transformation that reduces a higher-order energy with binary variables to a quadratic one. The use of the proposed method is demonstrated on the segmentation problem of color images, with encouraging results.

Chapter VI presents a novel shape representation and its application to image segmentation.

Chapter VII introduces experiments on human face reconstruction based on a stereo matching technique as an application of image labeling.

Chapter VIII summarizes the main components of the proposed work and presents a plan for future work.


CHAPTER II

MARKOV-GIBBS RANDOM FIELD AND LABELING PROBLEM

The labeling problem provides a common notation for many diverse vision and image processing problems such as stereo, image restoration, image matching, and image segmentation. The image labeling problem is specified in terms of a set of sites (e.g., image pixels, edges, or segments) and a set of labels (e.g., pixel colors or texture types). The objective of labeling algorithms is to assign the true label to each site. This problem can be formulated in a Bayesian framework using Markov random fields [27], in which the task is to find the Maximum-A-Posteriori (MAP) estimate of the underlying quantity. This chapter is dedicated to the problem formulation and the tools for solving it. Sec. II.A introduces the Markov-Gibbs Random Field (MGRF) and its properties. Sec. II.B presents common MGRF models that have been used in image modeling. The MGRF-based formulation of image labeling is given in Sec. II.C.1. Different techniques that have been used to solve the labeling problem are explored in sections II.D, II.E, and II.F.

A. Markov-Gibbs Random Field (MGRF)

A random field is defined as a triplet consisting of a sample space, a class of Borel sets on the sample space, and a probability measure P whose domain is the class of Borel sets [11]. A random field model is a specification of P for a particular class of random variables, such as the intensities at image pixels. For an observed image, a stochastic model can be constructed as follows. Let P = {1, 2, . . . , n} be a set of n sites that represents the set of image pixels, and let G = {0, . . . , Q − 1} and L = {0, . . . , K − 1} denote the sets of gray levels and region labels, respectively, where Q is the number of gray levels and K is the number of labels.

FIGURE 3 – Two examples of the first-order neighbors for p and q.

Definition 1. A digital image is defined by a function I : P → G that maps the sites onto the set of signal values.

Definition 2. A labeled image is defined by a function f : P → L that maps the sites onto the set of labels.

Let F = {F1, F2, . . . , Fn} be a set of random variables defined on P. Hence, f = {f1, f2, . . . , fn} is defined as a configuration of the field F. Denote by 𝔽 the set of all labelings L^n.

Since the image has a natural 2D array structure, it is convenient to define a geometric neighborhood system N consisting of the set of all neighboring pairs {p, q}, where p, q ∈ P. The most popular neighborhood system in image analysis is the first-order neighborhood, which consists of the four nearest neighbors sharing a side with the given pixel. Fig. 3 shows an example of this neighborhood system, where the neighborhood of p is Np = {a, b, c, d} and the neighborhood of q is Nq = {x, y}.

The symmetric neighborhood Np satisfies the following properties:

1. p ∉ Np

2. if p ∈ Nq, then q ∈ Np.

Fig. 4 illustrates the neighborhood systems up to the fifth order for a pixel p.
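A minimal helper makes the first-order system and its symmetry properties concrete (the coordinate convention and function name are ours):

```python
def first_order_neighbors(p, width, height):
    """4-connected (first-order) neighbors of pixel p = (row, col) on a
    width x height lattice; border pixels simply have fewer neighbors.
    Illustrative helper -- the index convention is ours."""
    r, c = p
    candidates = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return [(rr, cc) for rr, cc in candidates
            if 0 <= rr < height and 0 <= cc < width]
```

Enumerating every pixel of a small lattice confirms both properties above: no pixel is its own neighbor, and whenever q is a neighbor of p, p is a neighbor of q.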


FIGURE 4 – The neighborhood systems up to the fifth order for a pixel p.

1. Gibbs Random Fields

In 1901, Gibbs used Boltzmann's distribution of energy states in molecules to express the probability of a whole system with many degrees of freedom being in a state with a certain energy [27]. A Gibbs random field (GRF) describes the properties of an image in terms of the joint distribution of pixel labels. A discrete Gibbs random field provides a global model for an image by specifying a probability mass function of the following form:

P(f) = (1/Z) exp(−U(f)/T),  (1)

where Z is a normalizing constant called the partition function and T is a control parameter called the temperature. The term U(f) denotes the Gibbs energy [28] and is given by:

U(f) = ∑_{c∈C} Vc(f),  (2)

where Vc is known as the potential function, or clique function, and C is the set of all cliques. A clique is defined as follows [27]:

Definition 3. A clique is a set of sites (e.g., pixels in an image) in which all pairs of sites are mutual neighbors.

Fig. 5 illustrates the clique types of the second order neighborhood system.
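Equations (1) and (2) can be illustrated on a lattice small enough to enumerate. The 2x2 grid and Potts-style pair potentials below are our illustrative choices, not the dissertation's models; the partition function Z is computed by brute force, which is feasible only because there are just 2^4 labelings.

```python
import itertools
import math

SITES = [(0, 0), (0, 1), (1, 0), (1, 1)]
# Pairwise cliques of the 2x2 lattice under the first-order neighborhood.
CLIQUES = [((0, 0), (0, 1)), ((1, 0), (1, 1)),   # horizontal pairs
           ((0, 0), (1, 0)), ((0, 1), (1, 1))]   # vertical pairs

def gibbs_energy(f, gamma=1.0):
    """U(f) of Eq. (2): each pairwise clique contributes gamma when its two
    labels differ, 0 otherwise (a Potts-style potential)."""
    return sum(gamma for p, q in CLIQUES if f[p] != f[q])

def gibbs_probability(f, T=1.0, gamma=1.0):
    """P(f) = exp(-U(f)/T) / Z of Eq. (1), with Z summed over all 2^4
    binary labelings of the tiny lattice."""
    Z = sum(math.exp(-gibbs_energy(dict(zip(SITES, bits)), gamma) / T)
            for bits in itertools.product((0, 1), repeat=4))
    return math.exp(-gibbs_energy(f, gamma) / T) / Z
```

Smooth labelings have low energy and hence high probability: the all-zero configuration is more likely than the checkerboard, and the probabilities over all sixteen configurations sum to one.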


FIGURE 5 – The clique types of the second order neighborhood and their different potential parameters


2. Markov Random Fields

Markov Random Fields (MRF) were introduced to image analysis by Hassner and Sklansky [40]. A Gibbs random field describes the global properties of an image in terms of the joint distribution of pixel labels, whereas a Markov random field is defined in terms of local properties. Markov random fields provide a convenient prior for modeling spatial interactions between image pixels.

Definition 4. The random field F, with respect to a neighborhood system N, is a discrete Markov random field if its probability mass function P(F = f) satisfies the following properties:

1. P(F = f) > 0 for all f ∈ 𝔽, (Positivity)

2. P(Fp = fp | FP−p = fP−p) = P(Fp = fp | FNp = fNp), (Markov Property)

3. P(Fp = fp | FNp = fNp) is the same for all sites p, (Homogeneity)

where P − p denotes set difference (all sites except p) and fNp denotes the labels of all pixels in Np.

The MRF probability mass function P(F = f) will be abbreviated as P(f). The Markov property states that a pixel's label depends directly only on its neighbors; this property establishes a local model.

Theorem 1 (Hammersley-Clifford [28]). Under the positivity condition, the probability distribution P(f) of an MRF can be represented as the Gibbs probability distribution of Eq. (1), with potential functions supported by the cliques C of the neighborhood graph describing the neighborhood system N.

This theorem provides a convenient way to specify an MRF: a unique GRF exists for every MRF, and vice versa, as long as the GRF is defined in terms of cliques on a neighborhood system.


B. MGRF Models

Geman and Geman [27] introduced the MGRF model to engineers as a powerful tool for image modeling. MGRF models have been successfully used in texture and general image analysis and synthesis (for details, see [2] and the references therein). The literature is rich with MGRF models, each of which tries to select potential functions suitable for a specific system behavior. Here, a brief review of the most popular and related discrete models is given.

1. Auto-Models

The Gibbs energy can be defined by specifying interactions between sites

in the image. In most of the image processing and computer vision literature,

the Gibbs energy has been defined in terms of the “single-site” potentials and the

“two-site” potentials. These are called pairwise interaction models. As Picard describes in [41], the single-site potentials, also called the “external field”, allow one to impose structure on a pattern from an outside source. The two-site poten-

tial, also called a “bonding parameter”, influences the “attraction” or “repulsion”

between neighboring pairs of pixels in the image. The different models corre-

sponding to this form of the energy are typically called “auto-models”. Besag [28]

formulated the energy function of these models as follows:

U(f) = \sum_{p \in P}\left[V_p(f_p) + \sum_{q \in N_p} V_{pq}(f_p, f_q)\right], (3)

where V_p(\cdot) is the potential function for single-pixel cliques, and V_{pq}(\cdot,\cdot) is the potential function for all cliques of size 2, with V_{pq}(f_p, f_q) = V_{pq}(f_q, f_p) and V_{pq}(f_p, f_p) = 0. In the homogeneous (site-independent) models, V_p(\cdot) is represented by V(\cdot) and V_{pq}(\cdot,\cdot) is represented by V_q(\cdot,\cdot) (i.e., V_q(\cdot,\cdot) depends on the orientation of the neighbor, as shown in Fig. 5). In the homogeneous isotropic models, V_{pq}(\cdot,\cdot) is represented by V(\cdot,\cdot).

An example of a Gibbs model having an energy function of this form is the


homogeneous auto-binomial model used by Cross and Jain [42] where

V(f_p) = \gamma_0 f_p - \ln\left(\frac{K!}{f_p!\,(K - f_p)!}\right),
V_q(f_p, f_q) = \gamma_q f_p f_q, (4)

where γ0 controls the influence of the external field, and γq influences the interac-

tion between neighboring pairs. γq is called the pairwise potential (e.g., γ1, γ2, γ3, γ4)

that depends on the orientation of site q relative to its neighbor p, as shown in Fig. 5.

In the isotropic models, γ_q = γ. The Derin–Elliott model [31] can also be expressed in

this framework as follows

V(f_p) = \gamma_0 f_p,
V_q(f_p, f_q) = \begin{cases}\gamma_q & \text{if } f_p = f_q,\\ -\gamma_q & \text{otherwise.}\end{cases} (5)

One of the most popular models in computer vision is the homogeneous Potts

model [27]. The Potts model is similar to the Derin–Elliott model, but with γ_0 = 0. A

similar type of the latter model is used in this dissertation.

C. MGRF-Based Image Labeling

In image labeling problems, one tries to recover a number of hidden vari-

ables (e.g., image labels) based on observable variables (e.g., image gray levels).

In this problem, MGRF models fit within the Bayesian framework of Maximum-

A-Posteriori (MAP) estimation, where the objective is to estimate the labeling that

solves the maximization problem.

1. Maximum-A-Posteriori Estimation

Since the field F is not observable, its realization f (the desired map) is esti-

mated based on the observation I (the input image). The common way to estimate

an MRF is MAP estimation [27]. Following the conventional approaches the input


image and the desired map (labeled image) are described by a joint MGRF model

of independent image signals and interdependent region labels. A two-level prob-

ability model of the input image and its desired map is given by a joint distribution

P (I, f) = P (f)P (I|f) where P (I|f) is a conditional distribution of the original image

given the map, and P (f) is an unconditional probability distribution of the map.

Note that when the given data is too noisy, the dependence of the data I on the de-

sired values f is weak, i.e., P(I|f) ≈ P(I) [43]. The Bayesian maximum-a-posteriori

estimate of the map f , given the image I is expressed as:

f^* = \arg\max_{f \in F} P(I|f)\,P(f). (6)

In order to ensure that the posterior distribution remains an MRF model, the observed random variables I = {I_1, I_2, \ldots, I_n} must be conditionally independent given the labels. One way to obtain this is to assume that the noise at each pixel is independent. Therefore

P(I|f) = \prod_{p \in P} P(I_p \mid f_p). (7)

By replacing P (I|f) and P (f) in Eq. (6) using their expressions from equations (1),

(2), and (7), and after simple algebraic manipulations, the following expression is

obtained

f^* = \arg\max_{f \in F} \exp\left(\sum_{p \in P}\log P(I_p \mid f_p) - \frac{1}{T}\sum_{c \in C} V_c(f)\right). (8)

Since the temperature T is constant for the given image, it can be removed from the

expression and its value implicitly absorbed into the potential parameters. In order

to have a complete description for the MGRF model, one should specify the clique

potential function. By choosing the MGRF model to be the pairwise homogeneous

Potts model described in Sec.II.B, Eq. (8) can be rewritten as follows:

f^* = \arg\max_{f \in F} \exp\left(\sum_{p \in P}\log P(I_p \mid f_p) - \sum_{\{p,q\} \in N} V(f_p, f_q)\right). (9)

Unfortunately, this problem has no analytical solution. However, maximizing the


likelihood in Eq. (9) is equivalent to minimizing the following energy function:

E(f) = \sum_{\{p,q\} \in N} V(f_p, f_q) + \sum_{p \in P} D(f_p), (10)

where D(f_p) = -\log P(I_p \mid f_p) is usually called the data penalty term.
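To make the minimization objective concrete, the following sketch evaluates the energy of Eq. (10) for a given labeling on a 4-connected grid with a homogeneous Potts pairwise term V(f_p, f_q) = γ·δ(f_p ≠ f_q); the K × H × W array layout for the data penalties is an illustrative assumption, not the implementation used in this dissertation.

```python
import numpy as np

def potts_energy(labels, neg_log_lik, gamma):
    """Eq. (10): sum of gamma*[fp != fq] over 4-connected pairs plus the
    data penalties D(fp) = -log P(Ip | fp).
    neg_log_lik[k, i, j] holds D(fp = k) at pixel (i, j) (assumed layout)."""
    labels = np.asarray(labels)
    H, W = labels.shape
    # data term: pick the penalty of the assigned label at every pixel
    data = neg_log_lik[labels,
                       np.arange(H)[:, None],
                       np.arange(W)[None, :]].sum()
    # pairwise Potts term over horizontal and vertical neighbor pairs
    smooth = gamma * ((labels[:, 1:] != labels[:, :-1]).sum()
                      + (labels[1:, :] != labels[:-1, :]).sum())
    return data + smooth
```

With a uniform data term, the energy reduces to γ times the number of label discontinuities, which is the intuition behind the smoothness prior.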

D. Energy Minimization Techniques

To solve the MAP estimation problem of Eq. (10), many approaches have been pro-

posed. Classical iterative search algorithms can be either stochastic (e.g., simu-

lated annealing [27]) or deterministic (e.g., iterated conditional modes [7]). Re-

cently, new graph-based algorithms such as Graph Cuts [44, 45], Tree-ReWeighted

message passing (TRW) [46], Belief Propagation (BP) [47], and roof duality [48]

have proven to be very powerful in minimizing such energy Eq. (10). Many stud-

ies (e.g., [9, 49]) in the literature have been done to investigate the performance

of these algorithms in solving different computer vision problems. These studies

show that modern energy minimization methods are far superior to classical

methods. In this section and the following two sections some of these algorithms

are summarized.

1. Simulated Annealing (SA)

The objective of this algorithm is to find MAP estimates of all labels simulta-

neously. Simulated annealing algorithm is based on the Metropolis approach [50]

and it has been popularized by Geman and Geman [27], who used SA to solve the

image labeling problem. The idea is to sample from the Gibbs distribution with energy U(f)/T, where the temperature parameter T is slowly decreased to 0. With certain temperature schedules, annealing can be guaranteed to find the global solution in the limit [27]. However, the schedules that lead to this global solution need potentially long

runtime [8], and so sub-optimal schedules are used in practice. In this case, the

algorithm is not expected to find the global solution.
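The procedure can be sketched as annealed Gibbs sampling over the Potts energy of Eq. (10); the geometric cooling schedule and all constants below are illustrative assumptions, not the schedule analyzed in [27].

```python
import numpy as np

def anneal(neg_log_lik, gamma, T0=4.0, cooling=0.97, sweeps=60, seed=0):
    """Didactic simulated annealing for the Potts labeling of Eq. (10).
    neg_log_lik[k, i, j] holds D(fp = k) at pixel (i, j) (assumed layout)."""
    rng = np.random.default_rng(seed)
    K, H, W = neg_log_lik.shape
    f = rng.integers(0, K, size=(H, W))
    T = T0
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                # local conditional energy of every candidate label at (i, j)
                e = neg_log_lik[:, i, j].astype(float)
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        e = e + gamma * (np.arange(K) != f[ni, nj])
                p = np.exp(-(e - e.min()) / T)   # Gibbs distribution at temperature T
                f[i, j] = rng.choice(K, p=p / p.sum())
        T *= cooling                             # slowly lower the temperature
    return f
```

As T approaches 0, the sampler behaves increasingly like greedy minimization; the sub-optimal schedule here illustrates why practical runs are not guaranteed to reach the global minimum.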


2. Iterated Conditional Modes (ICM)

The ICM algorithm was proposed by Besag [7] to compute the MAP esti-

mate in a computationally simple manner that is faster than simulated annealing. However, it is a local energy optimization technique. The algorithm is very

sensitive to the initial labeling. Choosing the prior MRF model is a critical step in

this algorithm. An outline of the ICM is described in Algorithm 1.

Algorithm 1 Iterated Conditional Modes (ICM) [7]
1: Choose an MGRF model for P(f).
2: Select the labeling f that maximizes P(I|f).
3: while i < N_iter do
4:   for all p ∈ P do
5:     Update f_p by the value of f_p that maximizes P(I_p | f_p) P(f_p | f_{N_p})
6:   end for
7:   Increase i.
8: end while
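Algorithm 1 can be sketched as follows for a Potts prior on a 4-connected grid; the data layout and the early-exit stopping rule are illustrative choices, not the original formulation in [7].

```python
import numpy as np

def icm(neg_log_lik, gamma, n_iter=10):
    """ICM sketch: start from the maximum-likelihood labeling, then greedily
    replace each fp by the label minimizing its local conditional energy.
    neg_log_lik[k, i, j] holds D(fp = k) at pixel (i, j) (assumed layout)."""
    K, H, W = neg_log_lik.shape
    f = neg_log_lik.argmin(axis=0)          # step 2: maximize P(I|f)
    for _ in range(n_iter):
        changed = False
        for i in range(H):
            for j in range(W):
                e = neg_log_lik[:, i, j].astype(float)
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        e = e + gamma * (np.arange(K) != f[ni, nj])
                best = int(e.argmin())
                if best != f[i, j]:
                    f[i, j], changed = best, True
        if not changed:                     # a local minimum has been reached
            break
    return f
```

The sketch makes the algorithm's sensitivity to initialization visible: the greedy updates can only descend from whatever labeling step 2 provides.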

3. Max-Product Belief Propagation (BP)

The BP algorithm approximately minimizes energies such as Eq. (10). It

gives an exact minimization if the graph of the energy is a tree. The key idea of the

BP can be described as follows. It passes messages around the graph defined by the four-connected image grid. Let m^i_{pq} denote the message that a node p sends to a neighboring node q at iteration i. Each message is a vector of dimension

|L|. All messages are initialized to zero, and at each iteration they are updated as

follows:

m^i_{pq}(f_q) = \min_{f_p}\left(V(f_p, f_q) + D(f_p) + \sum_{r \in N_p \setminus \{q\}} m^{i-1}_{rp}(f_p)\right). (11)

The algorithm keeps passing messages along the edges until all messages become valid (i.e., until convergence). A message is said to be valid if the updating process of Eq.


(11) does not change it (or changes it only by a constant independent of f_q). After

that a belief vector is computed for each node as

b(f_q) = D(f_q) + \sum_{r \in N_q} m^{N_{iter}}_{rq}(f_q), (12)

where Niter is the number of iterations. Finally, the optimal label at each node is

selected such that it minimizes each belief individually. The BP can solve a more

general class of functions than graph cuts, but it has some drawbacks. It can diverge on graphs that have loops, and such graphs arise in many computer vision problems. Also, it gives solutions with higher energy than graph cuts [49].

4. Tree-Reweighted Message Passing (TRW)

The TRW is a message passing algorithm similar to the BP algorithm. How-

ever, the message update rule is different as follows:

m^i_{pq}(f_q) = \min_{f_p}\left(V(f_p, f_q) + A_{pq}\Big(D(f_p) + \sum_{r \in N_p} m^{i-1}_{rp}(f_p)\Big) - m^{i-1}_{qp}(f_p)\right). (13)

The coefficients Apq are estimated as shown in [49] as follows. The image grid is

subdivided into a set of trees such that each edge is in at least one tree. Apq is

the probability that a tree, which is chosen randomly under certain distribution,

contains the edge (p, q) given that it contains p. Note that if Apq = 1 Eq. (13)

would be identical to Eq. (11). One of the advantages of the TRW algorithm is

that it computes a lower bound on the energy. Although the original TRW does not guarantee that the lower bound increases with time, the sequential TRW (TRW-S) proposed in [46] guarantees that the lower bound estimate does not decrease, which gives it convergence properties. TRW-S is guaranteed to give the same performance as roof duality, but it is much slower [9].

E. Graph Cuts

The work in [49] illustrates that the expansion moves algorithm (a graph cuts technique) outperforms the other competing methods in all tested problems in terms


of accuracy and time efficiency. So this technique is used as a minimization tool

in this dissertation. Note that different graph-based energy minimization methods

may use different graph constructions. Also there are different rules for convert-

ing graph cuts into image labelings. For more details see [8, 51]. In this section, the construction of the graph and the rules that are used to minimize an energy such as Eq. (10) are reviewed.

1. Graphs

The weighted undirected graph G = 〈V , E〉 is a set of vertices V , and a set

of edges E connecting the vertices. Each edge is assigned a nonnegative weight.

The set of vertices V corresponds to the set of image pixels P , and some additional

special nodes called terminals. These terminals correspond to the set of labels that

can be assigned to the image pixels. This work deals only with graphs that have

two terminals. These terminals are usually called the source s and the sink t. An

example of this graph is shown in Fig. 6. The set of edges E consists of two subsets.

The first subset, the n-links, contains edges that connect the neighboring pixels in

the image. The second subset, the t-links, contains edges that connect the pixels

with the terminals. Each edge is assigned a cost. The cost of a t-link connecting

a node and a terminal corresponds to the penalty of assigning the corresponding

label to the pixel. This cost corresponds to the second term in Eq. (10). The cost

of an n-link between two pixels is the penalty of disconnecting them. This cost

corresponds to the first term in Eq. (10).

2. Min-Cut/Max-Flow problems

An s/t cut on a graph G is a set of edges Ec ⊂ E such that terminals are sep-

arated in the induced graph G(Ec) = 〈V , E − Ec〉. The cut divides the set of image

pixels into two disjoint subsets. No proper subset of Ec separates the terminals in

G(Ec). Examples of valid and invalid cuts are shown in Fig. 7. The sum of weights


FIGURE 6 – An example of an undirected graph: image pixels (a–i) are the graph's nodes. The n-links are constructed for a 4-neighborhood system; the t-links connect pixels with the terminals.

(a) (b) (c) (d) (e)

FIGURE 7 – Examples of cuts on a graph. (a), (b), and (c) are valid cuts. (d) is an invalid cut; it does not separate the terminals, since the path s, a, d, e, h, t still exists. (e) is an invalid cut; it has a subset {(a, d), (b, e), (c, f)} that already gives a valid cut.

of edges, which belong to a cut, is the cut cost |Ec|. The Min-Cut problem is to find

a cut that has the minimum cost among all cuts. Min-Cut/Max-Flow algorithms

in combinatorial optimization show that a globally minimum s/t cut can be computed efficiently in low-order polynomial time by computing the maximum flow

from s to t [52]. Boykov and Kolmogorov [53] described a modified max-flow algo-

rithm that significantly outperforms the original max-flow techniques. In this dis-

sertation, this algorithm is used to find the minimum cut among all the cuts in the

graph. Since the cut divides the set of image pixels into two disjoint subsets, each containing one terminal, each pixel is assigned a unique label. Therefore, if the edge

weights are properly set based on the energy function parameters, a minimum cost

cut will correspond to a labeling with minimum value of this energy [53].
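The correspondence between minimum cuts and minimum-energy labelings can be sketched end-to-end for a binary labeling. The Edmonds–Karp max-flow routine below is a didactic stand-in for the Boykov–Kolmogorov algorithm [53] actually used in this dissertation; the t-link orientation (source side = label 0) is one conventional choice.

```python
from collections import deque

def max_flow_min_cut(cap, s, t):
    """Edmonds-Karp max-flow. cap: dict u -> {v: capacity}. Returns the
    flow value and the set of nodes on the source side of a minimum cut."""
    flow = {u: {v: 0 for v in cap[u]} for u in cap}
    def bfs():
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v, c in cap[u].items():
                if v not in parent and c - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        return parent
    total = 0
    while True:
        parent = bfs()
        if t not in parent:
            return total, set(parent)
        v, push = t, float('inf')
        while parent[v] is not None:            # bottleneck capacity
            push = min(push, cap[parent[v]][v] - flow[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:            # augment along the path
            flow[parent[v]][v] += push
            flow[v][parent[v]] -= push
            v = parent[v]
        total += push

def binary_graph_cut(D0, D1, gamma):
    """Two-terminal graph of this section for a binary labeling: t-links
    carry the data penalties, n-links the Potts cost gamma; the minimum cut
    then equals the minimum of Eq. (10). Pixels on the source side get 0."""
    H, W = len(D0), len(D0[0])
    cap = {'s': {}, 't': {}}
    for i in range(H):
        for j in range(W):
            cap[(i, j)] = {}
    def add(u, v, c):
        cap[u][v] = cap[u].get(v, 0) + c
        cap[v].setdefault(u, 0)                 # residual edge
    for i in range(H):
        for j in range(W):
            p = (i, j)
            add('s', p, D1[i][j])   # severed when p joins the sink side (label 1)
            add(p, 't', D0[i][j])   # severed when p stays on the source side (label 0)
            if i + 1 < H:
                add(p, (i + 1, j), gamma); add((i + 1, j), p, gamma)
            if j + 1 < W:
                add(p, (i, j + 1), gamma); add((i, j + 1), p, gamma)
    value, S = max_flow_min_cut(cap, 's', 't')
    return [[0 if (i, j) in S else 1 for j in range(W)] for i in range(H)], value
```

Because exactly one t-link per pixel crosses the cut, the cut cost reproduces the data term, and each severed n-link contributes one Potts penalty.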


3. Expansion Moves Algorithm

The expansion moves algorithm was proposed by Boykov et al. [44] to min-

imize an energy function such as Eq. (10) with non-binary variables by repeatedly

minimizing an energy function with binary variables using the Max-Flow/Min-

Cut method. It is an effective algorithm for minimizing discontinuity-preserving

energy functions. This algorithm can be applied to pair-wise interactions that are

submodular on the space of labels (e.g., Potts function) [45]. The potential function

V (., .) is submodular if:

V (l1, l2) + V (l2, l3) ≥ V (l1, l3) + V (l2, l2), (14)

holds for all labels l_1, l_2, and l_3 ∈ L. A labeling f′ is defined to be an α-expansion move from a labeling f if every pixel either keeps its old label, f′_p = f_p, or switches to a particular label α, f′_p = α. The algorithm then cycles through the labels α in some order and finds the lowest-energy α-expansion move from the current labeling. It terminates when no move for any label yields a lower energy.

The expansion moves algorithm gives a local minimum that lies within a multiplicative factor of the global minimum. This factor depends on the potential function; e.g., for the Potts model the factor is two [44]. The outline of this algorithm is shown in

Algorithm 2.

Algorithm 2 α-Expansion Move Algorithm [8]
1: Start with an arbitrary labeling f
2: Set success := 0
3: For each label α ∈ L (in any order)
     find f̂ = arg min E(f′) among all labelings f′ within one α-expansion of f
     if E(f̂) < E(f), set f := f̂ and success := 1
4: If success = 1, goto 2
5: Return f
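Algorithm 2 can be sketched on a toy grid where each binary α-expansion subproblem is solved by brute force instead of a min-cut; this is feasible only for tiny images, but it makes the move space (every pixel keeps its label or takes α) explicit.

```python
import itertools
import numpy as np

def energy(f, neg_log_lik, gamma):
    """Eq. (10) with a Potts pairwise term on a 4-connected grid."""
    H, W = f.shape
    data = sum(neg_log_lik[f[i, j], i, j] for i in range(H) for j in range(W))
    smooth = gamma * ((f[:, 1:] != f[:, :-1]).sum()
                      + (f[1:, :] != f[:-1, :]).sum())
    return data + smooth

def alpha_expansion(neg_log_lik, gamma):
    """Didactic Algorithm 2: the inner min-cut is replaced by exhaustive
    search over all 2**(H*W) binary expansion moves (toy sizes only)."""
    K, H, W = neg_log_lik.shape
    f = neg_log_lik.argmin(axis=0)            # initialize from the data term
    success = True
    while success:
        success = False
        for alpha in range(K):
            best, best_e = f, energy(f, neg_log_lik, gamma)
            # every alpha-expansion move: keep the old label or take alpha
            for keep in itertools.product([True, False], repeat=H * W):
                g = np.where(np.reshape(keep, (H, W)), f, alpha)
                e = energy(g, neg_log_lik, gamma)
                if e < best_e:
                    best, best_e, success = g, e, True
            f = best
    return f
```

Since the energy strictly decreases with every accepted move, the outer loop is guaranteed to terminate.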


F. Extended Roof Duality

As described in the previous section, for the case of binary pairwise MRF

(i.e., L = {0, 1}), a global minimum can be computed in polynomial time as a

minimum s/t cut if every pairwise term satisfies

V (0, 1) + V (1, 0) ≥ V (0, 0) + V (1, 1). (15)

However, in many vision applications this submodularity condition is not satis-

fied. Roof duality [48] and its extended version, extended roof duality [9], can be

used to minimize non-submodular functions. Roof duality can be considered as

a generalization of the standard graph cut algorithm. For the submodular func-

tions, the two algorithms give the same answer in almost the same time. For

non-submodular functions, roof duality produces part of an optimal solution. The

extended roof duality algorithm outperforms other algorithms in solving many

problems, as demonstrated in [9]. Thus, in this dissertation, the extended roof duality algorithm is used to minimize functions outside the scope of the expansion moves algorithm.

1. Roof Duality

The main idea of this approach is to solve a particular linear programming

relaxation of the energy Eq. (10), where the binary constraints f_p ∈ {0, 1} are replaced with f_p ∈ {0, 1/2, 1} for every site p ∈ P. Usually, the partial labeling is defined with f_p ∈ {0, 1, ∅}, where ∅ means that the node is unlabeled. Similar to the

submodular case in the standard graph cut approach, the problem is reduced to

the computation of a minimum s/t cut in a certain graph. However, the size of

the graph is doubled in the non-submodular case. In addition to the special nodes (the source s and the sink t, which correspond to labels 0 and 1), for each site p ∈ P two nodes p and p̄ are added to V (they correspond to the variable f_p and its complement f̄_p = 1 − f_p). For each nonzero term in the energy Eq. (10), two directed edges


FIGURE 8 – The probing method: (a) the output of roof duality with unlabeled sites; (b) and (c) the outputs of roof duality after fixing site p to 0 and to 1, respectively. One can conclude that f_q is always one, so its optimal label is 1, and that f_r follows f_p. Therefore, sites q and r can be eliminated by letting f*_q = 1 and f*_r = f*_p.

are added to the graph with a weight that is half the term value. For more details

see [48]. Finally, a minimum s/t cut in this graph, which divides the nodes into

two sets (S,T), gives a partial labeling that can be defined as follows:

f_p = \begin{cases}0 & \text{if } p \in S,\ \bar{p} \in T\\ 1 & \text{if } p \in T,\ \bar{p} \in S\\ \varnothing & \text{otherwise}\end{cases}. (16)

2. Probing Method

When the number of non-submodular terms is small, the roof duality works

well. However, in more difficult cases it may leave many nodes unlabeled. Many

extensions have been proposed to enhance this technique. One of these extensions is the

“probing” method introduced in [9], which can be described as follows. Let f be

the output of the roof duality algorithm with node p unlabeled. By fixing p to 0 and then to 1 and running the roof duality algorithm in each case, two partial labelings

f0 and f1 are generated. Define the set U as follows:

U = [dom(f^0) ∩ dom(f^1)] − [dom(f) ∪ {p}],

where dom(f) is the domain of f (the set of labeled nodes). For a global minimum f* and using the roof duality property [54], the following can be drawn:

f*_p = l ⇒ f*_q = f^l_q, ∀ l ∈ {0, 1}, q ∈ U.

Thus, nodes in U can be removed (by fixing or contracting) from the energy with-

out affecting the global minimum. An illustrative example is shown in Fig. 8. Fix-


ing a node to 0 and to 1 may label different sets of nodes (i.e., dom(f^0) ≠ dom(f^1)). In this case, to exploit this information, a pairwise term V(f_p, f_q) is added to the energy, where V(l, 1 − f^l_q) = C_n (with C_n a sufficiently large non-negative constant) and all other terms are zero.

The outline of the probing method is summarized in Algorithm 3.

Algorithm 3 Extended Roof Duality Algorithm (Probing Method) [9]
1: Run the roof duality algorithm for the given energy.
2: Select an unlabeled node p and fix it to 0 and then to 1. Run the roof duality algorithm in each case to get f^0 and f^1. Then compute U.
3: Remove the nodes in U by fixing or contracting.
4: Add directed constraints for all edges (p, q) ∈ E with q ∈ dom(f^0) − dom(f^1) or q ∈ dom(f^1) − dom(f^0).

5: If the energy has changed run the roof duality again and update the unlabeled

nodes.


CHAPTER III

MGRF PARAMETERS ESTIMATION

Fitting an MRF model to an image requires estimating its parameters γq

from a sample of the image. The literature is rich with works that propose dif-

ferent MGRF models, as described in Sec. II.B, which are suitable for a specific

system behavior. Usually, these works identify their models’ parameters using

an optimization technique. This technique tries to maximize either the likelihood

or the entropy of the proposed probability distributions. This chapter proposes an analytical method to estimate the parameter of the homogeneous isotropic Potts model with an asymmetric Gibbs potential function.

• Maximum Likelihood Estimation (MLE) is the most popular estimator used

in estimating the unknown parameters of a distribution (e.g., [29]). Denote by Θ the vector of potential parameters (e.g., for a homogeneous anisotropic

pairwise Potts model Θ = [γ1, γ2, γ3, γ4] and for a homogeneous anisotropic

Potts model with triple cliques Θ = [γ5, γ6, γ7, γ8]). The Gibbs probability

distribution can be represented as a function in Θ as follows:

P(f) = \frac{1}{Z}\exp\big(U(f, \Theta)\big), (17)

and the log-likelihood function is defined by

L(f|\Theta) = \frac{1}{|P|}\log P(f). (18)

Thus, the maximum log-likelihood estimator can be defined by

\Theta^* = \arg\max_{\Theta}\big(U(f, \Theta) - \log Z(\Theta)\big). (19)

Equation (19) can be solved by the differentiation of the log-likelihood. How-

ever, the second term log(Z(Θ)) is intractable. Thus, numerical techniques

are usually used to find a solution for this problem.
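The intractability of log Z(Θ) can be seen concretely: exact evaluation requires summing over all K^{H·W} labelings. The brute-force sketch below does exactly that for a 2 × 2 binary Potts map and grid-searches γ; the grid range and resolution are arbitrary illustrative choices.

```python
import itertools
import numpy as np

def exact_mle_gamma(f, K=2, H=2, W=2):
    """Exact Potts MLE on a toy H x W grid: Z is summed over all K**(H*W)
    labelings, so this is feasible only for tiny sizes -- which is the
    point: for real images, Eq. (19) must be solved approximately."""
    f = np.asarray(f)
    def n_neq(lab):
        lab = np.asarray(lab).reshape(H, W)
        return ((lab[:, 1:] != lab[:, :-1]).sum()
                + (lab[1:, :] != lab[:-1, :]).sum())
    def loglik(gamma):
        # log Z(gamma): the brute-force sum over the whole label space F
        logZ = np.log(sum(np.exp(-gamma * n_neq(g))
                          for g in itertools.product(range(K), repeat=H * W)))
        return -gamma * n_neq(f) - logZ
    gammas = np.linspace(0.0, 3.0, 301)
    return gammas[int(np.argmax([loglik(g) for g in gammas]))]
```

A perfectly smooth map pushes the estimate to the top of the searched range, while a checkerboard map (more discontinuities than an independent field would produce) pushes it to zero.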


FIGURE 9 – Besag's scheme for coding sites: (a) first-order model and (b) second-order model.

A. Related Works

In this section, some popular methods used to estimate the parameters of MGRF models are discussed.

1. Coding Estimation

The coding method was proposed by Besag [28]. In this method, the image

grid is partitioned into coding patterns. The codings are chosen such that a pixel

and its neighbors cannot be members of the same coding pattern. This implies that the distribution of the pixel values within one coding pattern is independent of the pixel values of the other coding patterns. In order to get an efficient estimator,

the number of coding patterns should be as low as possible. Thus, the efficient coding of a first-order MGRF consists of the two checkerboard patterns shown in Fig. 9(a), and that of a second-order MGRF consists of the four patterns shown in Fig. 9(b). All pixels coded j are used for the jth set of parameter estimates, j = 1, 2, 3, 4. Using

this coding and the MRF properties, the labels of the sites within each coding are conditionally independent:

P(f_p, f_q \mid f_{N_p}, f_{N_q}) = P(f_p \mid f_{N_p})\,P(f_q \mid f_{N_q}).


The coding method estimates the vector parameters Θ by finding the vector Θj

which maximizes the log-likelihood in coding j

L_j(\Theta) = \sum_{p \in P_j}\log\frac{\exp(-U(f, \Theta))}{\sum_{l \in L}\exp(-U(l, \Theta))}, (20)

where Pj is the set of pixels that have the code j. After optimizing Lj(Θ), the

estimated vector for the second order model is defined as follows

\hat{\Theta} = \frac{1}{4}\sum_{j=1}^{4}\Theta_j. (21)
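The coding patterns themselves are easy to generate as boolean masks; the parity-based construction below is one standard way to realize the patterns of Fig. 9 (the function name and interface are illustrative).

```python
import numpy as np

def coding_masks(H, W, order=1):
    """Besag's coding patterns as boolean masks over an H x W grid, chosen
    so that no two sites within one pattern are neighbors. order=1 gives
    the 2-pattern checkerboard; order=2 (8-neighborhood) needs 4 patterns."""
    i, j = np.mgrid[0:H, 0:W]
    if order == 1:
        codes = (i + j) % 2              # checkerboard parity
    else:
        codes = 2 * (i % 2) + (j % 2)    # row parity x column parity
    return [codes == c for c in range(int(codes.max()) + 1)]
```

Within each second-order mask, any two member sites differ by an even offset in both coordinates, so they can never be 8-neighbors, which is exactly the conditional-independence requirement above.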

2. Least Square Error method (LSQR)

This method was proposed by Derin and Elliott [31]; the corresponding model is described in Sec. II.B.1. They established different 3 × 3 label blocks of pixels. For a pixel p with a label f_p and the 8-neighborhood N_p, the block is (f_p, f_{N_p}). Each different 3 × 3 block of labels establishes a block type. Define l_1, l_2 as labels of a particular pixel p with a neighborhood N_p. One can formulate the following equation:

\sum_{q \in N_p}\big(V(l_1, f_q) - V(l_2, f_q)\big) = \log\frac{P(l_2 \mid f_{N_p}) + \varepsilon}{P(l_1 \mid f_{N_p}) + \varepsilon}, (22)

where ε is a small number (e.g., 1/512) used to avoid zero probabilities. The ratio P(l_2 | f_{N_p}) / P(l_1 | f_{N_p}) is estimated by counting the number of blocks of type (l_2, f_{N_p}) and dividing by

the number of blocks of type (l1, fNp). A second-order binary MGRF has 256 such

equations. In order to estimate the model parameters using least square methods,

one needs to solve this overdetermined system of linear equations.

3. Parameter Estimation Using Co-occurrence Probability

Cremers and Grady [33] computed the Gibbs energy U(f) from the his-

tograms of joint co-occurrence of label pairs (or triplets). They assumed that the


co-occurrence probability for any two variables (or three variables) does not de-

pend on other variables. Under this assumption they simplified the Gibbs energy

in the pairwise case to the form

U(f) = -\frac{1}{\Gamma}\sum_{p \neq q \in P}\log P(f_p, f_q), (23)

where the constant \Gamma = \binom{n}{2} denotes the number of ways to generate such pairings divided by the number of times each pair appears in the overall product. Then

the potential parameters \gamma^{l_1 l_2}_{pq} are related to the probability of co-occurrence of labels l_1 and l_2 as follows:

\gamma^{l_1 l_2}_{pq} = -\log P(f_p = l_1 \cap f_q = l_2). (24)
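In practice, Eq. (24) amounts to a normalized co-occurrence histogram followed by a negative logarithm. The sketch below does this for 4-connected pairs; the symmetric counting of unordered pairs is an illustrative choice.

```python
import numpy as np

def cooccurrence_potentials(f, K):
    """Eq. (24) sketch: gamma[l1, l2] = -log P(fp = l1, fq = l2), with the
    joint probability estimated from the normalized histogram of
    horizontally and vertically adjacent label pairs."""
    f = np.asarray(f)
    pairs = np.concatenate([
        np.stack([f[:, :-1].ravel(), f[:, 1:].ravel()], axis=1),
        np.stack([f[:-1, :].ravel(), f[1:, :].ravel()], axis=1)])
    hist = np.zeros((K, K))
    for a, b in pairs:
        hist[a, b] += 1
        hist[b, a] += 1            # count unordered pairs symmetrically
    p = hist / hist.sum()
    with np.errstate(divide='ignore'):   # unseen pairs get infinite cost
        return -np.log(p)
```

Label pairs that never co-occur in the training map receive an infinite potential, which is the histogram analogue of a hard exclusion constraint.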

4. Analytical method for Potts model

Farag et al. [1] proposed an analytical approach to estimate the parameter of a homogeneous isotropic MGRF Potts model. They defined the potential function of the Potts model governing symmetric pairwise co-occurrences of the region labels as V(l_1, l_2) = γ if l_1 = l_2 and V(l_1, l_2) = −γ if l_1 ≠ l_2, for l_1, l_2 ∈ L. To identify the homogeneous isotropic Potts model that describes the label image f, they need

to estimate only the potential value γ. This parameter was obtained analytically

using the Maximum Likelihood Estimator (MLE) for a generic MGRF [2]. Hence,

the potential interaction is given by the following equation:

\gamma = \frac{K^2}{2(K-1)}\left(F_{eq}(f) - \frac{1}{K}\right), (25)

where F_{eq}(\cdot) denotes the relative frequency of equal labels in the pixel pairs.

5. Others

Many different approaches have been proposed to estimate the MGRF parameters. To estimate the unconditional probability distribution P(f), Veksler [8] discussed different types of potential functions V(·, ·); see Fig. 10. In all these forms, the po-


FIGURE 10 – Different types of potential function

tential parameter was set by hand. Boykov and Funka-Lea [5] estimated the potential parameters of the Potts model using a simple function that decreases with the gray-level difference between the two pixels and with their distance, as follows:

\gamma_{p,q} \propto \exp\left(-\frac{(I_p - I_q)^2}{2\sigma_c^2}\right)\cdot\frac{1}{\mathrm{dist}(p, q)}, (26)

where σ_c is estimated as the camera noise. Many other works [4, 55, 56] used the same criterion. Usually, the potential parameter of the Potts model is chosen based on the local intensity gradient, Laplacian zero-crossing, gradient direction, geometric, or other criteria. However, these models depend on parameters which must be set by hand.
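Equation (26) is straightforward to compute for the unit-distance horizontal and vertical neighbors of a 4-connected grid; in this sketch σ_c is assumed given rather than estimated from camera noise, and the proportionality constant is taken as 1.

```python
import numpy as np

def contrast_weights(img, sigma_c):
    """Contrast-sensitive n-link weights in the spirit of Eq. (26), with
    dist(p, q) = 1 on a 4-connected grid:
    gamma_pq = exp(-(Ip - Iq)^2 / (2 sigma_c^2))."""
    img = np.asarray(img, dtype=float)
    wh = np.exp(-(img[:, 1:] - img[:, :-1]) ** 2 / (2 * sigma_c ** 2))
    wv = np.exp(-(img[1:, :] - img[:-1, :]) ** 2 / (2 * sigma_c ** 2))
    return wh, wv   # horizontal and vertical neighbor weights
```

Large intensity differences drive the weight toward zero, making it cheap for a cut to separate pixels across strong edges.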

B. The Proposed Approach For Parameter Estimation

Unlike common computer vision studies, this work adopts the pairwise and

triple homogeneous isotropic MGRF model as the image model with a Potts prior.

Similar to Farag et al. [1], the parameter of this model is analytically estimated.

However, this work focuses on asymmetric pairwise co-occurrences of the region

labels. The asymmetric Potts model is chosen to increase the chances that the Gibbs energy function is submodular, so it can be minimized using

a standard graph cuts approach in polynomial time. In this case, the Gibbs po-

tential governing asymmetric pairwise co-occurrences of the region labels can be

described as follows:

V(f_p, f_q) = \begin{cases}0 & \text{if } f_p = f_q,\\ \gamma & \text{otherwise.}\end{cases} (27)


Then the MGRF model of region maps is specified by the following Gibbs proba-

bility distribution:

P(f) = \frac{1}{Z}\exp\left(-\sum_{\{p,q\} \in N} V(f_p, f_q)\right) = \frac{1}{Z}\exp\big(-\gamma|T|F_{neq}(f)\big). (28)

Here, T = \{\{p, q\} : p, q \in P;\ \{p, q\} \in N\} is the family of the neighboring pixel pairs supporting the Gibbs potentials, |T| is the cardinality of that family, and F_{neq}(f) denotes the relative frequency of unequal labels in the pixel pairs of that family:

F_{neq}(f) = \frac{1}{|T|}\sum_{\{p,q\} \in T}\delta(f_p \neq f_q), (29)

where the indicator function δ(A) equals 1 when the condition A is true, and zero otherwise. To completely identify the Potts model that describes the label image f, the potential value γ has to be estimated.

1. Pairwise Clique Potential Estimation

To estimate the model parameter γ that specifies the Gibbs potential, the

MGRF model is identified using a reasonably close first approximation of the max-

imum likelihood estimation of γ. It is derived in accordance with [2] from the

log-likelihood

L(f|\gamma) = \frac{1}{|P|}\log P(f). (30)

Using Eq. (28), the partition function Z can be written as follows:

Z = \sum_{f \in F}\exp\big(-\gamma|T|F_{neq}(f)\big). (31)

Then the log-likelihood of Eq. (30) can be rewritten as follows:

L(f|\gamma) = -\gamma\rho F_{neq}(f) - \frac{1}{|P|}\log\sum_{f' \in F}\exp\big(-\gamma|T|F_{neq}(f')\big), (32)


where \rho = |T|/|P|. The approximation is obtained by truncating the Taylor series expansion of L(f|\gamma) to the first three terms in the close vicinity of the zero potential, \gamma = 0:

L(f|\gamma) \approx L(f|0) + \gamma\left.\frac{dL(f|\gamma)}{d\gamma}\right|_{\gamma=0} + \frac{1}{2}\gamma^2\left.\frac{d^2L(f|\gamma)}{d\gamma^2}\right|_{\gamma=0}. (33)

The first derivative of the log-likelihood Eq. (32) is given by

\frac{dL(f|\gamma)}{d\gamma} = -\rho F_{neq}(f) + \rho\,\frac{\sum_{f' \in F} F_{neq}(f')\exp(-\gamma|T|F_{neq}(f'))}{\sum_{f' \in F}\exp(-\gamma|T|F_{neq}(f'))} = -\rho F_{neq}(f) + \rho\,E\{F_{neq}(f) \mid \gamma\}, (34)

where E\{\cdot\} denotes the mathematical expectation. By replacing F_{neq}(\cdot) with 1 - F_{eq}(\cdot), the first derivative becomes:

\frac{dL(f|\gamma)}{d\gamma} = -\rho\big(1 - F_{eq}(f)\big) + \rho\,E\{(1 - F_{eq}(f)) \mid \gamma\} = \rho\big(F_{eq}(f) - E\{F_{eq}(f) \mid \gamma\}\big). (35)

If γ = 0, this MGRF becomes the Independent Random Field (IRF) of K equiprobable labels. Every label has the same probability 1/K, and the expectation can be computed as follows:

E\{F_{eq}(f) \mid 0\} = \frac{1}{|T|}\sum_{\{p,q\} \in T} E\{\delta(f_p = f_q)\} = \frac{1}{|T|}\,|T|\,E\{\delta(f_p = f_q)\} = \frac{1}{K}. (36)

Thus, in the vicinity of the origin γ = 0, the first derivative of the log-likelihood is equal to

\left.\frac{dL(f|\gamma)}{d\gamma}\right|_{\gamma=0} = \rho\left(F_{eq}(f) - \frac{1}{K}\right). (37)

The second derivative of the log-likelihood is given by

\frac{d^2L(f|\gamma)}{d\gamma^2} = -\rho^2|P|\left[\frac{\sum_{f' \in F} F_{neq}^2(f')\exp(-\gamma|T|F_{neq}(f'))}{\sum_{f' \in F}\exp(-\gamma|T|F_{neq}(f'))} - \left(\frac{\sum_{f' \in F} F_{neq}(f')\exp(-\gamma|T|F_{neq}(f'))}{\sum_{f' \in F}\exp(-\gamma|T|F_{neq}(f'))}\right)^2\right].


In a similar way, in the vicinity of the origin γ = 0, the second derivative of the log-likelihood is equal to
\[
\begin{aligned}
\frac{d^2L(f \mid \gamma)}{d\gamma^2} &= -\rho^2|P|\,\Big(E\{F_{\mathrm{neq}}^2(f) \mid \gamma\} - E^2\{F_{\mathrm{neq}}(f) \mid \gamma\}\Big) \\
&= -\rho^2|P|\,\mathrm{var}\{F_{\mathrm{neq}}(f) \mid \gamma\} = -\rho^2|P|\,\mathrm{var}\{(1 - F_{\mathrm{eq}}(f)) \mid \gamma\} \\
&= -\rho^2|P|\,\mathrm{var}\{F_{\mathrm{eq}}(f) \mid \gamma\}.
\end{aligned} \tag{38}
\]

For the IRF the frequency variance can be estimated as follows:
\[
\begin{aligned}
\mathrm{var}\{F_{\mathrm{eq}}(f) \mid 0\} &= E\{F_{\mathrm{eq}}^2(f) \mid 0\} - E^2\{F_{\mathrm{eq}}(f) \mid 0\} \\
&= E\left\{\left(\frac{1}{|T|}\sum_{\{p,q\} \in T}\delta(f_p = f_q)\right)^{\!2}\right\} - \frac{1}{K^2} \\
&= \frac{1}{|T|^2}\,E\left\{\sum_{\{p,q\} \in T}\delta(f_p = f_q) + \sum_{\{p,q\} \in T}\delta(f_p = f_q)\!\!\sum_{\substack{\{i,j\} \in T \\ i \neq p,\, j \neq q}}\!\!\delta(f_i = f_j)\right\} - \frac{1}{K^2} \\
&= \frac{1}{|T|^2}\left(|T|\,\frac{1}{K} + |T|(|T|-1)\,\frac{1}{K^2}\right) - \frac{1}{K^2} \;=\; \frac{1}{\rho|P|}\,\frac{K-1}{K^2}.
\end{aligned} \tag{39}
\]
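Equation (39) effectively treats the |T| equality indicators δ(f_p = f_q) as independent Bernoulli(1/K) variables, so the variance of their mean is (K − 1)/(|T| K²). The following sketch (illustrative only; it simulates independent indicators rather than an actual label field, and all names are mine) checks the formula numerically:

```python
import numpy as np

def irf_feq_variance(T: int, K: int) -> float:
    """Eq. (39): variance of F_eq over IRF realizations, (K - 1) / (|T| K^2)."""
    return (K - 1) / (T * K**2)

def empirical_feq_variance(T: int, K: int, n_runs: int = 20000, seed: int = 1) -> float:
    """Estimate var{F_eq} by simulating |T| independent equality indicators,
    each true with probability 1/K, and taking the variance of their mean."""
    rng = np.random.default_rng(seed)
    indicators = rng.random((n_runs, T)) < 1.0 / K
    return indicators.mean(axis=1).var()
```

For |T| = 100 and K = 4, the analytic value is 3/1600 ≈ 0.0019, and the Monte Carlo estimate agrees to within a few percent.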

Thus in the vicinity of the origin, the second derivative of the log-likelihood is equal to
\[ \left.\frac{d^2L(f \mid \gamma)}{d\gamma^2}\right|_{\gamma=0} \;=\; -\rho^2|P|\,\mathrm{var}\{F_{\mathrm{eq}}(f) \mid 0\} \;=\; -\rho\,\frac{K-1}{K^2}. \tag{40} \]
Finally, the approximated log-likelihood Eq. (33) becomes
\[ L(f \mid \gamma) \;\approx\; -\log K \;+\; \rho\gamma\Big(F_{\mathrm{eq}}(f) - \frac{1}{K}\Big) \;-\; \frac{1}{2}\gamma^2\rho\,\frac{K-1}{K^2}. \tag{41} \]

For the approximate log-likelihood of Eq. (41), setting dL(f|γ)/dγ = 0 results in the following approximate MLE of γ:
\[
\gamma^{*} \;=\; \frac{K^2}{K-1}\Big(F_{\mathrm{eq}}(f) - \frac{1}{K}\Big) \;=\; \frac{K^2}{K-1}\Big(1 - F_{\mathrm{neq}}(f) - \frac{1}{K}\Big) \;=\; \frac{K^2}{K-1}\Big(\frac{K-1}{K} - F_{\mathrm{neq}}(f)\Big). \tag{42}
\]
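In practice, Eq. (42) reduces to counting equal-label neighbour pairs. Below is a minimal NumPy sketch (an illustrative reimplementation, not the dissertation's code) that computes γ* for a 2-D label image under a 4-neighbourhood pairwise clique family:

```python
import numpy as np

def estimate_gamma(labels: np.ndarray, K: int) -> float:
    """Approximate MLE of the Potts parameter gamma, Eq. (42).

    F_eq is the relative frequency of equal-label pairs over the
    4-neighbourhood clique family T; gamma* = K^2/(K-1) * (F_eq - 1/K).
    """
    # Equal-label indicators for horizontal and vertical neighbour pairs.
    eq_h = labels[:, :-1] == labels[:, 1:]
    eq_v = labels[:-1, :] == labels[1:, :]
    f_eq = (eq_h.sum() + eq_v.sum()) / (eq_h.size + eq_v.size)
    return K**2 / (K - 1) * (f_eq - 1.0 / K)
```

A constant image gives F_eq = 1 and hence γ* = K, while an i.i.d. random labeling gives F_eq ≈ 1/K and hence γ* ≈ 0, consistent with Eq. (42).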


2. Triple Clique Potential Estimation

The Gibbs potential governing asymmetric triple co-occurrences of the region labels can be described as follows:
\[ V(f_p, f_q, f_r) \;=\; \gamma\big(1 - \delta(f_p = f_q = f_r)\big). \tag{43} \]
Following the same method used for the pairwise potentials, one can prove that the potentials of the third-order cliques have the same analytical form of Eq. (42), but with the frequency
\[ F_{\mathrm{neq}}(f) \;=\; \frac{1}{|T|}\sum_{\{p,q,r\} \in T}\big(1 - \delta(f_p = f_q = f_r)\big), \tag{44} \]
where T = {{p, q, r} : p, q, r ∈ P; {p, q, r} ∈ N} is the family of the neighboring pixel triples supporting the Gibbs potentials.
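The triple-clique frequency of Eq. (44) can be computed by the same counting approach; the sketch below is illustrative only and assumes horizontally adjacent triples:

```python
import numpy as np

def f_neq_triple(labels: np.ndarray) -> float:
    """Frequency of 'not all equal' horizontal pixel triples, Eq. (44).

    Each triple consists of three horizontally adjacent pixels; the
    indicator 1 - delta(f_p = f_q = f_r) is averaged over all triples.
    """
    a = labels[:, :-2]   # left pixel of each triple
    b = labels[:, 1:-1]  # middle pixel
    c = labels[:, 2:]    # right pixel
    all_eq = (a == b) & (b == c)
    return 1.0 - all_eq.mean()
```

A constant image yields F_neq = 0, while vertical stripes of alternating labels yield F_neq = 1, the two extremes of Eq. (44).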

C. Experiments

The robustness of the proposed method for estimating the Gibbs potentials of the Potts model is tested by applying it to simulated texture images with known potential values. The simulated texture images are generated using the Gibbs sampler approach [10], which is explained in Algorithm 4. The idea of the synthesis process is to find the configuration f in F that maximizes the probability P(f). The advantage of Algorithm 4 is that it eliminates the need for computing the partition function.

To assess the robustness of the proposed approach, many experiments are conducted. In the first experiment, four different binary realizations of the homogeneous isotropic Potts model are generated. Samples of these realizations for images of size 128 × 128 are shown in Fig. 11. To get accurate statistics, 100 realizations are generated from each type. The proposed method is used to estimate the model parameter γ for these data sets. The means and the variances (written between parentheses) of the 100 realizations for each type are shown in Table (1).


Algorithm 4 Gibbs Sampler Algorithm [10]

1: Start with any random labeling f
2: for all p ∈ P do
3:   Choose l ∈ L at random and let f′p = l, and f′q = fq for all q ≠ p
4:   Let P = min{1, P(F = f′)/P(F = f)}
5:   Replace f by f′ with probability P
6: end for
7: Repeat steps 2-6 Niter times
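Note that the acceptance ratio P(F = f′)/P(F = f) in step 4 depends only on the neighbours of the changed pixel, since P(f) ∝ exp(−γ|T|F_neq(f)). The following Python sketch (an illustrative reimplementation under a 4-neighbourhood; names are mine, not from the dissertation) samples an isotropic Potts realization this way:

```python
import numpy as np

def potts_sampler(shape, K, gamma, n_iter, seed=0):
    """Metropolis-style sampler for the isotropic Potts model (Algorithm 4).

    P(f) is proportional to exp(-gamma * #unequal neighbour pairs); the
    acceptance ratio only needs the local change in unequal pairs, so the
    partition function Z never has to be computed.
    """
    rng = np.random.default_rng(seed)
    f = rng.integers(0, K, size=shape)
    H, W = shape
    for _ in range(n_iter):
        for p in range(H * W):
            i, j = divmod(p, W)
            new = rng.integers(0, K)
            old = f[i, j]
            d_neq = 0  # change in number of unequal neighbour pairs
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    d_neq += int(new != f[ni, nj]) - int(old != f[ni, nj])
            if rng.random() < min(1.0, np.exp(-gamma * d_neq)):
                f[i, j] = new
    return f
```

With a large γ the sampled image develops large homogeneous regions (high F_eq), matching the visual trend across the panels of Fig. 11.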

FIGURE 11 – Samples of synthesized binary images of size 128 × 128: (a) γ = 0.1, (b) γ = 0.75, (c) γ = 1.0, (d) γ = 1.75

TABLE 1
ACCURACY OF THE PROPOSED PARAMETER ESTIMATION METHOD FOR BINARY IMAGES OF SIZE 128 × 128

Actual parameter γ   0.1            0.75           1.0            1.75
Proposed γ∗          0.12 (0.009)   0.77 (0.014)   1.04 (0.013)   1.78 (0.013)
CM                   0.94 (0.003)   1.02 (0.0)     1.09 (0.011)   1.79 (0.066)
LSQR                 0.11 (0.016)   0.64 (0.041)   0.85 (0.054)   1.79 (0.091)


FIGURE 12 – Samples of synthesized images of size 64 × 64: (a) γ = 0.5, (b) γ = 5, (c) γ = 10, (d) γ = 25

TABLE 2
ACCURACY OF THE PROPOSED PARAMETER ESTIMATION METHOD FOR IMAGES OF SIZE 64 × 64

Actual parameter γ   0.5            5             10            25
Proposed γ∗          0.5 (0.056)    5.3 (0.11)    9.9 (0.15)    25.1 (0.21)
CM                   0.43 (0.032)   12.3 (0.1)    13.2 (0.11)   23.9 (2.0)

In the second experiment, four different realizations of the Potts model are generated with 32 colors (i.e., K = 32). Samples of these realizations for images of size 64 × 64 are shown in Fig. 12. Also, 100 realizations are generated from each type. The proposed method is used to estimate the model parameter γ for these data sets. The means and the variances of the 100 realizations for each type are shown in Table (2). Figures 13, 14, and 15 show more samples of the four realizations for different image sizes. The estimation results are shown in Tables (3), (4), and (5), respectively. For comparison, the estimates of the Coding Method (CM) [28] and the Least Square Error method (LSQR) [31] are also shown. CM allows for an easy formulation of the estimator for the auto-binomial model. However, it is generally considered difficult to use reliably [31]. Picard in [57] confirmed that CM's performance varies widely for different data, which means CM works well for some images (e.g., Figures 11-15 c and d), but poorly for others (e.g., Figures 11-15 b). Also, Picard in [57] mentioned that CM's estimates sometimes need


FIGURE 13 – Samples of synthesized images of size 128 × 128: (a) γ = 0.5, (b) γ = 5, (c) γ = 10, (d) γ = 25

TABLE 3
ACCURACY OF THE PROPOSED PARAMETER ESTIMATION METHOD FOR IMAGES OF SIZE 128 × 128

Actual parameter γ   0.5            5             10            25
Proposed γ∗          0.51 (0.029)   5.4 (0.05)    10.1 (0.06)   25.7 (0.11)
CM                   0.43 (0.017)   12.3 (0.07)   13.1 (0.07)   22.7 (0.83)


FIGURE 14 – Samples of synthesized images of size 256 × 256: (a) γ = 0.5, (b) γ = 5, (c) γ = 10, (d) γ = 25

TABLE 4
ACCURACY OF THE PROPOSED PARAMETER ESTIMATION METHOD FOR IMAGES OF SIZE 256 × 256

Actual parameter γ   0.5            5             10            25
Proposed γ∗          0.51 (0.016)   5.5 (0.02)    10.2 (0.03)   25.7 (0.05)
CM                   0.43 (0.01)    12.3 (0.03)   13.1 (0.04)   22.34 (0.36)

to be adjusted upward. In this experiment, CM estimates of the Potts model parameters with 32 colors are adjusted by 10. Another disadvantage of CM is that its optimization technique requires an initial guess for the solution and convergence conditions, and may run into local optima without reaching the proper solution [11], as shown in the cases of the binary realizations for γ = 0.1 and 0.75, and the 32-color realization for γ = 5. Derin and Elliott, in their LSQR implementation, claimed accurate estimation by using the most frequently occurring block types; however, there is no consistent way to establish these types of blocks [11]. Also, as described in Sec. III.A.2, for binary realizations (e.g., Figure 11), to estimate the


FIGURE 15 – Samples of synthesized images of size 512 × 512: (a) γ = 0.5, (b) γ = 5, (c) γ = 10, (d) γ = 25

TABLE 5
ACCURACY OF THE PROPOSED PARAMETER ESTIMATION METHOD FOR IMAGES OF SIZE 512 × 512

Actual parameter γ   0.5            5             10             25
Proposed γ∗          0.51 (0.008)   5.5 (0.01)    10.3 (0.01)    25.9 (0.03)
CM                   0.43 (0.004)   12.3 (0.01)   13.03 (0.02)   22.2 (0.17)


model parameters using LSQR, one needs to solve an overdetermined system of linear equations with up to 2^8 = 256 equations. This is not practical in the case of realizations with 32 colors, where the overdetermined system has up to 32^8 equations.

The proposed algorithm is also tested on anisotropic Potts models. Four different Potts models with parameters [γ1 = 25, γ2 = γ3 = γ4 = 0], [γ2 = 25, γ1 = γ3 = γ4 = 0], [γ3 = 25, γ1 = γ2 = γ4 = 0], and [γ4 = 25, γ1 = γ2 = γ3 = 0], each with 32 colors, are generated. Samples of these realizations for images of size 128 × 128 are shown in Fig. 16. Also, 100 realizations are generated from each type, and the proposed method is used to estimate the model parameters for these data sets. The means and the variances of the 100 realizations for each type are shown in the figure.

In the last experiment, two different realizations of the Potts model with triple cliques are synthesized; samples are shown in Fig. 17. The means and the variances of the estimated parameters of 100 samples from each type are also shown in Fig. 17.

D. Conclusions

This chapter proposed an analytical method to estimate the parameters of the homogeneous isotropic Potts model with an asymmetric Gibbs potential function. The experiments showed that the proposed analytical estimates of the MGRF parameters outperform the classical methods (e.g., CM and LSQR). Also, the proposed approach was tested on an anisotropic model and performed well. The statistical results highlighted the robustness of the proposed analytical estimation approach over the conventional methods. As discussed in detail in the following chapters, this accurate identification of the MGRF model yields promising results in the segmentation problem.


FIGURE 16 – Results of the proposed method for estimating anisotropic Potts model parameters for images of size 128 × 128: (a) [25.6 (0.05), 0.003 (0.08), 0.001 (0.08), 0.002 (0.08)], (b) [0.003 (0.09), 25.6 (0.05), 0.004 (0.09), 0.001 (0.09)], (c) [0.001 (0.08), 0.001 (0.09), 0.005 (0.09), 25.9 (0.05)], and (d) [0.01 (0.09), 0.01 (0.09), 25.9 (0.05), 0.01 (0.09)]


FIGURE 17 – Samples of synthesized images with 32 colors and high-order cliques: (a) sample of realizations generated with γ = 5, estimated γ∗ = 5.02 (0.09); (b) sample of realizations generated with γ = 10, estimated γ∗ = 9.98 (0.12)


CHAPTER IV

A NOVEL UNSUPERVISED GRAPH CUTS APPROACH FOR N-D MULTIMODAL IMAGE LABELING

This chapter proposes a new unsupervised MAP-based labeling (image segmentation and image restoration) framework for N-D multimodal gray scale images. As described in Sec. II.C.1, the input image and its desired map (labeled image) are described by a joint Markov-Gibbs random field model of independent image signals and interdependent region labels. However, the main focus of the proposed approach is on more accurate identification of the MGRF model and the gray levels distribution model. The parameter of the MGRF model is analytically estimated as described in Sec. III.B. In this chapter, Sec. IV.B introduces an accurate model of the gray levels distribution, where the gray levels distribution of the given image is approximated by a Linear Combination of Gaussians (LCG). In order to make the approach unsupervised, Sec. IV.B.1 proposes a new technique based on maximizing a new joint likelihood function to estimate the number of classes in the given image. An initial labeling (pre-labeled image) is generated using the LCG model. Then the α-expansion move Algorithm 2 iteratively refines the initial labeled image by using the MGRF with the analytically estimated potential. Experimental results show that the developed technique gives accurate results compared to other known algorithms.

A. Introduction

Image labeling (segmentation and restoration) is one of the most important low-level computer vision tasks. This chapter addresses the problem of accurate unsupervised labeling of multimodal gray scale images, where each region of interest relates to a single dominant mode (or peak) of the empirical marginal probability distribution of gray levels. The goal of the proposed algorithm is to extract the major regions (e.g., classes, patches, objects) of the given multimodal image while ignoring the small intra-region variations; this is known as image labeling.

Recently, energy-based algorithms have appeared as robust image labeling approaches. Similarly, the proposed approach uses the graph cuts technique to minimize the energy function discussed in Sec. II.C.1:
\[ E(f) \;=\; \sum_{\{p,q\} \in N} V(f_p, f_q) \;+\; \sum_{p \in P} D(f_p). \tag{45} \]
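Evaluating Eq. (45) for a given labeling only requires the per-pixel data costs and a count of unequal neighbour pairs. A minimal NumPy sketch (illustrative only; it assumes a 2-D image, a 4-neighbourhood, and a Potts pairwise term V = γ whenever the two labels differ):

```python
import numpy as np

def labeling_energy(labels, data_cost, gamma):
    """Energy of Eq. (45): Potts pairwise term plus per-pixel data term.

    data_cost[i, j, l] is D_p(l) for pixel (i, j); the pairwise term
    charges gamma for every unequal 4-neighbour pair.
    """
    H, W = labels.shape
    d = data_cost[np.arange(H)[:, None], np.arange(W)[None, :], labels].sum()
    v = gamma * ((labels[:, :-1] != labels[:, 1:]).sum()
                 + (labels[:-1, :] != labels[1:, :]).sum())
    return d + v
```

This is the quantity that the graph cuts optimization described later in this chapter drives down over the label space.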

The literature is rich with image labeling techniques. However, only those based on energy optimization are discussed here. Greig et al. [58] discovered the power of graph cuts algorithms from combinatorial optimization, and

showed that graph cuts can be used for binary image restoration. The problem

was formulated as MAP estimation of an MRF. Shi and Malik [6] proposed the normalized cut criterion, an unbiased measure of both the total dissimilarity between

the different image regions as well as the total similarity within the image regions,

for graph partitioning. To compute the minimum cut, which corresponds to op-

timum segmentation, they solved an eigenvalue system. Boykov and Jolly [55]

proposed a framework that uses s/t graph cuts to get a globally optimal object

extraction method for N-dimensional images. They minimized a cost function

which combines region and boundary properties of segments as well as topolog-

ical constraints. That work illustrated the effectiveness of formulating the object

segmentation problem via graph cuts. Since Boykov and Jolly introduced their

graph cuts segmentation technique in their paper [55], it became one of the lead-

ing approaches in interactive N-D image segmentations, and many publications

extended this work in different directions. Blake et al. [59] used a mixture of the

Markov-Gibbs random field (MGRF) to approximate the regional properties of seg-

ments and the spatial interaction between segments. Geo-cuts [60] combines ge-

ometric cues with energy function. GrabCut [61] reduces the human interaction


by using the iterative graph cut approach. Obj-cuts [62] combines the object de-

tection with the segmentation, and incorporates the global shape priors in MRF.

To overcome the time complexity and memory overhead of Boykov and Jolly’s ap-

proach for high resolution data, Lombaert et al. [63] performed graph cuts on a

low-resolution image/volume and propagated the solution to the next higher res-

olution level by only computing the graph cuts at that level in a narrow band sur-

rounding the projected foreground/background interface. Instead of minimizing

the energy function Eq. (45) using Max-flow/Min-cut method, Keuchel [64] solved

the multiclass image labeling problem using a semidefinite relaxation technique.

This technique allows a less restrictive energy form and makes it possible to impose the shape concept into the energy function. However, it increases the computational time dramatically.

Although interactive segmentation imposes some useful topological constraints, it depends on the user inputs, which highly affect the labeling results.

Unlike previous graph cuts based segmentation and restoration techniques, in the

proposed approach, no user interaction is needed; instead, the image is initially

pre-labeled using its gray levels. Indeed, to model the low level information in the

given image, the gray levels distribution of this image is precisely approximated

with a linear combination of Gaussian distributions with positive and negative

components. One of the contributions of this work is that the number of dominant

modes in the LCG model (number of classes in the given multimodal image) is

determined by using a new technique based on maximizing a new joint likelihood

function. To overcome the intra-region variations, the proposed approach does not

depend only on the image gray levels but it uses the graph cuts approach to com-

bine the image gray levels information and the spatial relationships between the

region labels. As explained in Sec. III.A.5, the potentials of Potts model, which de-

scribe the spatial pairwise interaction between two neighboring pixels, are usually

estimated using simple functions that are proportional to the gray levels difference

between the two pixels and inversely proportional to their distance. Unlike these


conventional techniques, in this dissertation the potentials of Potts model are es-

timated using a new analytical approach which is presented in Sec. III.B. After

the image is initially labeled, the energy function Eq. (45) is formulated using both

image appearance models (LCG and MGRF models). This function is minimized

using a multi-way graph cuts Algorithm 2, described in Sec. II.E.3, to get the final

and optimal segmentation of the input image.

B. The Conditional Image Model

As discussed in Sec. II.C.1, to solve the labeling problem one needs to estimate the unconditional P(f) and the conditional P(I|f) image models. The former is completely identified by estimating the parameter of the MGRF as presented in Sec. III.B. The latter is discussed in this section.

Many works have been presented in the computer vision field to identify this model. Some of these related works are reviewed in this section. To restore the original image from a noisy version, Olga [8] estimated the conditional distribution of the noisy image given the map as follows:
\[ P(I_p \mid f_p) \;=\; A_p \cdot \exp\big(-D_p(f_p)\big), \tag{46} \]

where Ap is a normalizing constant, and Dp(fp) = (Ip − fp)². In her work, she assumed that the number and the values of the labels are known. To segment an object from its background, in the works by Boykov et al. [4, 5, 55, 56], the user manually selects some seeds, as shown in Fig. 18. They used the intensity of these seeds to estimate the conditional distributions of the object and the background. Blake et al. [59] made the user draw a fat pen trail enclosing the object boundary; thus, the image is classified into object, background, and an unknown region. They use this information to estimate the conditional distribution using a Gaussian mixture Markov random field model. Although user interaction imposes some useful topological constraints, it depends on the user inputs, which highly affect the labeling results, as shown in Fig. 19. Unlike these techniques, in this work


FIGURE 18 – User seeds for a kidney slice from CE-MR angiography. Courtesy of Boykov and Jolly [4]

FIGURE 19 – The effect of the user input on the final labeled image. Courtesy of Boykov and Funka-Lea [5]

the conditional distribution is estimated from the given multimodal image data itself, i.e., from its intensity distribution.

To accurately estimate this conditional distribution P(I|f), the gray levels marginal density of each class is approximated using an LCG with Cp,l positive and Cn,l negative components as follows:
\[ P(I_p \mid f_p) \;=\; P(g \mid l) \;=\; \sum_{r=1}^{C_{p,l}} w_{p,r,l}\,\varphi(g \mid \theta_{p,r,l}) \;-\; \sum_{s=1}^{C_{n,l}} w_{n,s,l}\,\varphi(g \mid \theta_{n,s,l}), \tag{47} \]
where φ(g|θ) is a Gaussian density with parameter θ (mean µ and variance σ²), wp,r,l denotes the rth positive weight in class l, and wn,s,l denotes the sth negative weight in class l. The summation of these weights is one: \(\sum_{r=1}^{C_{p,l}} w_{p,r,l} - \sum_{s=1}^{C_{n,l}} w_{n,s,l} = 1\). In

order to estimate the parameters of the LCG model, the modified EM algorithm [1]

is used to deal with the positive and negative components. In the modified EM

algorithm [1], the number of classes K and the initial parameters of its dominant

modes are set manually. In this dissertation, these parameters are estimated by a

new technique described in the following section.
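To make Eq. (47) concrete, the sketch below (an illustrative helper, not the dissertation's implementation) evaluates an LCG class-conditional density from its positive and negative components, each given as a (mean, variance) pair:

```python
import numpy as np

def lcg_density(g, w_pos, theta_pos, w_neg, theta_neg):
    """LCG approximation of one class-conditional density, Eq. (47):
    a weighted sum of positive Gaussian components minus negative ones.
    theta_* are (mean, variance) pairs; the weights should satisfy
    sum(w_pos) - sum(w_neg) = 1 so the result integrates to one.
    """
    def phi(g, mu, var):
        return np.exp(-(g - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    p = sum(w * phi(g, m, v) for w, (m, v) in zip(w_pos, theta_pos))
    n = sum(w * phi(g, m, v) for w, (m, v) in zip(w_neg, theta_neg))
    return p - n
```

With a single positive component of weight one, this reduces to an ordinary Gaussian; adding negative components lets the model carve the non-Gaussian tails seen in the real-image experiments below.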


1. Dominant Modes Estimation

To complete the proposed modeling, one needs to estimate the number of image classes. Assume for any given multimodal image that its number of classes is equal to the number of dominant modes (peaks in the image gray levels frequency distribution), and each dominant mode is roughly approximated with a single Gaussian distribution. In this dissertation, a new technique is developed using an Akaike Information Criterion (AIC)-type criterion [65] to estimate the number of classes in the given multimodal image. The main idea behind this technique is that the image is described by a mixture of Gaussian distributions, and the number of dominant modes is estimated by finding the minimum number of Gaussian distributions that maximizes the likelihood function of this model. The likelihood function of this model is defined as
\[ \ell(\theta, I) \;=\; \prod_{p \in P} \sum_{j=1}^{k} \pi_j\,\varphi(\theta_j, I_p), \tag{48} \]
where k is the number of components, and the priors π are constrained by \(\sum_{j=1}^{k} \pi_j = 1\). Let \(\Delta_{pj} \in \{0, 1\}\) be a set of indicator variables for the mixture components, independent of the input I. Note that \(\sum_{j=1}^{k} \Delta_{pj} = 1\), the \(\Delta_{pj}\) are independent for distinct pixels, and \(P(\Delta_{pj} = 1 \mid I) = \frac{\pi_j \varphi(\theta_j, I_p)}{\sum_{j=1}^{k} \pi_j \varphi(\theta_j, I_p)}\). Given the set of indicators \(\Delta = \{\Delta_{pj}\}\) and the input I, the complete log-likelihood is given by
\[ L(\theta, \Delta, I) \;=\; \sum_{p,j} \Delta_{pj} \log \varphi(\theta_j, I_p). \tag{49} \]
Since ∆ is actually unknown, the "partial" log-likelihood is suggested to describe the mixture models:
\[ L(\theta, I) \;=\; \sum_{p,j} \bar{\Delta}_{pj} \log \varphi(\theta_j, I_p), \tag{50} \]
where \(\bar{\Delta}_{pj}\) is the posterior probability of the label j given the input image, defined as \(\bar{\Delta}_{pj} = P(\Delta_{pj} = 1 \mid I)\). Given the model component penalty N, the "partial"


likelihood function leads to a "partial" AIC (pAIC):
\[ \mathrm{pAIC} \;\propto\; \sum_{p,j} \bar{\Delta}_{pj} \log \varphi(\theta_j, I_p) - N(k+1) \;=\; \sum_{p,j} \bar{\Delta}_{pj}\big(\log \varphi(\theta_j, I_p) - N(k+1)/n\big) \;\doteq\; D(k). \tag{51} \]

Sufficient Condition for Monotonicity of pAIC. Let \(\pi_j = \sum_p \bar{\Delta}_{pj}/n\). For given values of the parameters π, θ and \(\bar{\Delta}\), one would like to increase the RHS of Eq. (51) by assigning \(\min_j \pi_j = 0\) and re-weighting the remaining (k − 1) π's so as to satisfy the constraint \(\sum_j \pi_j = 1\). This could then be used in the iterative steps of the EM-type procedure. Re-label the mixtures so as to have \(\min_j \pi_j = \pi_1\). Denote the modified D(k) by D(k − 1), \(A = \min_{p,\,j \geq 2} \log \varphi(\theta_j, I_p)\), and \(B = \max_p \log \varphi(\theta_1, I_p)\). Note that \(\sum_{p,\,j \geq 2} \bar{\Delta}_{pj} = n(1 - \pi_1)\), and denote \(\log \varphi(\theta_j, I_p)\) by \(\varphi_{pj}\). Consider
\[
\begin{aligned}
D(k-1) - D(k) &= \sum_{p,\,j \geq 2} \bar{\Delta}_{pj}\left[\frac{\varphi_{pj} - Nk/n}{1 - \pi_1} - \varphi_{pj} + N(k+1)/n\right] - \sum_p \bar{\Delta}_{p1}\big[\varphi_{p1} - N(k+1)/n\big] \\
&= \sum_{p,\,j \geq 2} \bar{\Delta}_{pj}\left[\frac{\varphi_{pj} - Nk/n}{1 - \pi_1} - \varphi_{pj} + N(k+1)/n\right] - \sum_p \bar{\Delta}_{p1}\varphi_{p1} + N(k+1)\pi_1 \\
&\geq \left(\frac{A - Nk/n}{1 - \pi_1}\right)(1 - \pi_1)\,n\pi_1 - nB\pi_1 + N(k+1)\pi_1 \\
&= n\pi_1(A - B) + N.
\end{aligned} \tag{52}
\]
Thus, if the condition
\[ \pi_1(A - B) + N/n \;\geq\; 0 \tag{53} \]
is satisfied, then D(k − 1) − D(k) ≥ 0 and the pAIC is increased as a result of the adjustment. The proposed algorithm is summarized in Algorithm 5.

To emphasize the ability of the pAIC algorithm in detecting the number of

classes in the multimodal images, the proposed pAIC algorithm is tested using


Algorithm 5 pAIC-EM Algorithm

1: Initialize the estimates of the model parameters π, θ, over-fitting the number of mixtures k
2: Perform the expectation step of the EM algorithm
3: For the smallest π, check the condition of Eq. (53). If it is satisfied, remove the corresponding component and adjust the remaining π's; otherwise do nothing
4: Perform the maximization step of EM
5: Repeat steps 2-4 until the pAIC does not change by more than a pre-specified error
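Step 3 of Algorithm 5 can be sketched as follows (an illustrative helper with hypothetical names; `log_phi` holds log φ(θj, Ip) for every pixel/component pair):

```python
import numpy as np

def prune_smallest(pi, log_phi, N):
    """Check the pAIC monotonicity condition, Eq. (53), for the smallest
    component and drop it if removal cannot decrease the pAIC.

    pi are the mixture priors; log_phi is an (n, k) array of
    log phi(theta_j, I_p). Returns the index list of kept components.
    """
    n, k = log_phi.shape
    j1 = int(np.argmin(pi))                   # smallest-prior component
    others = [j for j in range(k) if j != j1]
    A = log_phi[:, others].min()              # min over p and j != j1
    B = log_phi[:, j1].max()                  # max over p for component j1
    if pi[j1] * (A - B) + N / n >= 0:         # Eq. (53): safe to remove
        return others
    return list(range(k))
```

When the smallest component explains some pixels far better than the rest (B much larger than A), the condition fails and the component is kept; otherwise pruning it can only increase the pAIC.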

different multimodal images. Figures 20-23 show samples of pAIC results for bimodal, 3-modal, 4-modal, and 5-modal synthetic images, and illustrate that each log likelihood is maximum at the correct number of classes. Since the synthetic images come from Gaussian mixture distributions, the resultant distributions, which are created by approximating only the dominant modes of the probability density function, are almost sufficient to give accurate solutions.

However, for real images this is not the case, so a more accurate model is needed. The latter is the LCG model with positive and negative components. In Fig. 24, (a) and (b) show the output of the pAIC-EM algorithm for a synthetic tri-modal image that is generated using a Gaussian mixture with positive and negative components; (c) shows the normalized absolute error between the empirical and estimated densities; (d) shows the dominant components generated by pAIC-EM and the refining components, positive and negative, generated by the modified EM algorithm; (e) shows the empirical and estimated densities. Finally, (f) shows the marginal densities with the best thresholds. The proposed algorithm was also tested on real images. Fig. 25 shows a typical human chest Computed Tomography (CT) slice (a), its empirical marginal grey levels distribution approximated with the dominant normal mixture (b), and the log likelihood maximum at 2 (c)


FIGURE 20 – pAIC result for a bimodal synthetic image: (a) empirical and estimated densities, and the 2 Gaussian components; (b) the log likelihood (maximum at 2).

FIGURE 21 – pAIC result for a 3-modal synthetic image: (a) empirical and estimated densities, and the 3 Gaussian components; (b) the log likelihood (maximum at 3).


FIGURE 22 – pAIC result for a 4-modal synthetic image: (a) empirical and estimated densities, and the 4 Gaussian components; (b) the log likelihood (maximum at 4).

FIGURE 23 – pAIC result for a 5-modal synthetic image: (a) empirical and estimated densities, and the 5 Gaussian components; (b) the log likelihood (maximum at 5).


FIGURE 24 – Non-Gaussian 3-class result: (a) and (b) show the output of the pAIC-EM algorithm, (c) shows the normalized absolute error between the empirical and estimated densities, (d) shows the dominant components generated by pAIC-EM and the refining components, positive and negative, generated by the modified EM algorithm, (e) shows the empirical and estimated densities, and (f) shows the marginal densities with the best thresholds (t1 = 88, t2 = 181).


FIGURE 25 – Result for a CT lung slice: (a) the CT slice, (b) and (c) the output of the pAIC-EM algorithm, (d) the dominant component generated by pAIC-EM and the refining components, positive and negative, generated by the modified EM algorithm, (e) the empirical and estimated densities, and (f) the marginal densities with the best threshold (t = 109).


FIGURE 26 – Result for a 3-class MRA slice: (a) the MRA slice, (b) and (c) the output of the pAIC-EM algorithm, (d) the dominant component generated by pAIC-EM and the refining components, positive and negative, generated by the modified EM algorithm, (e) the empirical and estimated densities, and (f) the marginal densities with the best thresholds (t1 = 53, t2 = 191).


(note: (b) and (c) are pAIC-EM outputs). The two dominant modes represent the darker lung area and its brighter background, respectively. Also, Fig. 25 shows the 12 components of the final LCG (d), the empirical and estimated densities (e), and the final LCG approximation of each class for the best separation threshold t = 109 (f) (note: (d), (e), and (f) are mEM outputs). Fig. 26 shows a Magnetic Resonance Angiography (MRA) slice (a), its empirical marginal gray level distribution approximated with the dominant normal mixture (b), and the log-likelihood maximum at 3 (c) (note: (b) and (c) are pAIC-EM outputs). The three dominant modes represent dark bones and fat, brain tissues, and bright blood vessels, respectively. Also, Fig. 26 shows the 9 components of the final LCG (d), the empirical and estimated densities (e), and the final LCG approximation of each class for the best separation thresholds t1 = 53 and t2 = 191 (f) (note: (d), (e), and (f) are mEM outputs). For all experiments, the initial parameters are k = 10 Gaussians with θj (µj = j(Q − 1)/k and σ²j = 5), and πj = 1/k. The model component penalty N can simply be chosen to be greater than the increase in likelihood obtained by adding one more Gaussian component to the model.
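The initialization above can be sketched as follows (a minimal illustration; the function name `init_lcg_params` and the default of Q = 256 gray levels are assumptions for the example, not part of the dissertation's code):

```python
def init_lcg_params(Q=256, k=10, sigma2=5.0):
    """Initial LCG parameters: mu_j = j(Q-1)/k, sigma_j^2 = 5, pi_j = 1/k."""
    return [{"mu": j * (Q - 1) / k, "sigma2": sigma2, "pi": 1.0 / k}
            for j in range(1, k + 1)]

params = init_lcg_params()  # 10 Gaussians spread evenly over [0, Q-1]
```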

C. Graph Cuts-based Optimal Labeling

Having presented the image models, the goal now is to estimate the desired map f by minimizing the energy function in Eq. (45). The flowchart of the complete algorithm is shown in Fig. 27. To minimize this energy, the input image is initially labeled based on its gray level probabilistic model described in Sec. IV.B. The resulting labeled image is then used as the initialization for the α-expansion move algorithm described in Sec. II.E.3. The α-expansion move algorithm repeatedly minimizes the energy function in Eq. (45), which is defined over a finite set of labels, by minimizing a binary-variable version of this function with a max-flow/min-cut method. In each iteration of the α-expansion move algorithm, the updated labeled image is used to update the MGRF potentials γ as


FIGURE 27 – Proposed algorithm flowchart.

in Eq. (42). To minimize this binary version of the energy function, a weighted undirected graph is created with vertices corresponding to the set of image pixels/voxels P and two special terminal vertices: s (source, the new label “0”) and t (sink, the current label “1”). The neighborhood system N is chosen to be the nearest 4-neighborhood in the 2D case (or 6-neighborhood in the 3D case). Each edge connecting graph vertices is assigned a nonnegative weight as follows: for each p, q ∈ P with {p, q} ∈ N, the weights are given in Table (6). The optimal labeling is then obtained by finding the minimum-cost cut on this


TABLE 6 – GRAPH EDGE WEIGHTS.

Edge      Weight                  For
{p, q}    γ                       fp ≠ fq
          0                       fp = fq
{s, p}    −ln[P(Ip | “1”)]        p ∈ P
{p, t}    −ln[P(Ip | “0”)]        p ∈ P

graph. The minimum cost cut is computed in polynomial time for two-terminal graph cuts with positive edge weights via the s/t min-cut/max-flow algorithm [53].
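The construction in Table (6) can be sketched in pure Python with a simple Edmonds–Karp max-flow (an illustrative toy on a four-pixel 1D chain, not the implementation of [53]; all function names and the example probabilities are assumptions):

```python
import math
from collections import deque

def min_cut_labels(probs1, probs0, gamma, edges):
    """Binary labeling of pixels 0..n-1 via an s/t min-cut (Table 6 weights).
    probs1[p] = P(I_p | "1"), probs0[p] = P(I_p | "0"); edges = neighbor pairs."""
    n = len(probs1)
    s, t = n, n + 1
    cap = [[0.0] * (n + 2) for _ in range(n + 2)]
    for p in range(n):
        cap[s][p] = -math.log(probs1[p])   # t-link {s, p}
        cap[p][t] = -math.log(probs0[p])   # t-link {p, t}
    for p, q in edges:                     # n-links: cost gamma if labels differ
        cap[p][q] += gamma
        cap[q][p] += gamma
    while True:                            # Edmonds-Karp: BFS augmenting paths
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in range(n + 2):
                if v not in parent and cap[u][v] > 1e-12:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            break
        bottleneck, v = float("inf"), t
        while parent[v] is not None:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
    reach, queue = {s}, deque([s])         # source side of the min cut
    while queue:
        u = queue.popleft()
        for v in range(n + 2):
            if v not in reach and cap[u][v] > 1e-12:
                reach.add(v)
                queue.append(v)
    # s carries label "0": pixels on the source side take "0", the rest "1"
    return [0 if p in reach else 1 for p in range(n)]

# Four pixels in a chain; the first two favor label "0", the last two "1"
labels = min_cut_labels([0.1, 0.1, 0.9, 0.9], [0.9, 0.9, 0.1, 0.1],
                        0.1, [(0, 1), (1, 2), (2, 3)])
```

In practice the dedicated max-flow algorithm of [53] is far faster; this sketch only mirrors the edge weights of Table (6).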

D. Experiments and Discussion

To assess the performance of the proposed approach, it is tested on several

N-D multimodal images. First, the advantage of the adaptive analytical approach

that is proposed to compute the spatial interaction parameter γ is highlighted. As

shown in Fig. 28, for a small value of γ the resultant labeled image will be noisy

(small γ emphasizes the data term, the second term in Eq. (45)). For a large value of γ the corresponding labeled image is oversmoothed and some classes disappear. For this image, Fig. 29 shows how the relative error changes with γ, together with the values of γ computed by the proposed adaptive analytical approach. These values fall within the range of γ that gives the minimum error, which confirms the correctness of the proposed approach.

Validation: the proposed approach is compared with both the mean shift algorithm [66]¹ and the normalized cuts algorithm [6]². Note that when using these codes, several trials were conducted in order to select the tuning parameters that give the best results. These parameters are (EDISON: spatial and color bandwidths hs and hc; minimum region M) and (NCUTS: number of segments ’nsg’; offset of

¹The authors’ code EDISON is used; it is available at www.caip.rutgers.edu/riul/research/code.html
²The authors’ code NCUTS is used; it is available at www.cis.upenn.edu/ jshi/software/


(a) (b) (c)

FIGURE 28 – Effect of choosing γ: (a) original image, (b) noisy output (γ = 0.05), (c) oversmoothed output (γ = 5).

[Plot: relative error (%) versus γ on a logarithmic scale (10⁻² to 10¹).]

FIGURE 29 – The change of the relative error with γ; values of γ computed by the proposed adaptive analytical approach are shown as red asterisks.


TABLE 7 – 1D STATISTICAL RESULTS.

σn     0.5    0.7    1.0    1.2    1.5
µa %   0.25   1.0    3.3    6.7    11.05
σa %   0.2    0.65   1.9    8.0    13.1

the symmetric similarity matrix ’of ’; elongation parameter for edge map ’eem’;

symmetric similarity matrix threshold ’th’; error tolerance in eigensolver ’et’).

1. Ground Truth Experiments

Ground truth experiments (on synthetic 1D, 2D and 3D images) are done

to statistically measure the proposed algorithm performance. Fig. 30 shows a 1D

image (signal) with 2 classes (1,-1) distorted by a White Gaussian Noise (WGN)

with standard deviation σn = 1. To illustrate that the standard MLE, which only

uses the proposed LCG model threshold without spatial relations, does not give

satisfactory restoration, Fig. 30 presents the MLE result as well as the proposed

approach result. For statistical analysis, the original signal is distorted with different WGNs (with different σn). For each σn, 50 noisy signals are generated. Average relative errors of the proposed approach’s outputs with respect to the original signal are computed for this data set. The means µa and standard deviations σa of the relative errors are shown in Table (7). The results in this table illustrate the robustness of the proposed algorithm to noise.

2D Case: To assess the robustness of the proposed approach, it is tested on 600 synthetic 2D multimodal images and the results are compared with ground truths. Examples of these images and the proposed approach’s results are shown in Fig.

31. Relative errors ε of the results in comparison to ground truths and computation

times τ are also given. Fig. 32 shows EDISON outputs (hs=2,hc=1.5,M=8000.), and

NCUTS outputs (default parameters with nsg=3 and 5) for the same images.

To get some statistics, 10 3-modal data sets are generated, each of which


[Plot panels: (a) Original, (b) Noisy Signal, (c) MLE, (d) Our’s.]

FIGURE 30 – 1D image labeling: (a) the original signal, which is distorted by WGN; (b) the distorted signal; (c) MLE output; (d) proposed algorithm output.


consists of 30 images, with different Signal to Noise Ratios (SNR). The proposed

algorithm is applied to the data sets, and the average relative error is computed for each data set. Fig. 33 shows the SNR and the corresponding average relative segmentation error. The error at SNR = −5 dB is dramatically large, and the proposed algorithm missed one of the objects due to the severe noise. The same scenario is repeated for some 5-modal images (see Fig. 31(c)), and the corresponding error versus SNR is plotted in Fig. 34. For comparison, the segmentation errors of the ICM Algorithm 1, EDISON, and NCUTS techniques are also

illustrated. It is worth mentioning that since the peaks in the image gray levels

histogram disappeared at low SNR, the experiments at low SNR are presented to

show the robustness of the framework given the number of classes.

Fig. 35 shows the results of the proposed algorithm and that of EDISON

(hs=8,hr=7.0,M=20) on a 4-modal image. In this case, one cannot compare with

NCUTS since this algorithm was designed so that separate segments take different labels even if they share the same gray levels.

3D Case: To test the algorithm in the 3D case, a bimodal volume phantom

(256x256x50) is generated. The relative error ε=0.53% between the proposed ap-

proach segmentation and the ground truth confirms the high accuracy of the pro-

posed segmentation framework in 3D cases. To emphasize the importance of seg-

menting the 3D object using a 3D graph (e.g., Fig. 36) instead of independently

segmenting each 2D slice of the volume (e.g., see Fig. 37), the two corresponding

segmentation results are illustrated in Fig. 38 (a) and (b), respectively.

2. Real Images and Applications

Figures 39, 40, and 41 show further segmentation results of the proposed approach, EDISON, and NCUTS on real images. In each output, the object boundary (shown

in red) is drawn, and the computation time is given. As shown in Fig. 40(d, e, and

f) the gray levels inhomogeneities of the starfish and its background lead to some


(a) (b) ε = 0.04%, τ = 1 sec.

(c) (d) ε = 0.41%, τ = 7 sec.

FIGURE 31 – Segmentation results: (a) the gray levels image with SNR = 1 dB, (b) proposed approach segmentation; (c) the gray levels image with SNR = 5 dB, (d) proposed approach segmentation. (Error shown in red.)


(a) ε = 3.86%, τ = 26 sec.  (b) ε = 1.48%, τ = 13 sec.

(c) ε = 2.24%, τ = 2 sec.  (d) ε = 1.44%, τ = 18 sec.

FIGURE 32 – Segmentation results: for the 3-modal image, (a) EDISON output, (b) NCUTS output; for the 5-modal image, (c) EDISON output, (d) NCUTS output. (Error shown in red.)


[Plot: relative error (%) versus SNR (dB) for the Proposed, ICM, NCUTS, and EDISON algorithms.]

FIGURE 33 – The change of the misclassification error with the SNR for 3-modal images.

[Plot: relative error (%) versus SNR (dB) for the Proposed, ICM, NCUTS, and EDISON algorithms.]

FIGURE 34 – The change of the misclassification error with the SNR for 5-modal images.


(a) The input image  (b) Proposed  (c) EDISON

FIGURE 35 – 4-modal image segmentation results: (b) ε = 0.16%, τ = 5 sec.; (c) ε = 0.71%, τ = 13 sec. (Errors shown in red.)

FIGURE 36 – Example of a graph used in volume labeling. Note: terminals should be connected to all voxels, but for illustration purposes this was not done.

FIGURE 37 – Slices from the synthetic volume.


(a) 0.53% (b) 3.75%

FIGURE 38 – Synthetic volume segmentation results: (a) 3D segmentation, (b) 2D segmentation. (Error shown in green.)

errors. However, the proposed algorithm’s result is more accurate than the others for this specific case (Fig. 40(d)). This error can be overcome by using the higher order clique model, as shown in the next chapter.

a. Lung Segmentation Medical images are good examples of multimodal

images. In such a case, errors are evaluated with respect to ground truths pro-

duced by a radiologist. To assess the performance of the proposed framework

on practical problems, it is applied to the lung segmentation problem [67]. Because the gray levels of the abnormal tissues in the lung are close to those of the chest tissues, interactive segmentation of computed tomography (CT) lung images is impractical. In order to measure the accuracy of the proposed approach

on medical data, a geometric phantom is created with the same gray levels distri-

bution in regions as in the lung CT images at hand using the inverse mapping ap-

proach [1]. The 0.26% error between the proposed algorithm’s results and the ground truth

confirms the high accuracy of the proposed segmentation framework. For com-

parison, Fig. 42 shows the binary results obtained with the proposed technique, It-

erative Threshold (IT) [68] approach, and ICM [7] Algorithm 1. Also the proposed


(a) τ = 1 sec.  (b) τ = 1 sec.  (c) τ = 18 sec.

FIGURE 39 – Real image segmentation results of (a) the proposed algorithm, (b) EDISON (hs=15, hc=9, M=5000), (c) NCUT (nsg=2 and the default parameters). (Courtesy of Shi and Malik [6])

(a) τ = 1 sec.  (b) τ = 1 sec.  (c) τ = 15 sec.

(d) τ = 1 sec.  (e) τ = 6 sec.  (f) τ = 8 sec.

FIGURE 40 – Starfish segmentation results of (a) the proposed algorithm, (b) EDISON (hs=2, hc=10, M=500), (c) NCUT (nsg=2, of=0.04); (d) the proposed algorithm, (e) EDISON (hs=7, hc=6.6, M=5000), (f) NCUT (nsg=2, of=0.04, th=0.03, eem=5, et=0.1).


(a) τ = 2 sec.  (b) τ = 14 sec.  (c) τ = 10 sec.

(d) τ = 1 sec.  (e) τ = 10 sec.  (f) τ = 19 sec.

FIGURE 41 – Kidney segmentation results of (a) the proposed algorithm, ε=1.7%; (b) EDISON (hs=5, hc=6.1, M=9000), ε=1.1%; (c) NCUT (nsg=2, of=0.01, th=0.04, et=0.01), ε=1.1%; (d) the proposed algorithm, ε=0.1%; (e) EDISON (hs=7, hc=6.6, M=9000), ε=0.01%; (f) NCUT (nsg=2 and the default parameters), ε=0.6%.


(a) (b) 0.26%

(c) 7.44% (d) 5.07%

FIGURE 42 – Lung Phantom Segmentation: (a) phantom gray scale image, (b) the proposed algorithm output, (c) ICM output, (d) IT output. (Misclassified pixels are shown in red.)

, ICM, and IT approaches were run on 60 axial human chest slices obtained by

spiral-scan low-dose computed tomography (LDCT); the 8-mm-thick LDCT slices

were reconstructed every 4 mm with the scanning pitch of 1.5 mm. The results

for three of them are shown in Fig. 43. In this experiment, the errors are evalu-

ated with respect to the ground truth produced by an expert (a radiologist). The

percentage error of the misclassified pixels is shown for each approach.

The proposed algorithm is extended to segment the whole lung volume si-

multaneously. This helps to overcome the large gray levels inhomogeneities in

lung data. The proposed approach is applied to seven human chest CT scans. Some of these results are presented in Fig. 44. The main problem with the ICM and IT segmentations is that the misclassified voxels include abnormal lung tissues (lung cancer), bronchi, and bronchioles, as shown in Fig. 45. These tissues are important if lung segmentation is a preprocessing step in a lung nodule detection system.


(a1) 1.5% (a2) 8.29% (a3) 2.54%

(b1) 1.56% (b2) 7.95% (b3) 2.27%

(c1) 4.36% (c2) 8% (c3) 8.52%

FIGURE 43 – Segmentation results of: the proposed algorithm (1st column), ICM (2nd column), and IT (last column). (Misclassified pixels are shown in red.)


TABLE 8 – ACCURACY AND TIME PERFORMANCE OF THE PROPOSED APPROACH SEGMENTATION ON 7 DATA SETS IN COMPARISON TO ICM AND IT. AVERAGE VOLUME 256×256×77.

Algorithm              Proposed   ICM       IT
Minimum error, %       1.66       3.31      2.34
Maximum error, %       3.00       9.71      8.78
Mean error, %          2.29       7.08      6.14
Standard deviation, %  0.5        2.4       2.1
Significance, P        —          2·10⁻⁴    5·10⁻⁴
Average time, sec      46.26      55.31     7.06

The motivation behind the proposed segmentation approach is to exclude such er-

rors as far as possible. As expected, all misclassified pixels in the results of the

proposed algorithm are located at the boundary. The statistical analysis is shown

in Table (8). For comparison, the statistical analyses of the ICM and IT results are also shown. The unpaired t-test is used to show that the differences in mean error between the proposed segmentation and ICM, and between the proposed segmentation and IT, are statistically significant (the two-tailed value P is less than 0.0006).

E. Conclusions

In this chapter, a novel approach [69, 70] was presented for automatic mul-

timodal gray scale image labeling using the graph cuts algorithm. A joint MGRF

model was used to describe the input image and its desired map with more ac-

curate model identification. The number of classes in the given image was de-

termined using a new technique [71] based on maximizing a new joint likelihood

function. The image gray levels distribution was precisely approximated by LCG

distributions with positive and negative components. Therefore, no user interac-


(a) (b) (c) (d)

FIGURE 44 – Proposed algorithm’s results. Left: segmented lung volumes (errors shown in green). Right: samples from each volume’s segmented slices (errors shown in red). Segmentation errors: (a) 2.08%, (b) 2.21%, (c) 2.17%, and (d) 1.95%.


FIGURE 45 – Examples of segmented lung slices that have nodules (bounded by a yellow circle). Left: IT, and middle: ICM, misclassified these parts as chest tissues (error shown in red); right: the proposed algorithm correctly classified them as lung.

tion was needed; the image was initially segmented using this LCG model. Finally,

an energy function using the previous models was formulated, and was globally

minimized using graph cuts. Experimental results on synthetic and real gray scale multimodal images showed that, without optimizing any tuning parameters, the

proposed approach was fast, robust to noise, and gave accurate results compared

to the state-of-the-art algorithms (e.g., [6, 66]). Moreover, the proposed approach

was easily extended to segment 3D volumes.


CHAPTER V

OPTIMIZING BINARY MRFs WITH HIGHER ORDER CLIQUES

Due to the proliferation of efficient and successful pairwise MRF solvers in computer vision, the previous chapters focused on the pairwise MRF model. However, a question remains: is there a link between pairwise and higher order MRFs that allows similar solutions to be applied to the latter models? This chapter explores such a link for binary MRFs, allowing one to represent the Gibbs energy

of signal interaction with a polynomial function. First, a new algorithm is presented that converts the higher order energy representing a higher order MRF into a polynomial function. Then energy minimization tools for the pairwise MRF models

can be easily applied to the higher order counterparts. The proposed framework

demonstrates very promising experimental results on image segmentation and can

be used to solve other computer vision problems.

A. Introduction

Recently, as explained in Sec. II.D, discrete optimizers (e.g., graph cuts, BP, and TRW) have become essential tools in the computer vision field. They are used to solve many computer vision problems whose frameworks are justified in terms of maximum a posteriori configurations in an MRF, with the MAP-MRF problem formulated as the minimization of an energy function.

This chapter focuses on binary MRFs, which have played an important role in computer vision since Boykov et al. [44] proposed an approximate graph-cut

algorithm for energy minimization with iterative expansion moves. As explained

in Algorithm 2 Sec.II.E.3, this algorithm reduces the problem with multivalued

variables to a sequence of subproblems with binary variables.


Most of the energy-based computer vision frameworks represent the MRF

energy on an image lattice in terms of unary and pairwise clique potentials. How-

ever, this representation is insufficient for modeling rich statistics of natural scenes

[34]. The latter require higher order clique potentials capable of describing complex interactions between variables. Adding potentials for the higher order

cliques could improve the image model [72, 73]. However, optimization algo-

rithms of these models have too high time complexity to be practicable. For ex-

ample, a conventional approximate energy minimization framework with belief

propagation (BP) is too computationally expensive for MRFs with higher order

cliques, and Lan et al. [34] proposed approximations to make BP practical in these

cases. However, the results are only competitive with simple local optimization based on a gradient descent technique. Recently, Kohli et al. [74] proposed a generalized Pn family of clique potentials for the Potts MRF model and showed that

optimal graph-cut moves for the family have polynomial time complexity. How-

ever, just as in the standard graph-cut approaches based on the α-expansion or

αβ-swap, the energy terms for this family have to be submodular.

Instead of developing efficient energy minimization techniques for higher

order MRFs, this work chooses an alternative strategy of reusing well established

approaches that have been successful for the pairwise models and proposes an effi-

cient transformation of an energy function for a higher order MRF into a quadratic

function. First, the potential energy for higher order cliques is converted into a

polynomial form, an algebraic proof explaining when this form can be graph rep-

resentable is introduced, the graph reconstruction for such an energy is explicitly

shown. Then the higher-order polynomial is reduced to a specific quadratic one.

The latter may have submodular and/or nonsubmodular terms, and a few ap-

proaches have been proposed to minimize such functions. For instance, Rother

et al. [75] truncate nonsubmodular terms in order to obtain an approximate sub-

modular function to be minimized. This truncation leads to a reasonable solution

when the number of nonsubmodular terms is small. As discussed in Sec. II.F.1,


recently Rother et al. [9] proposed an efficient optimization algorithm for nonsub-

modular binary MRFs, called the extended roof duality. However, it is limited to

only quadratic energy functions. The proposed work notably expands the class of nonsubmodular MRFs that can be minimized using this algorithm. In this

chapter, extended roof duality is used to minimize the proposed quadratic version

of the higher order energy. To illustrate the potential of higher order MRFs in

modeling complex scenes, the performance of the proposed approach has been as-

sessed experimentally in application to image segmentation. The obtained results

confirm that the proposed optimized MRF framework can be efficiently used in

practice.

B. Preliminaries

Recall that the goal image labeling f in the MAP approach is a realization of a Markov–Gibbs random field (MGRF) F defined over an arithmetic 2D lattice P = {1, 2, ..., n} with a neighborhood system N. Energy functions for an MGRF with only unary and pairwise cliques can be written in the following form:

E(f) = Σ_{p∈P} D(fp) + Σ_{{p,q}∈N} V(fp, fq).   (54)

The unary terms D(·) encode the data penalty function, and the pairwise terms V(·, ·) are interaction potentials. For simplicity, in this chapter both unary and pairwise terms will be represented by the function V(·), so the energy function has the following form:

E(f) = Σ_{p∈P} V(fp) + Σ_{{p,q}∈N} V(fp, fq).   (55)

The energy minimum E(f*) = min_f E(f) corresponds to the MAP labeling f*. For a binary MGRF, the set of labels consists of two values, L = {0, 1}, each variable fp is a binary variable, and the energy function Eq. (55) can be written in a quadratic polynomial form:

E(f) = a0 + Σ_{p∈P} ap fp + Σ_{{p,q}∈N} apq fp fq,   (56)


where a0, ap, and apq are real numbers depending on V(0), V(1), ..., V(1, 1) in a straightforward way.

Generally, let L^n = {(f1, f2, ..., fn) | fp ∈ L ∀ p = 1, ..., n}, and let E^k(f) = E^k(f1, f2, ..., fn) be a real-valued polynomial function of n bivalent variables with real coefficients, defining a Gibbs energy with higher order potentials (in contrast to the quadratic function E above). Such a function E^k(f) is called a pseudo-Boolean function [76] and can be uniquely represented as a multi-linear polynomial [54] as follows:

E^k(f) = Σ_{S⊆P} aS Π_{p∈S} fp,   (57)

where aS are non-zero real numbers, and the product over the empty set is 1 by

definition.

C. Polynomial Forms of Clique Potentials

To be transformed into a quadratic energy, the higher order energy function

should be represented in a multi-linear polynomial form Eq. (57). This section

considers how the clique potentials can be represented in a polynomial form. A unary term has an obvious polynomial form:

Vfp = V(fp) = (V1 − V0) fp + V0,   (58)

where V1 and V0 are the potential values for the labels 1 and 0 of the variable fp ∈ L.

1. Cliques Of Size Two

Let fp, fq ∈ L, and let c0, c1, c2 and c3 be real coefficients. A clique of size two

has a potential function V (fp, fq) that can be generally represented as follows:

Vfpfq = V(fp, fq) = (c0 fp + c1)(c2 fq + c3)
      = c0c2 fp fq + c1c2 fq + c0c3 fp + c1c3.


So, it is easy to show that

V00 = c1c3,  V01 = c1c2 + V00,  V10 = c0c3 + V00,  and
c0c2 = V11 − (c1c2 + c0c3 + c1c3) = V11 + V00 − V01 − V10.

This implies

Vfpfq = (V11 + V00 − V01 − V10) fp fq + (V01 − V00) fq + (V10 − V00) fp + V00.   (59)

Indeed, representing pairwise potential in the polynomial form Eq. (59)

has many advantages. It implies an algebraic proof of Kolmogorov and Zabih’s

submodularity condition [45] similar to the combinatorial optimization theorem

(its version and proof are given in [77]):

Theorem 2 (Freedman–Drineas, 2005). The quadratic pseudo-Boolean function Eq. (56) can be minimized via graph cut techniques if and only if apq ≤ 0 ∀ {p, q} ∈ N.

According to this theorem, the quadratic polynomial function Eq. (59) can be minimized via graph cuts if and only if

V11 + V00 − V01 − V10 ≤ 0.   (60)

This agrees with the definition of Kolmogorov and Zabih [45]:

Definition 5. A function of one binary variable is always submodular. A function V(fp, fq) from the family F2 is submodular if and only if V11 + V00 ≤ V01 + V10.

The representation in Eq. (59) explicitly shows edges in the graph related to

the quadratic term. This construction is similar to what has been introduced in [45]

but here each term in Eq. (59), excluding the constant, directly represents a part of

the graph.
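As a concrete check of Eq. (59) and the submodularity condition Eq. (60), the following sketch (illustrative function names; a Potts-style potential is assumed for the example) evaluates the polynomial against the original potential table:

```python
from itertools import product

def pairwise_poly_coeffs(V):
    """Coefficients of Eq. (59) from a table V[(fp, fq)] -> real value."""
    a_pq = V[1, 1] + V[0, 0] - V[0, 1] - V[1, 0]   # quadratic coefficient
    return a_pq, V[0, 1] - V[0, 0], V[1, 0] - V[0, 0], V[0, 0]

def is_submodular(V):
    """Definition 5 / Eq. (60): V11 + V00 <= V01 + V10."""
    return V[1, 1] + V[0, 0] <= V[0, 1] + V[1, 0]

# Potts-style potential: unit penalty when the two labels differ
V = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 0.0}
a_pq, a_q, a_p, a_0 = pairwise_poly_coeffs(V)
# Largest deviation of the polynomial from the table over all 4 assignments
err = max(abs(a_pq * fp * fq + a_q * fq + a_p * fp + a_0 - V[fp, fq])
          for fp, fq in product((0, 1), repeat=2))
```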

2. Cliques Of Size k

A clique of size k has a potential function V (fp, fq, · · · , fk), where k ≤ n. Just

as before, coefficients of the polynomial form can be estimated using Algorithm 6.


It is worth mentioning that Freedman and Drineas [77] proposed a mathematical

formula that can be used to compute these coefficients. However, it is complicated,

and the proposed algorithm is much easier to implement.

Algorithm 6 Generating Coefficients of a Size-k Clique’s Energy
1: Coef(S, Z, W)
2: H = {1, 2, ..., k}
3: if S = φ then
4:   Return V(fH = 0)
5: end if
6: if |S| = 1 then
7:   if Z = φ then
8:     Return V(fS = 1, fH−S = 0) − V(0)
9:   else
10:    Return V(fS = 1, fZ = W, fH−(S∪Z) = 0) − V(fZ = W, fH−Z = 0)
11:  end if
12: else
13:   Select p ∈ S. Then: S = S − {p}, Z = Z + {p}, W1 = W + (1), and W0 = W + (0)
14:   Return Coef(S, Z, W1) − Coef(S, Z, W0)
15: end if
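A direct Python transcription of Algorithm 6 (variables indexed 0..k−1; the dict-based potential table and function names are illustrative assumptions) can be validated by rebuilding the potential from its coefficients:

```python
from itertools import combinations, product

def coef(S, Z, W, V, k):
    """Algorithm 6: coefficient a_S of prod_{p in S} f_p for a size-k clique.
    V maps a k-tuple of binary labels to its potential value; Z holds the
    already-fixed variables with labels W (parallel lists)."""
    def value(ones, fixed):
        f = [0] * k
        for p in ones:
            f[p] = 1
        for p, w in fixed:
            f[p] = w
        return V[tuple(f)]
    if not S:                         # Coef(phi, ., .) = V(all zeros)
        return value((), ())
    if len(S) == 1:                   # base case: a single free variable
        fixed = list(zip(Z, W))
        return value(S, fixed) - value((), fixed)
    p, rest = S[0], S[1:]             # recursive difference over p = 1 / p = 0
    return (coef(rest, Z + [p], W + [1], V, k)
            - coef(rest, Z + [p], W + [0], V, k))

# Size-3 potential: unit penalty unless all three labels agree
V3 = {f: float(len(set(f)) > 1) for f in product((0, 1), repeat=3)}
coeffs = {S: coef(list(S), [], [], V3, 3)
          for r in range(4) for S in combinations(range(3), r)}
# Reconstruction error of the multi-linear polynomial over all 8 assignments
recon_err = max(abs(sum(c for S, c in coeffs.items() if all(f[p] for p in S))
                    - V3[f]) for f in product((0, 1), repeat=3))
```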

As an example, Algorithm 6 is used to estimate the polynomial coefficients


of the potential Vpqr = V (fp, fq, fr); fp, fq, fr ∈ L, for a clique of size 3. Then

Vpqr = Coef({p, q, r}, φ, φ) fp fq fr
     + Coef({p, q}, φ, φ) fp fq
     + Coef({p, r}, φ, φ) fp fr
     + Coef({q, r}, φ, φ) fq fr
     + Coef({p}, φ, φ) fp + Coef({q}, φ, φ) fq
     + Coef({r}, φ, φ) fr + Coef(φ, φ, φ).   (61)

Where the coefficients are computed using Algorithm 6, for example:

Coef(φ, φ, φ) = V000,
Coef({p}, φ, φ) = V100 − V000,
Coef({p, q}, φ, φ) = Coef({q}, {p}, 1) − Coef({q}, {p}, 0) = (V110 − V100) − (V010 − V000),
Coef({p, q, r}, φ, φ) = Coef({q, r}, {p}, 1) − Coef({q, r}, {p}, 0),

where

Coef({q, r}, {p}, 1) = Coef({r}, {p, q}, (1, 1)) − Coef({r}, {p, q}, (1, 0)) = (V111 − V110) − (V101 − V100),
Coef({q, r}, {p}, 0) = Coef({r}, {p, q}, (0, 1)) − Coef({r}, {p, q}, (0, 0)) = (V011 − V010) − (V001 − V000).


Finally, the potential is as follows:

Vpqr = ((V111 + V100 − V110 − V101) − (V011 + V000 − V001 − V010)) fp fq fr
     + (V011 + V000 − V001 − V010) fq fr
     + (V101 + V000 − V001 − V100) fp fr
     + (V110 + V000 − V100 − V010) fp fq
     + (V010 − V000) fq + (V100 − V000) fp
     + (V001 − V000) fr + V000.   (62)

As will be explained in Sec.V.D, the first (cubic) term will be reduced to a quadratic term with the same coefficient. Thus, according to Theorem 2,

such energy can be minimized via graph cuts if and only if

(V111 + V100 − V110 − V101) −

(V011 + V000 − V001 − V010) ≤ 0, (63)

V011 + V000 − V001 − V010 ≤ 0, (64)

V101 + V000 − V001 − V100 ≤ 0, and (65)

V110 + V000 − V100 − V010 ≤ 0 . (66)

These inequalities represent all the projections of V_pqr on 2 variables. This follows from the following definition [45]:

Definition 6. A function from the family Fk is submodular if and only if all its projections on 2 variables are submodular.
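Definition 6 can be checked mechanically for a size-3 clique potential by testing all six two-variable projections (an illustrative sketch, not from the dissertation; V is assumed given as a dict over binary label tuples):

```python
from itertools import combinations

def projections_submodular(V):
    """Check Definition 6 for a size-3 clique potential V(fp, fq, fr):
    every projection onto two variables (the third fixed to 0 or 1)
    must satisfy v(0,0) + v(1,1) <= v(0,1) + v(1,0)."""
    for i, j in combinations(range(3), 2):
        k = 3 - i - j  # the remaining (fixed) variable
        for fk in (0, 1):
            def v(a, b):
                lab = [0, 0, 0]
                lab[i], lab[j], lab[k] = a, b, fk
                return V[tuple(lab)]
            if v(0, 0) + v(1, 1) > v(0, 1) + v(1, 0):
                return False
    return True
```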

Moreover, (and similar to the F2 case), Eq. (62) explicitly shows how the

cubic term can be represented in the graph construction. This polynomial repre-

sentation can be easily expanded to the general case of Fk when Algorithm 6 is

used to compute the coefficients.


D. Energy Reduction - the Proposed Approach

The previous section showed how the Gibbs energy can be represented in the polynomial form of Eq. (57) for any clique size. This section discusses the minimization of such energies. Quite successful optimization techniques for graphical models have been proposed to minimize quadratic energies with submodular terms (e.g., [44]) and nonsubmodular terms (e.g., [9]). This section shows how to exploit these algorithms for minimizing higher order energies by transforming the latter into quadratic ones. This is done by adding a number of dummy variables, each substituting the product of two initial variables, as shown in the following algorithm. Several theoretical works [54, 76] considered the problem of reducing the optimization of a general pseudo-Boolean function Ek(f) in polynomial time to the optimization of a quadratic pseudo-Boolean function. In contrast to [76], the proposed algorithm guarantees that the resulting quadratic pseudo-Boolean function has the same minimum at the same variables as the initial general pseudo-Boolean function. Also, as distinct from [54], both a detailed proof of the proposed algorithm and an efficient implementation of it on a related graph are given.

The quadratic pseudo-Boolean function E(.) is obtained from the pseudo-Boolean function Ek(.) as follows: replace the occurrence of f_p f_q in Eq. (57) by f_{n+1} and add the term B · (f_p f_q + 3f_{n+1} − 2f_p f_{n+1} − 2f_q f_{n+1}) to E(.). This gives the following function E(f_1, f_2, ..., f_{n+1}):

E(f) = B · (f_p f_q + (3 − 2f_p − 2f_q) f_{n+1}) + Σ_{S* ⊆ P} a_{S*} Π_{p ∈ S*} f_p,    (67)

where

S* = (S − {p, q}) ∪ {n + 1}   if {p, q} ⊆ S,
S* = S                        if {p, q} ⊄ S.


To compute the constant B, Eq. (57) is first rewritten as follows:

Ek(f) = a0 + Σ_{S1 ⊆ P} a−_{S1} Π_{p ∈ S1} f_p + Σ_{S2 ⊆ P} a+_{S2} Π_{p ∈ S2} f_p,    (68)

where a0 is the absolute term, the a−_{S1} are the negative coefficients, and the a+_{S2} are the positive coefficients. Then, let A = a0 + Σ_{S1 ⊆ P} a−_{S1} be the sum of all negative coefficients in Eq. (68) plus the absolute term. Note that

A ≤ min_{f ∈ L^n} Ek(f).    (69)

Also, denote by u a real number greater than the minimal value of Ek(f) on L^n:

u > min_{f ∈ L^n} Ek(f).    (70)

In practice, u can be chosen greater than any particular value of Ek(f) on L^n, since every such value bounds the minimum from above. Finally, the chosen value B has to satisfy B ≥ u − A.

This replacement is repeated until one gets a quadratic pseudo-Boolean function. Algorithm 7 shows these steps in detail.

At each step, E(f_1, f_2, ..., f_{n+1}) must satisfy the following lemma.

Lemma 1. Let M_{Ek} = {y ∈ L^n | Ek(y) = min_{f ∈ L^n} Ek(f)} be the set of all y ∈ L^n such that Ek(y) is the global minimum of the function Ek on L^n. Then

1. E(f_1, f_2, ..., f_{n+1}) = Ek(f_1, f_2, ..., f_n) whenever f_{n+1} = f_p f_q,

2. (y_1, y_2, ..., y_{n+1}) ∈ M_E iff (y_1, y_2, ..., y_n) ∈ M_{Ek}.

Proof 1. For x, y, z ∈ L, it is easy to see that the function Ω(x, y, z) = xy + 3z − 2xz − 2yz takes the values

Ω(x, y, z) = 0 if xy = z;   Ω(x, y, z) = 3 if x = y = 0, z = 1;   Ω(x, y, z) = 1 otherwise.


Algorithm 7 Transform to Quadratic

Require: general pseudo-Boolean function Ek(f), Eq. (57)
1: Set A = a0 + Σ_{S1 ⊆ P} a−_{S1}, set u > min_{f ∈ L^n} Ek(f) (e.g., u = Ek(0) + 1), and set B ≥ u − A
2: while (∃ S' ⊆ P with |S'| > 2) do
3:    Select a pair {p, q} ⊆ S' and update the coefficients:
      a_{p,q} = a_{p,q} + B,   a_{p,n+1} = a_{q,n+1} = −2B,   a_{n+1} = 3B,
      a_{(S−{p,q})∪{n+1}} = a_S and a_S = 0 for all S ⊇ {p, q}
4:    n = n + 1; update the function as in Eq. (67)
5: end while
Ensure: the quadratic pseudo-Boolean function E(f)

If f_p f_q = f_{n+1}, then

E(f) = B · Ω(f_p, f_q, f_{n+1}) + Σ_{S* ⊆ P} a_{S*} Π_{p ∈ S*} f_p
     = 0 + Σ_{S* ⊆ P} a_{S*} Π_{p ∈ S*} f_p
     = Σ_{S ⊆ P} a_S Π_{p ∈ S} f_p = Ek(f).

In particular, E(f) attains the same minimum value as Ek(f) on L^n. On the other hand, assume y_p y_q ≠ y_{n+1}, which implies that Ω(y_p, y_q, y_{n+1}) ≥ 1, and assume (y_1, y_2, ..., y_{n+1}) ∈ M_E. Then

E(y) = B · Ω(y_p, y_q, y_{n+1}) + Σ_{S* ⊆ P} a_{S*} Π_{p ∈ S*} y_p.

From Eqs. (69) and (70) one can conclude that B > 0, so

E(y) ≥ B + Σ_{S* ⊆ P} a_{S*} Π_{p ∈ S*} y_p ≥ B + A.

The choice B ≥ u − A then guarantees that

E(y) ≥ u > min_{f ∈ L^n} Ek(f).

This contradicts the assumption (y_1, y_2, ..., y_{n+1}) ∈ M_E. Hence (y_1, y_2, ..., y_{n+1}) ∉ M_E if y_p y_q ≠ y_{n+1}, and the lemma follows.

By repeatedly applying the construction in Lemma (1), one gets the follow-

ing theorem (different versions of this theorem can be found in [54, 76]):

Theorem 3. Given a general pseudo-Boolean function Ek(f_1, f_2, ..., f_n), there exists a quadratic pseudo-Boolean function E(f_1, f_2, ..., f_{n+m}), where m ≥ 0, such that

1. (y_1, y_2, ..., y_{n+m}) ∈ M_E ⟺ (y_1, y_2, ..., y_n) ∈ M_{Ek},

2. the size of the quadratic pseudo-Boolean function is polynomially bounded in the size of Ek, and so the reduction algorithm terminates in polynomial time.

Proof 2. Repeated application of the construction in the proof of Lemma (1) yields point 1 of the theorem.

To prove point 2, denote by M3 the number of terms with |S| > 2 (i.e., with more than 2 variables, the higher order terms) in the function Ek(f_1, f_2, ..., f_n).³ In the loop of Algorithm 7, notice the following:

³Note that a function Ek of n binary variables contains at most 2^n terms. This can be computed by summing the numbers of terms that have 0 up to n variables: (n choose 0) + (n choose 1) + ··· + (n choose n) = 2^n. Also, it is easy to show that the function Ek contains at most 2^n − (n² + n + 2)/2 terms with |S| > 2 (i.e., with more than 2 variables).


• The term of size n (i.e., |S| = n) needs at most n − 2 iterations.

• At each iteration of the loop, at least one of the terms with |S| > 2 decreases in size.

Hence the algorithm must terminate after a number of iterations ≪ M3(n − 2). The "much less" sign appears because the average number of iterations per term is less than n − 2: terms containing many variables share several common variables, and so they are reduced concurrently. As an example, a function with 10 variables contains at most 968 terms with |S| > 2; using Algorithm 7, it is reduced in 68 iterations, and 68 ≪ 968 × 8. This proves the claim about complexity.
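A minimal executable sketch of Algorithm 7 (illustrative Python; the function is stored as a map from variable subsets to coefficients, B is fixed to u − A with u = Ek(0) + 1, and, as a simplification, the substitution f_p f_q → z is applied only to the higher order terms that contain the pair):

```python
def reduce_to_quadratic(terms, n):
    """Algorithm 7 sketch. terms: dict mapping frozenset(vars) -> coefficient
    of a pseudo-Boolean function; n: number of variables. Each product fp*fq
    occurring in a higher order term is replaced by a fresh variable z, with
    the penalty B*(fp*fq + 3z - 2fp*z - 2fq*z) enforcing z = fp*fq."""
    terms = {frozenset(S): c for S, c in terms.items()}
    a0 = terms.get(frozenset(), 0.0)
    A = a0 + sum(c for S, c in terms.items() if S and c < 0)  # Eq. (69) bound
    u = a0 + 1.0                  # u = Ek(0) + 1 > min Ek
    B = u - A                     # satisfies B >= u - A
    while True:
        big = [S for S in terms if len(S) > 2]
        if not big:
            return terms, n
        p, q = sorted(big[0])[:2]
        z = n
        n += 1
        # substitute fp*fq -> z in every higher order term containing p and q
        for S in [S for S in terms if {p, q} <= S and len(S) > 2]:
            terms[(S - {p, q}) | {z}] = terms.pop(S)
        pq = frozenset({p, q})
        terms[pq] = terms.get(pq, 0.0) + B
        terms[frozenset({p, z})] = -2.0 * B
        terms[frozenset({q, z})] = -2.0 * B
        terms[frozenset({z})] = 3.0 * B
```

A brute-force enumeration of all labellings then confirms that the reduced quadratic function attains the same minimum value as the original higher order one, as Lemma 1 requires.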

1. Efficient Implementation

The number of dummy variables in the generated quadratic pseudo-Boolean

function depends on the selection of the pairs p, q in the loop of the Algorithm 7.

Finding the optimal selection to minimize this number is an NP-hard problem [54].

Also, searching for this pair in other terms will be exhaustive. However, in most

computer vision problems, one deals with images on an arithmetic 2D lattice P with n pixels. The order of the Gibbs energy function to be minimized depends on

the particular neighborhood system and the maximal clique size. The prior knowl-

edge about the neighborhood system and the clique size can be used to minimize

the number of dummy variables and to eliminate the search for the repeated pair

in other terms. This process is demonstrated on the second order neighborhood

system and the cliques of the size 3 (see Fig. 5), but it can be generalized for the

higher orders. Figure 46 suggests that the second order neighborhood system con-

tains four different cliques of the size 3. Thus, the cubic terms that correspond to

the cliques of the size 3 can be converted, to quadratic terms as follows:

• At each pixel (i, j) select the cubic term that corresponds to clique γ8


FIGURE 46 – Part of an image lattice for the 2nd order neighborhood system and cliques of size three

• Reduce this term and the cubic term of the clique γ6 at pixel (i − 1, j − 1), if possible, by eliminating variables (i − 1, j) and (i, j − 1)

• For pixel (i, j) select the cubic term that corresponds to the clique γ5

• Reduce this term and the cubic term of the clique γ7 at pixel (i − 1, j + 1), if possible, by eliminating variables (i − 1, j) and (i, j + 1)

After a single scan of the image, all the cubic terms will have been converted to quadratic terms, and every term will have been visited only once. To illustrate the enhancement introduced by the proposed implementation, consider the following example: a linear search in a list runs in O(n), where n is the number of elements. An image of size R × C has 4(R − 1)(C − 1) triple cliques in the second order neighborhood system. Each triple clique has 4 terms with |S| > 1, with 9 elements in total, as shown in Eq. (62). So applying Algorithm 7 directly, without the proposed implementation, has a search overhead of O(36(R − 1)(C − 1)).

Notice that this scenario is not unique. Many other scenarios can be chosen

for image scanning and selection of pairs of higher order cliques to be reduced.

However, in the efficient scenario every higher order term must be converted to a

quadratic term after being visited only once.


E. Experimental Results

To illustrate the potential of higher order cliques in modelling complex ob-

jects and assess the performance of the proposed algorithm, image segmentation

into two classes (object and background) is considered. As described in Sec.II.C

the MAP estimate of f , given the input image, is equivalent to minimize an energy

function of the form (57). Where, the set of labels is 0 ≡ “BCK”, 1 ≡ “OBJ”and each pixel’s label represents a variable in this energy. So, one has an n binary

variables energy function. The unary term in this energy function V (fp) is chosen

to be:

V (fp) = ||Ip − Ifp||2, (71)

where I_p is the feature vector at pixel p, e.g., a 4D vector I_p = (I_Lp, I_ap, I_bp, I_tp) [78], whose first three components are the pixel-wise L*a*b* color components and whose fourth, I_tp, is a local texture descriptor [79]. Seeds selected from the input image

can be used to estimate feature vectors for the object, I1, and background, I0. Using

feature vectors I1 and I0, an initial binary map can be estimated. The pairwise and

third order cliques’ potentials are analytically estimated from the initial map using

the proposed methods described in Sec.III.B and Sec.III.B.2, respectively.

In all experiments, the second order neighborhood system is selected, with clique sizes from 1 to 3. By defining the cliques' potentials (unary, pairwise, and third order),

one identifies the target segmentation’s energy that needs to be minimized. After

that, Algorithm 6 is used to compute the coefficients of the polynomial that repre-

sents the segmentation’s energy. Then Algorithm 7 generates a quadratic version

of this polynomial. Finally, the extended roof duality optimization algorithm [9],

discussed in Sec.II.F solves the quadratic pseudo-Boolean function. In the exper-

iments that follow, images are segmented twice: first, with unary and pairwise

cliques, and then with unary and third order cliques in the MGRF model. Of

course, cliques of greater sizes can be more efficient for describing complex re-

gions. The third order is used for illustration purposes only.


FIGURE 47 – Starfish segmentation results: (a) the pairwise cliques results, and (b) the higher order cliques results.

Fig. 47 shows the segmentation results for a starfish. As shown in the results, unlike the pairwise interaction, Fig. 47(a), the high order interaction, Fig. 47(b), overcomes the intensity inhomogeneities of the starfish and its background. For more challenging situations, some parts of the starfish are occluded in Figures 48 and 49. Again the high order interaction (see b and d) succeeds in recovering the correct boundary of the starfish, whereas the pairwise interaction (see a and c) does not. The average execution time for this experiment is 6 sec. in the higher order case, compared to 2 sec. in the pairwise case.

More segmentation results for different colored objects are shown in Fig-

ures 50, 51, and 52. These images are from the Berkeley Segmentation Dataset [80].

As shown in Fig. 50 (a,c), numbers refer to regions that contain inhomogeneities

where the pairwise interaction fails. However, as expected, the high order interaction overcomes them (see b,d). More results are illustrated in Fig. 51. In Fig. 52 some artificial occlusions are made by letting some object regions take the background color. The results illustrate that the high order interaction can still obtain the correct segmentations.


FIGURE 48 – Starfish segmentation results with occlusions: (a,c) the pairwise cliques results, and (b,d) the higher order cliques results.


FIGURE 49 – More starfish segmentation results with occlusions: (a,c) the pairwise cliques results, and (b,d) the higher order cliques results.


FIGURE 50 – More segmentation results: (a,c) the pairwise cliques results (numbers in images refer to the regions with inhomogeneities), and (b,d) the higher order cliques results.


FIGURE 51 – More segmentation results: (a,c,e) the pairwise cliques results (numbers in images refer to the regions with inhomogeneities), and (b,d,f) the higher order cliques results.


FIGURE 52 – More segmentation results for partially occluded objects: (a,c) the pairwise cliques results (numbers in images refer to the regions with artificial occlusions), and (b,d) the higher order cliques results.


F. Conclusions

This chapter introduced an efficient link between binary MGRF models with higher order cliques and those with pairwise cliques. It proposed an algorithm [81] that

can transform a general pseudo-Boolean function into a quadratic pseudo-Boolean

function and provably guarantees the obtained quadratic function has the same

minimum at the same variables as the initial higher order one. The algorithm

was efficiently implemented for image-related graphical models. Thus, one can

apply the well known pairwise MGRFs solvers to the higher order MGRFs. The

MGRF parameters were analytically estimated. Experimental results showed the

proposed framework notably improved image segmentation and therefore may be

useful for solving many other computer vision problems.


CHAPTER VI

A NOVEL SHAPE REPRESENTATION AND APPLICATION FOR IMAGESEGMENTATION

This chapter proposes a novel segmentation approach based on the graph

cuts technique with shape constraints. The segmentation approach depends on

both image appearance and shape information. Shape information is gathered

from a set of training shapes. Then the shape variations are estimated using a new

distance probabilistic model. This model approximates the marginal densities of

the object and its background in the variability region using a Poisson distribu-

tion refined by positive and negative Gaussian components. To segment an ob-

ject in the given image, first it is aligned with the training images so one can use

the distance probabilistic model. As discussed in Sec.IV.B, the object gray level is

approximated with a linear combination of Gaussian distributions with positive

and negative components. The spatial interaction between the neighboring pix-

els is identified using the new analytical approach introduced in Sec.III.B. Finally,

a new energy function is formulated using both image appearance models and

shape constraints. This function is globally minimized using s/t graph cuts to get

the optimal segmentation. Experiments show that the proposed technique gives

promising results compared to others without shape constraints.

A. Introduction

Segmentation is a fundamental problem in image processing. There are

many simple techniques, such as region growing or thresholding, for image seg-

mentation. Although these techniques are widely known for their simplicity and speed, they cannot achieve accurate segmentation because they depend only on the marginal probability distributions,

and in most cases signal ranges for different objects overlap. To overcome this

problem, many methods try to exploit the spatial interaction between segments as

well as the regional properties of segments. Also parametric deformable models

(e.g. [82]) and geometrical deformable models (level sets, e.g. [83]) are powerful methods and have been widely used for segmentation problems. However, all these methods tend to fail in the presence of noise, gray level inhomogeneities, diffused boundaries, or occluded shapes, and they do not take advantage of a priori models. Therefore segmentation algorithms cannot depend only on image information; they also have to exploit prior knowledge of the shapes and other properties of the structures to be segmented.

Leventon et al. [35] combine the shape and deformable model by attracting

the level set function to the likely shapes from a training set specified by princi-

pal component analysis (PCA). Huang et al. [37] combine registration with segmentation in an energy minimization problem. The evolving curve is registered iteratively with a shape model using level sets. They minimize a certain function to estimate the transformation parameters. Unfortunately, this approach may get stuck in a local minimum, and its coefficients must still be tuned. In [38], shapes

are represented with a linear combination of 2D distance maps where the weight

estimates maximize the distance between the mean gray values inside and outside

the shape. In [39] a shape prior and its variance obtained from training data are

used to define a Gaussian distribution, which is then used in the external energy

component of a level sets framework. To make the shape guide the segmentation process, Chen et al. [36] defined an energy functional which basically minimizes a Euclidean distance between a given point and its shape prior.

In this chapter, a new segmentation approach is proposed. This approach

uses graph cuts to combine region and boundary properties of segments as well

as shape constraints. From a set of aligned images an image consisting of three

segments (common object, common background, and shape variability region) is


generated. The shape variations are modelled using a new distance probabilistic

model. This distance model approximates the distance marginal densities of the

object and its background inside the variability region using a Poisson distribution

refined by positive and negative Gaussian components. For each given image, to

use the distance probabilistic model, the given image is aligned with the training

images. Then its gray level is approximated using an LCG model with positive

and negative components. Finally, a new energy function is globally minimized

using s/t graph cuts to get the optimal segmentation. This function is formulated

such that it combines region and boundary properties, and the shape information.

B. Proposed Segmentation Framework

In this chapter, the goal is to find the optimal segmentation, best labelling

f , by minimizing a new energy function which combines region and boundary

properties of segments as well as shape constraints. Image appearance models are

discussed in sections III.B and IV.B. In this section the shape model is explained.

1. Shape Model Construction

A shape model of an object is created from a training set of images for that

object. Fig. 53 illustrates the steps used to create a human kidney shape model

from human kidney Dynamic Contrast Enhanced Magnetic Resonance Imaging

(DCE-MRI) slices. Fig. 53(a) shows a sample of the DCE-MRI kidney slices. First,

the kidneys are manually segmented (by a radiologist), as shown in Fig. 53(b).

Then the segmented kidneys are aligned using 2D rigid registration [84], see Fig.

53(c). The aligned images are converted to binary images, as shown in Fig. 53(d).

Finally, a labelled "shape image" Ps = K ∪ R ∪ X is generated as shown in Fig. 54(a). The white color represents K (kidney), black represents R (background), and gray is the variability region X. To model the shape variations in the variability

region X , a distance probabilistic model is used. The distance probabilistic model


FIGURE 53 – Samples of kidney training data images: (a) original, (b) segmented, (c) aligned, (d) binary.

describes the object (and background) in the variability region as a function of the normal distance d_p from a pixel p ∈ X to the kidney/variability contour C_KX:

d_p = min_{c ∈ C_KX} ||p − c||.    (72)

Each set of pixels located at equal distance d_p from C_KX constitutes an iso-contour C_dp of C_KX, as shown in Fig. 54(b) (to clarify the iso-contours, the variability region is enlarged, not to scale with the object). The kidney distance histogram is estimated as follows. The histogram entry at distance d_p is defined as

h_dp = Σ_{i=1}^{Mt} Σ_{p ∈ C_dp} δ(p ∈ K_i),    (73)


FIGURE 54 – (a) The labelled image, (b) The iso-contours

where the indicator function δ(A) equals 1 when the condition A is true, and zero

otherwise, Mt is the number of training images, and Ki is the kidney region in the

ith training image. The distance dp is changed until the whole distance domain

available in the variability region is covered. Then the histogram is multiplied by

the kidney prior value, which is defined as follows:

π_K = (1 / (Mt |X|)) Σ_{i=1}^{Mt} Σ_{p ∈ X} δ(p ∈ K_i).    (74)

Since each iso-contour C_dp is a wave normally propagated from C_KX, a reasonable assumption is that the probability of an iso-contour C_dp belonging to the object decays exponentially as d_p increases. To estimate the marginal density of the kidney, a Poisson distribution can therefore be fitted to the object distance histogram over the discrete index d_p; the set of pixels belonging to the iso-contour C_dp is then assumed to obey a Poisson process.

The same scenario is repeated to get the marginal density of the background. The

kidney and background distance empirical densities and the estimated Poisson

distributions are shown in Fig. 55 (a) and (b), respectively.
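The construction of Eqs. (72)-(74) can be sketched by brute force on toy binary masks (illustrative Python; the mask encoding, the 4-neighbor contour extraction, and the rounding of distances to integer bins are assumptions, not the dissertation's implementation):

```python
import math

def distance_histogram(kidney_masks, common_kidney):
    """Sketch of Eqs. (72)-(74) on toy binary masks (lists of lists of 0/1).
    Brute-force Euclidean distance from each variability pixel to the
    kidney/variability contour C_KX, plus the kidney prior pi_K."""
    Mt, R, C = len(kidney_masks), len(common_kidney), len(common_kidney[0])
    K = {(i, j) for i in range(R) for j in range(C) if common_kidney[i][j]}
    union = {(i, j) for m in kidney_masks
             for i in range(R) for j in range(C) if m[i][j]}
    X = union - K                                  # variability region
    # contour C_KX: K pixels with at least one 4-neighbor outside K
    contour = [(i, j) for (i, j) in K
               if {(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)} - K]
    hist, inside = {}, 0
    for (i, j) in X:
        dp = round(min(math.hypot(i - a, j - b) for a, b in contour))  # Eq. (72)
        members = sum(m[i][j] for m in kidney_masks)  # training masks containing p
        hist[dp] = hist.get(dp, 0) + members          # Eq. (73)
        inside += members
    pi_K = inside / (Mt * len(X))                     # Eq. (74)
    return hist, pi_K
```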


2. Distance Probabilistic Model

The distance marginal density of each class P (dp | fp) is estimated as fol-

lows. Since each class fp (object or background) does not follow a perfect Poisson

distribution, there will be a deviation between the estimated and the empirical

densities. This deviation is modelled by a linear combination of Gaussians with

positive and negative components. So the distance marginal density of each class

consists of a Poisson distribution and a LCG with C+fp

positive and C−fp

negative

components as follows:

P (dp|fp) = ϑ(dp|ξfp) +

C+fp∑

r=1

w+fp,rϕ(dp|θ+

fp,r)−C−fp∑

l=1

w−fp,lϕ(dp|θ−fp,l), (75)

where ϑ(d_p | ξ_fp) is a Poisson density with rate ξ_fp; the Poisson parameter is estimated using the maximum likelihood estimator. Fig. 55(c) and (d) illustrate the probabilistic model components for the object and background, respectively. The empirical and the final estimated densities are shown in Fig. 55

(e) for the kidney and (f) for the background.
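A sketch of evaluating the density of Eq. (75) (illustrative Python; the component parameters are assumed to be already estimated, e.g., by the maximum likelihood estimator for the Poisson rate):

```python
import math

def poisson_pmf(d, rate):
    # Poisson term of Eq. (75), for a discrete distance index d
    return math.exp(-rate) * rate ** d / math.factorial(d)

def gaussian_pdf(d, mu, sigma):
    return math.exp(-0.5 * ((d - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def distance_density(d, rate, pos, neg):
    """Eq. (75) sketch: Poisson term plus positive Gaussian components
    minus negative ones; pos and neg are lists of (weight, mu, sigma)."""
    p = poisson_pmf(d, rate)
    p += sum(w * gaussian_pdf(d, m, s) for w, m, s in pos)
    p -= sum(w * gaussian_pdf(d, m, s) for w, m, s in neg)
    return p
```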

3. Graph Cuts-based Optimal Segmentation

Denote by d the set of distances of pixels in the variability region (the shape information). Assuming the image I and the distances d are conditionally independent given f, a probability model of the shape constraints, the input image, and its desired map is given by:

P(f | I, d) ∝ P(f) P(I | f) P(d | f).    (76)

Similar to what was explained in Sec.II.C.1, the MAP estimate of f is equivalent to minimizing the following function:

E(f) = Σ_{{p,q} ∈ N} V(f_p, f_q) − Σ_{p ∈ P} log P(I_p | f_p) − Σ_{p ∈ P} log P(d_p | f_p),    (77)

where V (fp, fq) represents the penalty for the discontinuity between pixels p and

q. This model is discussed in Sec.III.B. The second term measures how much


FIGURE 55 – (a,b) Empirical densities and the estimated Poisson distributions, (c,d) components of the distance probabilistic models, (e,f) final estimated densities (left column: kidney; right column: background).


TABLE 9
GRAPH EDGES WEIGHTS.

Edge      Weight                                 For
{p, q}    V(f_p, f_q)                            {p, q} ∈ N
{s, p}    −log[P(I_p | "1") · P(d_p | "1")]      p ∈ X
          ∞                                      p ∈ K
          0                                      p ∈ R
{p, t}    −log[P(I_p | "0") · P(d_p | "0")]      p ∈ X
          0                                      p ∈ K
          ∞                                      p ∈ R

assigning a label fp to pixel p disagrees with the pixel intensity Ip. This model

is discussed in Sec.IV.B. The last term measures how much assigning a label fp

to pixel p disagrees with the shape information, as explained in the previous section.

To segment an object, a graph (e.g. Fig. 6) is constructed and the weight of

each edge is defined as shown in Table (9). Then the optimal segmentation bound-

ary between the object and its background is obtained by finding the minimum

cost cut on this graph. The minimum cost cut is computed exactly in polynomial time for two-terminal graph cuts with positive edge weights via the s/t Min-Cut/Max-Flow algorithm [53].
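The t-link weights of Table 9 can be sketched per pixel as follows (illustrative Python; the finite constant `hard` stands in for the infinite weights, as is common in max-flow implementations):

```python
import math

def t_link_weights(region, P_I_obj, P_I_bck, P_d_obj, P_d_bck, hard=1e9):
    """Sketch of the t-link weights in Table 9 for one pixel. region is
    'X' (variability), 'K' (kidney), or 'R' (background); hard stands in
    for the infinite weight that forces the label inside K and R."""
    if region == 'X':
        s = -math.log(P_I_obj * P_d_obj)   # edge {s, p}
        t = -math.log(P_I_bck * P_d_bck)   # edge {p, t}
    elif region == 'K':
        s, t = hard, 0.0                   # must be labelled object
    else:
        s, t = 0.0, hard                   # must be labelled background
    return s, t
```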

C. Experiments

The proposed segmentation framework is tested on a data set of DCE-MRI

of human kidney. To segment a kidney slice, the following scenario is used. The given image is aligned with the aligned training images. The gray level

marginal densities of the kidney and its background are approximated using the

proposed LCG model with positive and negative components. Fig. 56(a) shows the

original image, (b) shows the aligned image, (c) illustrates the empirical densities


FIGURE 56 – Gray level probabilistic model for the given image: (a) original image, (b) aligned image, (c) initial density estimation, (d) LCG components, (e) final density estimation, (f) marginal densities. Segmented kidney: (g) result of gray level threshold (error 102.6%), (h) result of graph cuts without shape constraints (error 41.9%), (i) proposed approach result (error 2.5%).


as well as the initial density estimated using the dominant modes in the LCG model, (d) illustrates the LCG components, (e) shows the closeness of the final estimated gray level density to the empirical one, and (f) shows the marginal gray level densities of the object and background with the best threshold. To illustrate how close the gray levels of the kidney and its background are, (g) shows the segmentation using a gray level threshold of 72. To emphasize the accuracy of the proposed approach, (h) shows the segmentation using the graph cuts technique without shape constraints (all t-link weights become − log P(Ip | fp)), and (i) shows the results of the proposed approach.

Samples of the segmentation results for different subjects are shown in Figures 57 and 58: (a) illustrates the input images, (b) shows the results of the graph cuts technique without shape constraints, and (c) shows the results of the proposed approach.

Evaluation: to evaluate the results, the percentage segmentation error relative to the ground truth (manual segmentation produced by an expert) is calculated as follows:

error% = 100 × (Number of misclassified pixels) / (Number of kidney pixels) (78)

For each given image, the binary segmentation is shown together with the percentage segmentation error; misclassified pixels are shown in red.
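Eq. (78) is straightforward to compute from binary masks; a minimal sketch, using a made-up 3×3 toy example rather than the kidney data:

```python
import numpy as np

def segmentation_error(result, ground_truth):
    """Percentage error of Eq. (78): misclassified pixels relative to
    the number of kidney pixels in the ground truth."""
    result = np.asarray(result, dtype=bool)
    ground_truth = np.asarray(ground_truth, dtype=bool)
    misclassified = np.count_nonzero(result != ground_truth)
    kidney_pixels = np.count_nonzero(ground_truth)
    return 100.0 * misclassified / kidney_pixels

gt  = np.array([[0, 1, 1],
                [0, 1, 1],
                [0, 0, 0]])
seg = np.array([[0, 1, 0],
                [0, 1, 1],
                [1, 0, 0]])
print(segmentation_error(seg, gt))  # 2 misclassified / 4 kidney pixels -> 50.0
```

Note that the error is normalized by the kidney size, not the image size, which is why poor segmentations can exceed 100%.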

The statistical analysis of 33 slices, which are different from the training data set, is shown in Table (10). The unpaired t-test shows that the differences in mean error between the proposed segmentation and both graph cuts without a shape prior and the best-threshold segmentation are statistically significant (the two-tailed value P is less than 0.0001).
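The unpaired t statistic itself can be computed directly with a pooled variance; a small sketch (the per-slice error values below are illustrative stand-ins, not the actual results):

```python
from math import sqrt

def unpaired_t(a, b):
    """Two-sample (unpaired) t statistic with pooled variance, as used
    to compare mean segmentation errors between two methods."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    t = (m1 - m2) / sqrt(pooled * (1.0 / n1 + 1.0 / n2))
    return t, n1 + n2 - 2  # statistic and degrees of freedom

# Hypothetical per-slice errors for two methods (illustrative values only)
proposed = [5.1, 6.2, 4.8, 5.9]
graph_cut = [49.0, 31.5, 62.3, 55.7]
t, df = unpaired_t(proposed, graph_cut)
print(t, df)  # a large |t| corresponds to a small two-tailed P
```

The two-tailed P value is then read from the t distribution with the returned degrees of freedom (e.g., via statistical tables or a library such as SciPy).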

D. Validation

Due to hand-shaking errors, it is difficult to get an accurate ground truth from manual segmentation. Thus, to evaluate the performance of the proposed algorithm,


FIGURE 57 – More segmentation results. (a) Original images (b) results of graph cuts without shape constraints, 77.4% and 26.4% error (c) proposed approach results, 6.7% and 5.5% error


FIGURE 58 – More segmentation results. (a) Original images (b) results of graph cuts without shape constraints, 74% and 52% error (c) proposed approach results, 5.1% and 4.6% error


TABLE 10
ACCURACY OF THE PROPOSED APPROACH SEGMENTATION ON 33 SLICES IN COMPARISON TO GRAPH CUT WITHOUT SHAPE AND THRESHOLD TECHNIQUE.

                          Algorithm
Error%             Proposed    GC        TH
Min.               4.0         20.9      38.4
Max.               7.4         108.5     231.2
Mean               5.7         49.8      128.1
Std.               0.9         24.3      55.3
Significance, P    —           < 0.0001  < 0.0001

a phantom, shown in Fig. 59(a), is created with a topology similar to the human kidney. Furthermore, the phantom mimics the pyramids that exist in any kidney. The kidney, pyramid, and background signals of the phantom are generated according to the distributions shown in Fig. 56(f) using the inverse mapping method [1]. Fig. 59(b,c) shows that the proposed approach is almost 26 times more accurate than the graph cuts technique without shape constraints.
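The inverse mapping (inverse CDF) method [1] used to generate the phantom signals can be sketched as follows for a discrete distribution; the three gray levels and their weights below are made-up stand-ins for the fitted marginal densities of Fig. 56(f):

```python
import random
import bisect

def inverse_mapping_sample(values, probs, n, seed=0):
    """Draw n samples from a discrete gray-level distribution by the
    inverse mapping method: u ~ U(0,1) is mapped through the inverse
    of the cumulative distribution function."""
    cdf, acc = [], 0.0
    for p in probs:
        acc += p
        cdf.append(acc)
    rng = random.Random(seed)
    # clamp guards against floating-point round-off in the last CDF entry
    return [values[min(bisect.bisect_left(cdf, rng.random()),
                       len(values) - 1)] for _ in range(n)]

# Hypothetical 3-level "object" marginal (not the fitted LCG densities)
levels = [60, 120, 180]
weights = [0.2, 0.5, 0.3]
samples = inverse_mapping_sample(levels, weights, 10000)
print(samples[:5], samples.count(120) / len(samples))  # frequency near 0.5
```

The same idea extends to the continuous LCG densities by numerically inverting their CDFs.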

E. Conclusions

In this chapter, a new segmentation approach [85] that uses graph cuts to combine region and boundary properties of segments as well as shape constraints was proposed. Shape variations were estimated using a new distance probabilistic model. To get the optimal segmentation, a new energy function was formulated using the image appearance models discussed in previous chapters together with the shape constraints; this function was then globally minimized using s/t graph cuts. Experimental results showed that the shape constraints overcame the gray level inhomogeneities problem and precisely guided the graph cuts to accurate segmentations

(with mean error 5.4% and standard deviation 1.6%) compared to graph cuts without shape constraints (mean error 62.7% and standard deviation 27.5%).

FIGURE 59 – Kidney phantom. (a) The phantom (b) results of graph cuts without shape constraints, 19.54% error (c) proposed approach results, 0.76% error


CHAPTER VII

STEREO MATCHING-BASED HUMAN FACES RECONSTRUCTION

Image labeling can serve as a formulation for diverse computer vision and image processing applications. In addition to its applicability to image segmentation and image restoration, the image labeling formulation can be used to solve one of the most fundamental problems in computer vision: the stereo matching problem.

A. Introduction

Stereo matching is an essential problem in computer vision and has been studied in a huge number of works (e.g., [8, 44, 49, 51]). Stereo matching is a special case of a more general problem called the image matching problem. The latter can be formulated as a labeling problem as follows. Let I and I′ denote two observed images. Typically, one of these images is chosen to be the reference image I, with set of pixels P. The labeling algorithm assigns each pixel p ∈ P a label (displacement) fp such that Ip and I′p+fp are the intensities of corresponding pixels. Similar to the formulation in Sec. II.C, image pixels represent the sites. However, instead of gray levels, displacements (∂x, ∂y) in the image spatial domain are used as the labels. The desired displacement field is the mapping f : P −→ L, where L = {(∂x^1, ∂y^1), · · · , (∂x^K, ∂y^K)} is the set of labels and K is the number of labels.

Similar to what has been discussed in Sec. II.C, the framework for this problem can be the search for MAP configurations of an MRF model, and the MAP problem is formulated as minimizing an interaction energy for the model. Two main assumptions are typically used in this problem: (1) the intensity of each pixel Ip is similar to the intensity of the corresponding pixel in the other image, I′p+fp, and (2)


the displacement field f should be smooth. Therefore, finding the desired displacement field f is equivalent to minimizing the same energy function as in Eq. (10), which incorporates these assumptions. It is rewritten here:

E(f) = ∑_{p,q∈N} V(fp, fq) + ∑_{p∈P} D(fp). (79)

A proper method for computing the data penalty term D(fp) is introduced in [8, 51]. This method uses the Birchfield and Tomasi approach [86], with a slight variation, to handle sampling artifacts. Thus D(fp) can be computed as follows [8]:

ϱ1(p, fp) = min_{fp−1/2 ≤ ℓ ≤ fp+1/2} |Ip − I′p+ℓ|

ϱ2(p, fp) = min_{p−1/2 ≤ q ≤ p+1/2} |Iq − I′p+fp|

ϱ(p, fp) = min(ϱ1, ϱ2)

D(fp) = ϱ(p, fp)². (80)

In the previous formulation of image segmentation, the smoothness term was chosen to be a piecewise-constant prior. In contrast, in the matching problem the smoothness term is chosen to be a piecewise-smooth prior to allow smooth variations in the displacement field:

V(fp, fq) = min(|fp − fq|, M), (81)

where M is a constant. Note that M = 1 yields the piecewise-constant prior, while M > 1 yields a piecewise-smooth prior.
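A minimal sketch of the two energy terms: the truncated-linear smoothness prior of Eq. (81), and a simplified 1-D data penalty in the spirit of Eq. (80) but without the Birchfield-Tomasi half-pixel interpolation; the scanline intensities below are hypothetical:

```python
def truncated_linear(fp, fq, M):
    """Piecewise-smooth prior of Eq. (81): the penalty grows linearly
    with the label difference but is truncated at M, so large
    discontinuities are not over-penalized."""
    return min(abs(fp - fq), M)

def data_penalty(left, right, p, fp):
    """Simplified 1-D data term in the spirit of Eq. (80), without the
    half-pixel interpolation: squared intensity difference between
    pixel p and its candidate match p + fp."""
    return (left[p] - right[p + fp]) ** 2

left  = [10, 10, 50, 50, 10]
right = [10, 50, 50, 10, 10]
print(data_penalty(left, right, 2, -1))  # |50 - 50|^2 -> 0
print(truncated_linear(3, 0, 2))         # min(3, 2) -> 2
```

The half-pixel minimizations in Eq. (80) would replace the direct lookups with a minimum over linearly interpolated intensities.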

Fig. 60 illustrates simple examples of the image matching problem. In each row of Fig. 60, the left column shows the object in the reference image and the relative position of its candidate in the other image; the latter is illustrated by the border of the object. After minimizing Eq. (79) using Algorithm 2 in Sec. II.E.3 and applying the generated displacement fields, the objects are matched as shown in the right column of Fig. 60.


FIGURE 60 – Image matching results. Left: relative positions before matching. Right: matching results.


FIGURE 61 – General stereo pair setup. The relation between the depth and the disparity.

B. Stereo Matching

In the classical stereo matching problem, the setup consists of two cameras observing a static scene. The objective is to find the pairs of corresponding points p and q that result from the projection of the same scene point into the two images. As shown in Fig. 61, the distance from the scene point to the cameras is determined by the difference in the image locations of points p and q; this difference is called the disparity. The two cameras form a rectified pair if their positions differ only by a translation in the x-direction. In that case, the horizontal disparity px − qx of a corresponding pair p and q is inversely proportional to the depth of the corresponding scene point, as shown in Fig. 62. To reconstruct the 3D shape of an object, one needs to determine the disparities of the correspondences between pixels of the images. Usually, these disparities are represented as gray levels in an image called the disparity map or depth map. An example of a depth map is shown in Fig. 63.
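For a rectified pair, this inverse relation is the standard Z = fB/d. A sketch: the 200 mm focal length and 0.6 m baseline come from Table (11), while the disparity value, expressed here in metres on the image plane, is a hypothetical assumption for illustration:

```python
def depth_from_disparity(disparity, focal_length, baseline):
    """Rectified-pair depth: Z = f * B / d, with f, B, and d expressed
    in consistent units. Depth is inversely proportional to disparity."""
    if disparity <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_length * baseline / disparity

f_m, baseline_m = 200.0 / 1000.0, 0.6   # 200 mm lens, 0.6 m baseline (Table 11)
print(depth_from_disparity(0.04, f_m, baseline_m))  # -> 3.0 (metres)
```

In practice disparities are measured in pixels and converted using the pixel pitch, but the proportionality is the same.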

Finding the disparity map f for a rectified stereo pair is an image matching (image labeling) problem, where I and I′ represent the left and right images, respectively, and the set of labels L is the disparity range {∂x^1, · · · , ∂x^K}. However,


FIGURE 62 – Rectified stereo pair setup. The depth is inversely proportional to the disparity.


FIGURE 63 – Example of the depth map: (a) one of the image pair and (b) the corresponding depth map.


the problem formulation discussed in the previous section does not encode the constraints of visual correspondence. Uniqueness is one of these constraints: each pixel in I corresponds to at most one pixel in I′, whereas in the previous formulation two pixels in I can be mapped to one pixel in I′. Occlusion is another constraint: some pixels have no correspondence, whereas the previous formulation assigns every pixel a label. To overcome these problems, Kolmogorov [51] treated the two images symmetrically by computing the disparities for both images at the same time. In this case, P represents the set of pixels of both images and f is the labeling of both images. To enforce the visibility constraints, the author in [51] modified the data penalty term in the energy function, Eq. (79), such that it is computed only for pixels that have the same disparity in both images. In other words, if pixel p is located in the left image and pixel q in the right image, then D(fp) = ϱ(p, fp)² δ(fp = fq), where q = p + fp and p = q + fq (e.g., if fp = ∂x^1, then fq = −∂x^1). After minimizing the energy and finding the labeling f, a pixel is considered occluded if q = p + fp but p ≠ q + fq. Occluded regions can be filled with the average of their neighbors' disparities.
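The consistency check and occlusion filling just described can be sketched on 1-D scanlines; the toy disparities below are hypothetical, assuming fp = −fq for consistent pairs as above:

```python
def occlusions(f_left, f_right):
    """Left-right consistency check: pixel p in the left image is
    occluded unless its match q = p + f_left[p] maps back to p,
    i.e. p == q + f_right[q]."""
    occluded = []
    for p, d in enumerate(f_left):
        q = p + d
        if not (0 <= q < len(f_right)) or q + f_right[q] != p:
            occluded.append(p)
    return occluded

def fill_occlusions(f, occluded):
    """Replace occluded disparities by the average of their
    non-occluded immediate neighbors (simple 1-D variant)."""
    filled = list(f)
    for p in occluded:
        neigh = [f[i] for i in (p - 1, p + 1)
                 if 0 <= i < len(f) and i not in occluded]
        if neigh:
            filled[p] = sum(neigh) / len(neigh)
    return filled

# Toy scanlines: left pixel p matches right pixel p + f_left[p], and
# right pixel q matches left pixel q + f_right[q]
f_left  = [-1, -1, -2, -1]
f_right = [1, 1, 1, 1]
occ = occlusions(f_left, f_right)
print(occ, fill_occlusions(f_left, occ))  # pixels 0 and 2 fail the check
```

Pixel 0 fails because its match falls outside the right image, and pixel 2 fails because its match does not map back to it; both then inherit the average disparity of their visible neighbors.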

1. Human Faces Reconstruction

As an application, the stereo matching approach is used to reconstruct human faces in a 3D face recognition framework. Fig. 64 illustrates the setup used to capture the images, and the setup parameters are given in Table (11). Fig. 65 shows an example of a reconstructed face; more results are shown in Fig. 66.


FIGURE 64 – The system setup

TABLE 11
STEREO SETUP PARAMETERS

Range (m)  Baseline B (m)  Zoom λ (mm)  Focus  Pan/Yaw (degree)  Tilt/Pitch (degree)  Roll (degree)
3          0.6             200          Range  0                 0                    0


(a)

(b)

FIGURE 65 – Reconstruction results. (a) The stereo pair, (b) left: the depth map; right: the reconstructed shape.


FIGURE 66 – More reconstruction results. Left: one of the stereo pair. Middle: frontal view of the reconstructed shape. Right: side view of the reconstructed face.


CHAPTER VIII

CONCLUSION AND FUTURE WORK

This dissertation addressed the image labeling problem. More specifically, it focused on image modeling, a very important component of any image labeling system. The dissertation proposed accurate mathematical models of image appearance and shape in order to describe objects of interest in images.

• An intensity model, which estimates the marginal density of each class in the given image, was built using a new unsupervised technique based on maximizing a derived joint likelihood function.

• Spatial interaction, which describes the relation between pixels within each class, was modeled using a Markov-Gibbs random field with a Potts prior whose parameters were estimated analytically. Statistical results on more than two thousand synthetic images confirmed the robustness of the proposed analytical estimation approach over conventional methods.

• A new shape model was proposed. In this model, the shape variations be-

tween an object and its candidates were estimated using a new probabilistic

model based on a Poisson distribution.

The image appearance models were used in a novel framework for automatic multimodal gray scale image labeling. A joint MGRF model was used to describe the input image and its desired map with more accurate model identification. No user interaction was needed; instead, the image was initially labeled using the proposed intensity model. An energy function using the appearance models was formulated and globally minimized using a standard graph cuts


approach. Experimental results showed that, without optimizing any tuning parameters, the proposed approach was fast, robust to noise, and accurate compared to state-of-the-art algorithms.

To exploit the modeling capability of higher-order MRFs and the efficiency of pairwise MRF solvers, this dissertation proposed an efficient transform that converts higher-order Gibbs energies to quadratic energies for binary MRFs. This transformation can be applied to many computer vision problems; in this dissertation it was demonstrated on color image segmentation. The experiments showed that the performance of the proposed approach was encouraging.

Another framework, which exploits both the appearance models and the shape model, was proposed. To get the optimal segmentation, a new energy function was formulated using these models and was globally minimized using a standard graph cuts approach. Experiments confirmed that the shape constraints overcame the gray level inhomogeneities problem and precisely guided the graph cuts to accurate segmentations (with mean error ≈ 5.4% and standard deviation ≈ 1.6%) compared to graph cuts without shape constraints (mean error ≈ 62.7% and standard deviation ≈ 27.5%).

A. Directions for Future Research

There are many possible directions in which the work proposed in this dis-

sertation can be extended or enhanced. These include, but are not limited to, the

following:

• The proposed unsupervised framework is limited to multimodal gray scale images. Investigating a general framework suitable for a broader class of gray scale images, as well as color and texture images, would be a good extension.

• As in conventional approaches, the proposed work used the standard neighborhood systems (the 6-neighborhood system in the 3D case and the 4-neighborhood system in the 2D case). Studying the effect of selecting important neighbors from a database of the object on the labeling result is a possible direction for future work.

• New methods for reducing the Gibbs energies of higher-order cliques can be investigated such that the generated quadratic energies are submodular; in this case, the optimization problem can be solved in polynomial time.

• The proposed shape model and its segmentation approach depend on an aligned data set, so another possible direction is a graph cuts framework that simultaneously performs segmentation and registration.

• Another possible direction that could be investigated is the integration of deformable models (active contour and level set models) into the graph cuts formulation.


REFERENCES

[1] A.A. Farag, A. El-Baz, and G. L. Gimel’farb. Density estimation using modi-fied expectation maximization for a linear combination of gaussians. In Pro-ceedings of ICIP, volume 3, pages 1871 – 1874, 2004.

[2] G. L. Gimel’farb. Image Texture and Gibbs Random Field. Kluwer AcademicPublishers: Dordrecht, 1999.

[3] Daniel Cremers. Statistical Shape Knowledge in Variational Image Segmentation.PhD thesis, University of Mannheim, Mannheim, Germany, 2002.

[4] Y. Y. Boykov and M. P. Jolly. Interactive organ segmentation using graph cuts.In Proceedings of MICCAI, LNCS 1935, pages 276–286, 2000.

[5] Yuri Boykov and Gareth Funka-Lea. Graph cuts and efficient N-D image seg-mentation. International Journal of Computer Vision, 70(2):109–131, 2006.

[6] Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation.IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905,2000.

[7] J. E. Besag. On the statistical analysis of dirty pictures. Journal of the RoyalStatistical Society, Series B, 48:259–302, 1986.

[8] Olga Veksler. Efficient Graph Based Energy Minimization Methods in ComputerVision. PhD thesis, Cornell University, Ithaca, NY, 1999.

[9] Carsten Rother, Vladimir Kolmogorov, Victor S. Lempitsky, and Martin Szum-mer. Optimizing binary MRFs via extended roof duality. In Proceedings ofCVPR, 2007.

[10] C. C. Chen. Markov Random Field Models in Image Analysis. PhD thesis, Michi-gan State University, East Lansing, 1988.

[11] R. C. Dubes and A. K. Jain. Random field models in image analysis. Journal ofApplied Statistics, 16:131–164, 1989.

[12] A. K. Jain. Advances in mathematical models for image processing. Proceed-ings of the IEEE, 69:502–528, 1981.

[13] V. N. Vapnik. Density Estimation for Statistics and Data Analysis. Chapman andHall, 1986.

[14] V. N. Vapnik. Statistical Learning Theory. John Wiley and Sons, New York,1998.


[15] K. Fukunaga and R. R. Hayes. The reduced Parzen classifier. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11:423–425, 1989.

[16] B. W. Silverman. Kernel density estimation using the fast fourier transform.Statistical Algorithm, AS176, Applied Statistic, 31:93–97, 1982.

[17] B.W. Jeon and D.A. Landgrebe. Fast parzen density-estimation usingclustering-based branch-and-bound. IEEE Transactions on Pattern Analysis andMachine Intelligence, 16(9):950–954, September 1994.

[18] M. Girolami and C. He. Probability density estimation from optimally con-densed data samples. IEEE Transactions on Pattern Analysis and Machine Intel-ligence, 25(10):1253–1264, October 2003.

[19] T. Moon. The expectation-maximization algorithm. IEEE Signal ProcessingMagazine, 11:47–60, 1996.

[20] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood fromincomplete data via the em algorithm. Journal of the Royal Statistical Society,39B:1–38, 1977.

[21] A.A. Farag, A. El-Baz, and G. L. Gimel'farb. Precise segmentation of multimodal images. IEEE Transactions on Image Processing, 15(4):952–968, 2006.

[22] M. Haindl. Texture synthesis. CWI Quarterly, 4:305–331, 1991.

[23] A. Pentland. Fractal-based description of natural scenes. IEEE Transactions onPattern Analysis and Machine Intelligence, 6:661–674, 1984.

[24] J. Garding. Properties of fractal intensity surfaces. Pattern Recognition Letters,8:319–324, 1988.

[25] J. M. Coggins and A. K. Jain. A spatial filtering approach to texture analysis.Pattern Recognition Letters, 3:195–203, 1985.

[26] G. Smith. Image texture analysis using zero crossings information. PhD thesis,University of Queensland, Australia, 1998.

[27] S. Geman and D. Geman. Stochastic relaxation, gibbs distributions, and thebayesian restoration of images. IEEE Transactions on Pattern Analysis and Ma-chine Intelligence, 6:721–741, 1984.

[28] J. E. Besag. Spatial interaction and the statistical analysis of lattice systems.Journal of the Royal Statistical Society, Series B, 36:192–236, 1974.

[29] G. L. Gimel’farb. Texture modeling with multiple pairwise pixel interactions.IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(11):1110–1114, 1996.

[30] S. C. Zhu, Y. N. Wu, and D. Mumford. Filters, random fields and maximumentropy (FRAME): To a unified theory for texture modeling. International Jour-nal of Computer Vision, 27(2):107–126, 1998.


[31] H. Derin and H. Elliott. Modeling and segmentation of noisy and texturedimages using gibbs random fields. IEEE Transactions on Pattern Analysis andMachine Intelligence, 9(1):39–55, 1987.

[32] Sateesha G. Nadabar and Anil K. Jain. Parameter estimation in markov ran-dom field contextual models using geometric models of objects. IEEE Trans-action on Pattern Analysis and Machine Intelligence, 18(3):326–329, 1996.

[33] Daniel Cremers and Leo Grady. Statistical priors for efficient combinatorialoptimization via graph cuts. In Proceedings of ECCV, pages 263–274, 2006.

[34] Xiangyang Lan, Stefan Roth, Daniel P. Huttenlocher, and Michael J. Black.Efficient belief propagation with learned higher-order markov random fields.In Proceedings of ECCV, pages 269–282, 2006.

[35] M. Leventon, W. L. Grimson, and O. Faugeras. Statistical shape influence ingeodesic active contours. In Proceedings of CVPR, pages 1316–1324, 2000.

[36] Y. Chen, S. Thiruvenkadam, H. Tagare, F. Huang, and D. Wilson. On theincorporation of shape priors into geometric active contours. In IEEE VLSM,pages 145–152, 2001.

[37] X. Huang, D. Metaxas, and T. Chen. Statistical shape influence in geodesicactive contours. In Proceedings of CVPR, pages 496–503, 2004.

[38] A. Tsai, A. Yezzi, W. Wells, C. Tempany, D. Tucker, A. Fan, E. Grimson, and A. Willsky. A shape-based approach to curve evolution for segmentation of medical imagery. IEEE Transactions on Medical Imaging, 22(2):137–154, 2003.

[39] N. Paragios. A level set approach for shape-driven segmentation and tracking of the left ventricle. IEEE Transactions on Medical Imaging, 22:773–776, 2003.

[40] M. Hassner and J. Shlansky. The use of markov random fields as models oftextures. Computer Graphics and Image Processing, 12:357–370, 1980.

[41] R. W. Picard. Random field texture coding. RR 185, M. I. T., Cambridge, MA,1992.

[42] G. R. Cross and A. K. Jain. Markov random field texture models. IEEE Trans-actions on Pattern Analysis and Machine Intelligence, 5:25–39, 1983.

[43] I. M. Elfadel and R. W. Picard. Gibbs random fields, cooccurrences, and tex-ture modeling. IEEE Transactions on Pattern Analysis and Machine Intelligence,16(1):24–37, January 1994.

[44] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(11):1222–1239, 2001.

[45] V. Kolmogorov and R. Zabih. What energy functions can be minimized viagraph cuts? IEEE Transactions on Pattern Analysis and Machine Intelligence,26(2):147–159, 2004.


[46] Vladimir Kolmogorov. Convergent tree-reweighted message passing for en-ergy minimization. IEEE Transactions on Pattern Analysis and Machine Intelli-gence., 28(10):1568–1583, 2006.

[47] Jonathan S. Yedidia, William T. Freeman, and Yair Weiss. Generalized beliefpropagation. In NIPS, pages 689–695, 2000.

[48] Vladimir Kolmogorov and Carsten Rother. Minimizing nonsubmodular func-tions with graph cuts-a review. IEEE Transactions on Pattern Analysis and Ma-chine Intelligence, 29(7):1274–1279, 2007.

[49] Richard Szeliski, Ramin Zabih, Daniel Scharstein, Olga Veksler, Vladimir Kol-mogorov, Aseem Agarwala, Marshall F. Tappen, and Carsten Rother. A com-parative study of energy minimization methods for markov random fields. InProceedings of ECCV, pages 16–29, 2006.

[50] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller. Equation of state calculations by fast computing machines. Journal of Chemical Physics, 21:1087–1091, 1953.

[51] Vladimir Kolmogorov. Graph Based Algorithms for Scene Reconstruction fromTwo or More Views. PhD thesis, Cornell University, Ithaca, NY, 2004.

[52] L. Ford and D. Fulkerson. Flows in Networks. Princeton University Press, 1962.

[53] Y. Boykov and V. Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Transactions on Pat-tern Analysis and Machine Intelligence, 26(9):1124–1137, 2004.

[54] Endre Boros and Peter L. Hammer. Pseudo-boolean optimization. DiscreteApplied Mathematical, 123(1-3):155–225, 2002.

[55] Y. Y. Boykov and M. P. Jolly. Interactive graph cuts for optimal boundary & re-gion segmentation of objects in N-D images. In Proceedings of ICCV, volume 1,pages 105–112, 2001.

[56] Olivier Juan and Yuri Boykov. Active graph cuts. In Proceedings of CVPR,pages 1023–1029, 2006.

[57] R. W. Picard. Gibbs random fields: Temperature and parameter analysis. In Proceedings of ICASSP, volume III, pages 45–48, San Francisco, March 1992.

[58] D. M. Greig, B. T. Porteous, and A. H. Seheult. Exact maximum a posterioriestimation for binary images. Journal of the Royal Statistical Society, Series B,51(2):271–279, 1989.

[59] A. Blake, C. Rother, M. Brown, P. Perez, and P. H. S. Torr. Interactive imagesegmentation using an adaptive GMMRF model. In Proce. of ECCV, volume 1,pages 428–441, 2004.


[60] Vladimir Kolmogorov and Yuri Boykov. What metrics can be approximatedby geo-cuts, or global optimization of length/area and flux. In Proceedings ofICCV, volume 1, pages 564–571, 2005.

[61] C. Rother, V. Kolmogorov, and A. Blake. GrabCut: Interactive foreground ex-traction using iterated graph cuts. ACM Transactions on Graphics (SIGGRAPH),2004.

[62] M. P. Kumar, P. H. S. Torr, and A. Zisserman. OBJ CUT. In Proceedings ofCVPR, pages 18–25, 2005.

[63] Herve Lombaert, Yiyong Sun, Leo Grady, and Chenyang Xu. A multilevelbanded graph cuts method for fast image segmentation. In Proceedings ofICCV, volume I, pages 259–265, 2005.

[64] J. Keuchel. Multiclass image labeling with semidefinite programming. 2:454–467, 2006.

[65] H. Bozdogan. Model selection and akaike’s information criterion (AIC): Thegeneral theory and its analytical extensions. Psychometrika, 52(3):345–370,1987.

[66] D. Comaniciu and P. Meer. Mean shift: A robust approach toward featurespace analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence,24(5):603–619, 2002.

[67] Asem M. Ali, Ayman S. El-Baz, and Aly A. Farag. A novel framework foraccurate lung segmentation using graph cuts. In Proce. of IEEE ISBI, pages908–911, 2007.

[68] S. Hu, E. A. Hoffman, and J. M. Reinhardt. Automatic lung segmentationfor accurate quantitation of volumetric X-ray CT images. IEEE Transaction onMedical Imaging, 20(6):490–498, June 2001.

[69] Asem M. Ali and Aly A. Farag. Graph cut based segmentation of multimodalimages. In Proceedings, 7th IEEE International Symposium on Signal Processingand Information Technology, (ISSPIT’07), pages 1047–1052.

[70] Asem M. Ali and Aly A. Farag. A novel framework for N-D multimodalimage segmentation using graph cuts. IEEE International Conference on ImageProcessing (ICIP08), To appear.

[71] Asem M. Ali and Aly A. Farag. Density estimation using a new AIC-typecriterion and the EM algorithm for a linear combination of gaussians. IEEEInternational Conference on Image Processing (ICIP08), To appear.

[72] Rupert Paget and I. Dennis Longstaff. Texture synthesis via a noncausal non-parametric multiscale markov random field. IEEE Transaction on Image Pro-cessing, 7(6):925–931, 1998.

[73] Stefan Roth and Michael J. Black. Fields of experts: A framework for learningimage priors. In Proceedings of CVPR, pages 860–867, 2005.


[74] P. Kohli, M. P. Kumar, and P. H. S. Torr. P3 & beyond: Solving energies withhigher order cliques. In Proceedings of CVPR, 2007.

[75] Carsten Rother, Sanjiv Kumar, Vladimir Kolmogorov, and Andrew Blake.Digital tapestry. In Proceedings of CVPR, pages 589–596, 2005.

[76] I. G. Rosenberg. Reduction of bivalent maximization to the quadratic case.Cahiers du Centre d’Etudes de Recherche Operationnelle, 17:71–74, 1975.

[77] Daniel Freedman and Petros Drineas. Energy minimization via graph cuts:Settling what is possible. In Proceedings of CVPR, pages 939–946, 2005.

[78] Shifeng Chen, Liangliang Cao, Jianzhuang Liu, and Xiaoou Tang. Iterative MAP and ML estimations for image segmentation. In Proceedings of CVPR, 2007.

[79] Chad Carson, Serge Belongie, Hayit Greenspan, and Jitendra Malik. Blobworld: Image segmentation using expectation-maximization and its application to image querying. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(8):1026–1038, 2002.

[80] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings of ICCV, volume 2, pages 416–423, July 2001.

[81] Asem M. Ali, G. L. Gimel'farb, and Aly A. Farag. Optimizing binary MRFs with higher order cliques. Submitted to European Conference on Computer Vision (ECCV08).

[82] D. Terzopoulos. Regularization of inverse visual problems involving discontinuities. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(2):413–424, 1986.

[83] R. Goldenberg, R. Kimmel, E. Rivlin, and M. Rudzsky. Cortex segmentation: A fast variational geometric approach. IEEE Transactions on Medical Imaging, 21(2):1544–1551, 2002.

[84] P. Viola. Alignment by Maximization of Mutual Information. PhD thesis, Massachusetts Institute of Technology, Cambridge, MA, 1995.

[85] Asem M. Ali, Aly A. Farag, and Ayman S. El-Baz. Graph cuts framework for kidney segmentation with prior shape constraints. In Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI'07), pages 384–392, Brisbane, Australia, 2007.

[86] S. Birchfield and C. Tomasi. A pixel dissimilarity measure that is insensitive to image sampling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(4):401–406, 1998.


APPENDIX I

NOMENCLATURE

The following convention is used throughout this dissertation.

P(·) probability mass function

P set of image pixels

n number of image pixels

G set of gray levels

L set of labels

Q number of gray levels in the set G

K number of labels in the set L

l label in the set L

I observed image, mapping I : P → G

f , f∗, f different labelings, mapping f : P → L

fp label of the pixel p ∈ P

F set of all labelings

Fp random variable defined on a location p

F “random field”: set of random variables defined on P

N neighborhood system

Np neighborhood of a pixel p

U(f) Gibbs energy

Z normalizing constant in Gibbs distribution

T control parameter called temperature in Gibbs distribution

C set of all cliques

132

Page 154:  · ACKNOWLEDGMENTS First of all, my deepest thanks are due to Almighty God, the merciful, the compassionate for uncountable gifts given to me. I would like to express my deepest

Vc, Vp, Vpq, V () potential functions

γ0 controls the influence of the external field

γ influences the interaction between neighboring pairs or triples

µ, σ Gaussian distribution parameters θ = (µ, σ)

π prior weight or responsibility

m message in BP and TRW-s approaches

Θ vectors of model parameters

T family of the neighboring pixel pairs or triples supporting the Gibbs potentials

Niter, A,B constants

Cp,l, wp,r,l parameters of positive and negative components of LCG

G weighted undirected graph

s, t terminals of the graph

V graph vertices

E graph edges

Ec set of edges that constitute a cut

|Ec| cut cost

U ,S sets of pixels

S set of graph nodes belonging to the source

T set of graph nodes belonging to the sink

E quadratic energy function

Ek high order energy function

D(.) data penalty term

L(·) likelihood function

F relative frequency of labels in pixel pairs

ρ ratio |T |/|P|

∆, δ(·) indicators and indicator function

D AIC criterion

ϕ(.) Gaussian distribution


hs, hc, M EDISON parameters: spatial and color bandwidths and minimum region size

eem, nsg, of, th, et NCUTS parameters: elongation parameter for the edge map, number of segments, offset of the symmetric similarity matrix, symmetric similarity matrix threshold, and error tolerance in the eigensolver, respectively

τ execution time

ε relative error

ME set of all global minima of energy E

A sum of all negative coefficients in Ek

ap, apq real coefficients

B, u real numbers

K,R,X object, background, and variability regions

ϑ(.) Poisson distribution

ξ Poisson density rate

Ps shape image

CKX object/variability contour

dp normal distance from pixel p to CKX

Cdp iso-contour at dp

hdp histogram value at dp

Mt number of training images

I, I stereo pair images

λ camera focal length

∂x, ∂y displacements in image spatial domain

Z scene point’s depth


CURRICULUM VITAE

A. CONTACT INFORMATION

Asem Mohamed Ahmed Ali
Date of Birth: May 1976

2202 James Guthrie Ct., Apt. 7
Louisville, Kentucky 40217
[email protected]

B. RESEARCH INTERESTS

Image Processing, Statistical Modeling, Pattern Recognition, Computer Vision, Robotics, Learning Systems, Optimization

C. EDUCATION

University of Louisville, Louisville, Kentucky USA

Ph.D., Electrical & Computer Engineering, May 2008
• Dissertation Topic: “Image Labeling by Energy Minimization with Appearance and Shape Priors”
• GPA: 3.92
• Advisor: Aly A. Farag

Assiut University, Assiut, Egypt

M.Sc., Electrical & Computer Engineering, August 2002
• Dissertation Topic: “Intelligent Tracking Control of a D-C Motor”

Assiut University, Assiut, Egypt

B.Sc., Electrical & Computer Engineering, June 1999
• Distinction with Honor, Class Valedictorian

D. HONORS AND AWARDS


• IEEE Student Member since 2002.

E. PUBLICATIONS

1. Asem M. Ali and Aly A. Farag. ”A Novel Framework for N-D Multimodal Image Segmentation Using Graph Cuts,” Proceedings, IEEE International Conference on Image Processing (ICIP08), San Diego, California, U.S.A., October 2008, to appear.

2. Asem M. Ali and Aly A. Farag. ”Density Estimation Using a New AIC-Type Criterion and the EM Algorithm for a Linear Combination of Gaussians,” Proceedings, IEEE International Conference on Image Processing (ICIP08), San Diego, California, U.S.A., October 2008, to appear.

3. Asem M. Ali, Ayman S. El-Baz, and Aly A. Farag. ”Graph Cuts Framework for Kidney Segmentation with Prior Shape Constraints,” Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI’07), Brisbane, Australia, October 2007, pp. 384–392.

4. Asem M. Ali, Ayman S. El-Baz, and Aly A. Farag. ”A Novel Framework for Accurate Lung Segmentation Using Graph Cuts,” Proc. of International Symposium on Biomedical Imaging (ISBI’07), Arlington, Virginia, April 2007, pp. 908–911.

5. Asem M. Ali and Aly A. Farag. ”Graph Cut Based Segmentation of Multimodal Images,” Proceedings, 7th IEEE International Symposium on Signal Processing and Information Technology (ISSPIT’07), Cairo, Egypt, December 2007, pp. 1047–1052.

6. Ayman El-Baz, Aly A. Farag, Asem M. Ali, Georgy L. Gimel’farb, and Manuel Casanova. ”A Framework for Unsupervised Segmentation of Multi-modal Medical Images,” Proc. of the Second International Workshop of Computer Vision Approaches to Medical Image Analysis (CVAMIA’06), Graz, Austria, May 2006, pp. 120–131.

7. Ayman El-Baz, Asem M. Ali, A. A. Farag, and G. L. Gimel’farb. ”A Novel Approach for Image Alignment Using a Markov-Gibbs Appearance Model,” Proc. of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI’06), Copenhagen, Denmark, October 2006, pp. 734–741.

F. REVIEWER

• MICCAI

G. SOFTWARE PROGRAMMING

11 years of software development experience.

• C (11 years)

• C++ (6 years)

• CORBA Platform (4 years)


• C# (1 year)

• Qt (3 years)

• Matlab (7 years)

• Fortran (4 years)

H. BIOGRAPHY

Asem M. Ali worked at the Computer Vision and Image Processing (CVIP) Laboratory as a research assistant for four years (2004–2008). During this period, he was in charge of establishing the main infrastructure of the robotics research in CVIP. He led the CVIP robotics research team for autonomous navigation under a grant sponsored by the US DoD. Through the work on this project, he developed an optical flow-based navigation algorithm and a Kalman filter-based localization algorithm. This work was approved for funding by NASA through two consecutive grants. He has also been assigned various projects in the lab. He was one of the members who developed the medical image processing tools of the CVIP Lab’s CAD system, in which he applied his new segmentation algorithms. Currently, he is in charge of building a human face recognition system, in which he uses stereo to reconstruct the 3-D shape of human faces.

I. LANGUAGES

• Arabic (Mother Tongue)

• English Fluent (Read/Write)

• French Fair (Read/Write)

J. MEMBERSHIP

• President of the Egyptian Student Association in North America (ESANA), Louisville’s Chapter (ESA), May 2006–2007.
