
Markovian Random Fields and Comparison Between Different Convex Criterion Optimization in Image Restoration

José Ismael de la Rosa*, Member, IEEE, Jesús Villa, and Maria A. Araiza
Signal Processing Laboratory - Engineering Faculty,
Universidad Autónoma de Zacatecas, Av. López Velarde 801, Col. Centro, 98068 Zacatecas, Zac., MEXICO
* Corresponding author, e-mail: [email protected]

    Abstract

The present work illustrates some recent alternative methods to deal with digital image reconstruction. This collection of methods is inspired by the use of a class of Markov chains best known as Markov Random Fields (MRF). All of these new methodologies are also based on prior knowledge of some information which permits modeling the image acquisition process more efficiently. The methods based on MRFs are proposed and analyzed in a Bayesian framework, and their principal objective is to eliminate those effects caused by excessive smoothness in the reconstruction process of images which are rich in contours or edges. In order to preserve edges, the use of certain convexity criteria is proposed, which leads to adequate weighting of (half-quadratic) cost functions in cases where discontinuities are marked and, even better, in cases where such discontinuities are very smooth. The final aim is to apply these methods to problems in optical instrumentation.

    1. Introduction

The use of powerful methods proposed in the seventies (iterated conditional modes) [2, 3, 9] is nowadays essential, at least in the cases of image segmentation and image restoration [1]. The basic idea of these methods is to construct a maximum a posteriori (MAP) estimate of the modes, or so-called estimator of the true image, by using Markov Random Fields (MRF) in a Bayesian framework. The evolution of this basic idea has led to the development of new algorithms which consider new models of the contextual information carried by the MRFs, the final aim being the restoration of real images (practical data). The idea is based on a robust scheme which can be adapted to reject outliers, tackling situations where noise is present in different forms during the acquisition process.

Image restoration, or the recovery of an image to its original condition given a degraded image, requires reverting the effects caused by a distortion functional which must be estimated. In fact, the degradation characteristic is crucial information, and it must be assumed known or estimated during the inversion procedure. Typically this is the point spread function (PSF) of the distortion, which can be linked with the probability distribution of the noise contamination; in the case of MAP filters, additive Gaussian noise is usually considered. There is another source of information which plays a key role in the image processing context: the contextual or spatial information, which represents the likelihood or correlation between the intensity values of a well-specified neighborhood of pixels. Modelling with MRFs takes such spatial interaction into account; it was introduced and formalized in [2], where the power of these statistical tools is shown [3, 4, 5, 9, 20]. Combining both kinds of information in a statistical framework, the restoration is led by an estimation procedure given the maximum a posteriori of the true image when the distortion functionals are known. The implemented algorithms were developed considering a slightly degraded signal, where the resulting non-linear recursive filters show excellent characteristics: they preserve all the details contained in the image and, on the other hand, they smooth the noise components.

Section 2 describes the general definition of an MRF and the proposal of the MAP estimator. The potential functions must be obtained or proposed to adequately conduct the inversion process; such functions are described in Section 3, where convexity is the key to formulating an adequate criterion to be minimized. Sections 4 and 5 briefly discuss the MAP estimators resulting from different MRF structures and some illustrative results. Finally, Section 6 gives some partial conclusions and comments.


2. Markov random fields and MAP estimation

The problem of image estimation (e.g., restoration) in a Bayesian framework deals with the solution of an inverse problem, where the estimation process is carried out in a wholly stochastic environment. The variables used along the text are: x, which represents a Markov random field (the image to be estimated); y, the observed noisy and distorted image; \hat{x}, the estimator of x with respect to the data y; and p(\cdot), a probability density function. The most popular estimators used nowadays are:

- Maximum Likelihood (ML) estimator: this estimator produces excessive noise, and an ill-posed problem appears when the quantity of data is poor. Of course, it is important to exploit all known information, or so-called prior information, about the process under study, which gives a better estimator, called the

- Maximum A Posteriori (MAP) estimator: by Bayes' rule p(x|y) \propto p(y|x)\, g(x), so maximizing the posterior is equivalent to maximizing the sum of the log-likelihood and the log-prior,

\hat{x}_{MAP} = \arg\max_{x \in X} p(x|y) = \arg\max_{x \in X} \left( \log p(y|x) + \log g(x) \right),   (1)

in this case, the estimator is regularized by using a Markov random field function g(x), which models all the prior information as a whole probability distribution; X is the set of images x over which p(x|y) is maximized, and p(y|x) is the likelihood function of y given x.

The Markov random fields (MRF) can be represented in a generic way by using the following equation:

g(x) = \frac{1}{Z} \exp\left( -\sum_{c \in C} V_c(x) \right),   (2)

where Z is a normalization constant, C is the set of cliques c, or local neighborhoods of pixels, and V_c(x) is a weighting function given over the local neighborhood. Generally, the cliques correspond to the set of neighborhoods of pixels. A theorem introduced by Hammersley and Clifford [2, 9] proves the equivalence between the Gibbs distribution and the MRFs. Markov random fields have the capability to represent various image sources. The main disadvantage of the use of MRFs is that the estimation procedure is led by local minimization schemes (which above all increases the computation time); global minimization schemes have recently been introduced by modifying the local structure of the MRFs. There is a variety of MRF models, which depend on the cost functions, also known as potential functions, that can be used. Each potential function characterizes the interactions between pixels in the same local group. As an example, the following family represents convex functions:

\sum_{\{i,j\} \in C} \varphi(\lambda [x_i - x_j]),   (3)

where \lambda is a constant parameter to be selected, and the x belong to the local group of pixels.
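As a small illustration of equations (2)-(3), the following Python sketch (not code from the paper; the potential phi, the weight lam, and the uniform first-order clique system are illustrative assumptions) evaluates the clique-sum energy of an image for a generic convex potential applied to horizontal and vertical pixel differences.

import numpy as np

def clique_energy(x, phi, lam=1.0):
    # Sum of V_c(x) over first-order cliques (horizontal and vertical pixel pairs),
    # with V_c(x) = phi(lam * (x_i - x_j)) as in the convex family of equation (3).
    dh = x[:, 1:] - x[:, :-1]          # horizontal differences
    dv = x[1:, :] - x[:-1, :]          # vertical differences
    return phi(lam * dh).sum() + phi(lam * dv).sum()

# Example: quadratic potential (the Gaussian MRF case); g(x) = exp(-energy) / Z.
x = np.random.rand(8, 8)
print(clique_energy(x, phi=lambda d: d**2, lam=1.0))

With phi(d) = |d|**p the same routine gives the generalized Gaussian energy discussed in Section 3.1.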

3. Generalized GMRF and half-quadratic functions

In recent works [5, 14] some new potential functions were introduced; such proposed functions are semi-quadratic or half-quadratic functionals, and they introduce a certain convexity into the penalization term [10, 11] (e.g., extension to penalization), which permits building efficient and robust estimators in the sense of preserving the data linked to the original or source image; moreover, the processing time decreases with respect to other proposed schemes. On the other hand, in previous work by Alison Gibbs [13] it has been proposed to obtain the posterior distribution of the images; in this case it is necessary to use sophisticated stochastic simulation techniques based on Markov Chain Monte Carlo (MCMC) [15, 19]. If it is possible to obtain the posterior distribution of an image, then it is possible to sample from such a posterior distribution and obtain the MAP estimator, or other estimators such as the median estimator, which may be equal to the MAP estimator under certain circumstances imposed by the noise structure; both the MAP and the median estimators search for the principal mode of the posterior distribution.

In the present paper some potential functions are compared: the generalized Gaussian MRF introduced by Bouman in [4] is compared with the semi-Huber [8], Welch, and Tukey potential functions; these last two functions were used in recent works of Rivera [16, 18], showing excellent performance.

3.1. Generalized GMRF

If one considers generalizing the Gaussian MRF (when p = q = 2 in equation (16), one has a GMRF) as proposed by Bouman, one can redefine

\Delta = \lambda [x_i - x_j],   (4)

where the generalized potential function can be changed to

\varphi(\Delta) = |\Delta|^p, \quad 1 < p < 2,   (5)


obtaining the GGMRF

\log g(x) = -\lambda^p \left\{ \sum_{s \in S} a_s |x_s|^p + \sum_{\{s,r\} \in C} b_{sr} |x_s - x_r|^p \right\} + \text{cte},   (6)

where cte is a constant, and the weighting parameters satisfy a_s > 0 and b_{sr} > 0. In practice it is recommended to take a_s = 0; thus, the unicity of \hat{x}_{MAP} can be assured by

\log g(x) = -\lambda^p \sum_{\{s,r\} \in C} b_{sr} |x_s - x_r|^p + \text{cte},   (7)

-\log p(y|x) is strictly convex, and so \hat{x}_{MAP} is continuous in y and with respect to the power p. The choice of p is crucial, since the selection of p constrains the convergence speed of the global estimator and the quality of the restored image.
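A minimal sketch of the GGMRF prior energy of equation (7) with a_s = 0, under the reconstruction above; the uniform weight b and the value of lam are illustrative assumptions, not values taken from the paper.

import numpy as np

def ggmrf_energy(x, p=1.2, lam=1.0, b=1.0):
    # -log g(x) up to the constant: lam**p * sum_{s,r} b_sr |x_s - x_r|**p,
    # summed over first-order cliques with a uniform weight b_sr = b.
    assert 1.0 < p <= 2.0            # range where convexity and edge preservation hold
    dh = np.abs(x[:, 1:] - x[:, :-1])
    dv = np.abs(x[1:, :] - x[:-1, :])
    return (lam ** p) * b * (np.sum(dh ** p) + np.sum(dv ** p))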

    3.2. Semi-Huber potential function

To fully assure robustness, diminishing at the same time the convergence speed, the Huber-like norm or semi-Huber potential function can be used as described in [8]; this proposal is adjusted in this work to two dimensions by the following equation (SH):

\log g(x) = -\sum_{\{s,r\} \in C} b_{sr}\, \Phi_1(x) + \text{cte},   (8)

where

\Phi_1(x) = \frac{\sigma_0^2}{2} \left( \sqrt{1 + \frac{4\,\varphi_1(x)}{\sigma_0^2}} - 1 \right),   (9)

and \sigma_0 > 0 is a constant value. On the other hand, \varphi_1(x) can be the quadratic function described by \varphi_1(x) = e^2, where e = (x_s - x_r). This potential function can also be considered a half-quadratic (HQ) functional.
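The semi-Huber potential of equations (8)-(9) could be coded as below; this is a sketch assuming the reconstruction given above (quadratic inner term phi_1 = e**2 and the square-root form of equation (9)), with sigma0 as the only tuning constant.

import numpy as np

def semi_huber(e, sigma0=15.0):
    # Phi_1 = (sigma0**2 / 2) * (sqrt(1 + 4*phi_1/sigma0**2) - 1), with phi_1 = e**2.
    # Behaves like e**2 for small |e| and roughly like sigma0*|e| for large |e|,
    # which smooths small fluctuations while preserving large (edge) differences.
    phi1 = np.asarray(e, dtype=float) ** 2
    return 0.5 * sigma0 ** 2 * (np.sqrt(1.0 + 4.0 * phi1 / sigma0 ** 2) - 1.0)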

    3.3. Welch potential function

This is known as a hard-redescender potential function, and it was used by Rivera in [16]:

\log g(x) = -\eta \left[ \sum_{\{s,r\} \in C} b_{sr}\, \varphi_1(x) \right] - (1 - \eta) \left[ \sum_{\{s,r\} \in C} b_{sr}\, \varphi_2(x) \right] + \text{cte},   (10)

where k is a scale parameter and

\varphi_2(x) = 1 - \frac{1}{2k} \exp\left(-k\, \varphi_1(x)\right).   (11)

This function is also half-quadratic, like the Tukey function.

    3.4. Tukey potential function

This is another hard-redescender potential function, also used by Rivera in [16]:

\log g(x) = -\eta \left[ \sum_{\{s,r\} \in C} b_{sr}\, \varphi_1(x) \right] - (1 - \eta) \left[ \sum_{\{s,r\} \in C} b_{sr}\, \varphi_3(x) \right] + \text{cte},   (12)

where

\varphi_3(x) = \begin{cases} 1 - \left(1 - (2e/k)^2\right)^3, & \text{for } |e/k| < 1/2, \\ 1, & \text{otherwise}, \end{cases}   (13)

and where k is also a scale parameter.
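For comparison, here are sketches of the Welch and Tukey potentials as reconstructed in equations (11) and (13); e denotes the pixel difference, and the default k is only a placeholder (the paper tunes it per experiment, e.g., k = 1000 in Section 5).

import numpy as np

def welch(e, k=1000.0):
    # phi_2 = 1 - (1/(2k)) * exp(-k * phi_1), with phi_1 = e**2 (hard redescender).
    return 1.0 - np.exp(-k * np.asarray(e, dtype=float) ** 2) / (2.0 * k)

def tukey(e, k=1000.0):
    # phi_3 = 1 - (1 - (2e/k)**2)**3 for |e/k| < 1/2, and 1 otherwise (equation (13)).
    e = np.asarray(e, dtype=float)
    return np.where(np.abs(e / k) < 0.5,
                    1.0 - (1.0 - (2.0 * e / k) ** 2) ** 3,
                    1.0)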

    4. MAP estimators and practical convergence

In this section some estimators are deduced. The simple problem of filtering noise to restore an observed signal Y leads to establishing the estimators; the observation equation can be

Y = X + \sigma_n Z, \quad \text{where } Z \text{ is Gaussian noise}.   (14)

The MAP estimator for this particular problem when using the GGMRF is given by

\hat{x}_{MAP1} = \arg\min_{x \in X} \left\{ \sum_{s \in S} |y_s - x_s|^q + \sigma^q \lambda^p \sum_{\{s,r\} \in C} b_{sr} |x_s - x_r|^p \right\},   (15)

where the power q is given according to the hypothesis made about the noise contamination (q = 2 for Gaussian noise), and b_{sr} is chosen according to the number of neighbors. Thus, the minimization problem leads to considering various methods:

- global iterative techniques, such as gradient descent, conjugate gradient, Gauss-Seidel, over-relaxed methods, etc.;

- or local minimization techniques: minimization at each pixel x_s (which generally needs more time, but is more precise).

Local techniques were used in this work; the expectation-maximization (EM) algorithm was not implemented, since all parameters included in the potential functions were chosen heuristically or according to values proposed in some references. For example, the local MAP estimator for the GGMRF is given by

\hat{x}_s = \arg\min_{x_s \in X} \left\{ |y_s - x_s|^q + \sigma^q \lambda^p \sum_{r \in \partial s} b_{rs} |x_s - x_r|^p \right\},   (16)

where the performance of this estimator varies according to the values of the parameters p and q. For example, if p = q = 2, the obtained estimator is similar to the least-squares one, since the likelihood function is quadratic, with an additional penalization term which degrades the estimated image:

\hat{x}_s = \frac{y_s + (\sigma \lambda)^2 \sum_{r \in \partial s} b_{rs}\, x_r}{1 + (\sigma \lambda)^2 \sum_{r \in \partial s} b_{rs}}.   (17)

On the other hand, in the case of p = q = 1 the criterion is absolute, and the estimator converges to the median estimator, which in practice is difficult to implement:

\hat{x}_s = \mathrm{median}(y_s, x_{r_1}, \ldots, x_{r_I}),   (18)

this criterion is not differentiable, and this fact causes instability in the minimization procedure. For intermediate values of p and q the estimators become sub-optimal, and iterative methods can be used to minimize the obtained criteria; such iterative methods are the sub-gradient, or the Levenberg-Marquardt method of MATLAB 7, which was used in this work. The local or global condition of the estimator depends on:

1) if one has values of 1 < p < 2: the estimator satisfies \hat{x}_{\min.loc} \equiv \hat{x}_{\min.glob}, which means that a local minimum coincides with a global minimum;

2) moreover, if p \neq q, the criterion is not homogeneous, but \hat{x}(\alpha y, \lambda) = \alpha\, \hat{x}(y, \alpha^{1-q/p} \lambda), assuring the convergence and existence of the estimator, which is continuous with respect to p.
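To make the local scheme concrete, the following sketch (an illustration, not the authors' implementation) iterates the closed-form update of equation (17) for the p = q = 2 case over a 4-neighbour clique system with uniform weights b_rs = 0.25; the update is applied simultaneously to all pixels (Jacobi-style) rather than pixel by pixel, and the parameter values are arbitrary.

import numpy as np

def map_restore_p2q2(y, lam=1.0, sigma=1.0, n_sweeps=30):
    # Repeated application of equation (17):
    # x_s = (y_s + (sigma*lam)**2 * sum_r b_rs x_r) / (1 + (sigma*lam)**2 * sum_r b_rs)
    x = y.astype(float).copy()
    k = (sigma * lam) ** 2
    b = 0.25                                   # uniform weight for each of the 4 neighbours
    for _ in range(n_sweeps):
        xp = np.pad(x, 1, mode="edge")         # replicate borders to handle image edges
        nb_sum = xp[:-2, 1:-1] + xp[2:, 1:-1] + xp[1:-1, :-2] + xp[1:-1, 2:]
        x = (y + k * b * nb_sum) / (1.0 + k * b * 4.0)
    return x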

A second MAP estimator can be obtained when using the semi-Huber potential; the global estimator can be described by the equation

\hat{x}_{MAP2} = \arg\min_{x \in X} \left\{ \sum_{s \in S} |y_s - x_s|^2 + \sum_{\{s,r\} \in C} b_{sr}\, \Phi_1(x) \right\}.   (19)

As in the previous case, it has been proposed to implement the local estimator, which leads to an expression similar to equation (16) for the first local MAP estimator.
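A hypothetical sketch of one pixel-by-pixel sweep of such a local estimator, pairing the data term |y_s - x_s|^2 with the semi-Huber potential of Section 3.2 and a plain golden-section search for the 1-D minimization; the search interval span, the weight b, and sigma0 are illustrative choices, and x is expected to be a float array (e.g., a copy of y).

import numpy as np

def semi_huber(e, sigma0=15.0):
    # Semi-Huber potential as reconstructed in equation (9), with phi_1 = e**2.
    return 0.5 * sigma0 ** 2 * (np.sqrt(1.0 + 4.0 * e ** 2 / sigma0 ** 2) - 1.0)

def golden_min(f, a, b, tol=1e-3):
    # Golden-section search for the minimum of a unimodal 1-D function on [a, b].
    g = (np.sqrt(5.0) - 1.0) / 2.0
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

def local_map2_sweep(y, x, b=1.0, sigma0=15.0, span=50.0):
    # One sweep of local minimization: at each site s, minimize
    # |y_s - x_s|^2 + sum_{r in neighbours} b * Phi_1(x_s - x_r)   (cf. equations (16), (19)).
    h, w = y.shape
    for i in range(h):
        for j in range(w):
            nbrs = [x[m, n] for m, n in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= m < h and 0 <= n < w]
            crit = lambda t: (y[i, j] - t) ** 2 + b * sum(semi_huber(t - v, sigma0) for v in nbrs)
            x[i, j] = golden_min(crit, y[i, j] - span, y[i, j] + span)
    return x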

The third MAP estimator is obtained when using the Welch potential function, that is,

\hat{x}_{MAP3} = \arg\min_{x \in X} \left\{ \sum_{s \in S} |y_s - x_s|^2 + \eta \sum_{\{s,r\} \in C} b_{sr}\, \varphi_1(x) + (1 - \eta) \sum_{\{s,r\} \in C} b_{sr}\, \varphi_2(x) \right\}.   (20)

And finally, the fourth MAP estimator is deduced from the Tukey potential function, giving the following global estimator:

\hat{x}_{MAP4} = \arg\min_{x \in X} \left\{ \sum_{s \in S} |y_s - x_s|^2 + \eta \sum_{\{s,r\} \in C} b_{sr}\, \varphi_1(x) + (1 - \eta) \sum_{\{s,r\} \in C} b_{sr}\, \varphi_3(x) \right\}.   (21)

The use of a prior distribution function based on the logarithm, with some degree of convexity and quasi-homogeneity, permits considering a variety of possible choices of potential functions. Perhaps the most important challenges that must be well resolved are the adequate selection of the hyper-parameters of the potential functions, where different versions of the EM algorithm try to tackle this problem [5, 7], and the minimization procedure, which in some sense regulates the convergence speed, as proposed in [11, 16].

    5. Denoising experiments

Continuing with the problem of filtering noise, some estimation results are presented when images are contaminated only by Gaussian noise and there is no other type of distortion. The first experiment was made considering the following model:

Y = X + \sigma_n Z, \quad Z \sim \mathcal{N}(0, I \sigma_n^2), \quad \sigma_n = 2,

where I is the identity matrix. The results are compared using different values of p while preserving q = 2; several images were used to probe the performance of the presented estimators. Here some results are presented based on the analysis of the standard Lena image. Different levels of noise were added to the image, Z \sim \mathcal{N}(0, I \sigma_n^2): with \sigma_n = 4 the noise does not degrade the image visually (see Figure 1), but increasing the value to \sigma_n = 8 the obtained degradation is perceptible and difficult to eliminate (see Figure 2), where the performance of the MAP1 estimator depends on the choice of p and also on the level of noise. In the case of Figure 3, some visual results when using the other three MAP estimators are given.
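A sketch of the synthetic degradation of equation (14) used in the experiments; the ramp image below is only a hypothetical stand-in for the Lena test image, which is not reproduced here.

import numpy as np

def degrade(x, sigma_n=8.0, seed=0):
    # Observation model of Section 5: y = x + sigma_n * z, with z ~ N(0, I).
    rng = np.random.default_rng(seed)
    return x + sigma_n * rng.standard_normal(x.shape)

# Two noise levels used in the paper: a barely visible one and a clearly perceptible one.
x = np.tile(np.linspace(0.0, 255.0, 128), (128, 1))   # hypothetical 128x128 test image
y_low, y_high = degrade(x, sigma_n=4.0), degrade(x, sigma_n=8.0)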


The performance of MAP2 is equal to or better than that of MAP1, but its processing time is far better: the best results for MAP1 were obtained with p = 1.1 and the time was approximately 1200 seconds, while for MAP2, using a value of σ_0 = 15, the computation time was 172 seconds (the image dimensions are 128 x 128 pixels). The other two estimators have a good performance in the sense of computation time, but the tuning of hyper-parameters is a drawback which should be addressed to obtain better performance in the sense of restoration quality. For example, in Figure 3 the MAP3 estimation results seem worse than those of MAP2; for this case the values λ = 15, k = 1000 and η = 0.045 were used in the potential function, and the obtained computation time was 155 seconds. For the case of the Tukey potential function, the same tuning problem presented for the Welch potential must be solved to obtain better performance, as depicted by Rivera in [16]; the results presented here consider for the MAP4 estimator the values λ = 35, k = 1000 and η = 0.05. This last computation time was 198 seconds.

In the case of the first estimator (MAP1), the performance obviously depends on the choice of p and q; the best performance in the sense of restoration quality is obtained when p tends to 1, but unfortunately the computation time grows exponentially. On the other hand, the use of half-quadratic potential functions permits more flexibility in the computation time, but it is still a challenge to tune the hyper-parameters correctly in order to obtain better performance in the sense of restoration quality. Perhaps the simplest potential function to tune is the semi-Huber, and our experience with this function is reflected in its extensive use for robust estimation. In the case of the Welch and Tukey functionals, the tuning problems must be resolved by correctly implementing more sophisticated algorithms based on the expectation-maximization method; this task remains as future work and is placed as one of our objectives. Some interesting applications of robust estimation are particularly focused on phase recovery from fringe patterns, as presented in recent work [21], and on phase unwrapping; in this sense, some filtering results were also obtained using the presented MAP estimators.

Figure 1. a) Lena original image (128 x 128), b) image with low-level noise, c) MAP1 estimation with p = 1.5 and q = 2, d) MAP1 estimation with p = 1.2 and q = 2, λ = 10.

Figure 2. a) Lena original image, b) MAP1 estimation with p = 1.1 and q = 2, c) MAP1 estimation with p = 1.2 and q = 2, d) MAP1 estimation with p = 1.5 and q = 2, λ = 5, and σ_n = 8.

Figure 3. a) Image with noise, σ_n = 8, b) MAP2 estimation, c) MAP3 estimation, and d) MAP4 estimation.

    6. Conclusions and comments

Some advantages of the use of the GGMRF are: the continuity of the estimator as a function of the data values is assured when 1 < p <= 2; edge preservation is also assured, above all when p tends to 1, and obviously depends on the choice within the interval 1 < p < 2; robustness is assured in the same sense. The use of semi-quadratic or half-quadratic potential functions also presents some advantages: the convexity of these functions is relaxed with respect to the GGMRF, and this results in a decrease of the computation time, which is a good advantage over the GGMRF; however, the tuning of the hyper-parameters is more complicated, since one has more degrees of freedom. In the case of the semi-Huber potential function the tuning is less complicated and, of course, the manipulation of the estimator is far simpler than for the Welch and Tukey functions; this problem can nevertheless be solved, as argued by Idier [5] and Rivera [16], by implementing more sophisticated algorithms. Moreover, the advantages presented by those estimators could overcome their disadvantages by experimenting extensively with them and establishing the aim of implementing more sophisticated algorithms. The goal is to build a series of software tools for image analysis focused, for instance, on optical instrumentation tasks such as those treated in the work of Villa [21].

    Acknowledgements

Many thanks to PROMEP of Mexico; this work was partially supported by the Mexican Program for Professors' Technical Improvement (PROMEP) under Grant UAZACPTC 24-103.5/03/1127.

    References

[1] H. C. Andrews and B. R. Hunt, Digital Image Restoration, New Jersey: Prentice-Hall, Inc., 1977.

[2] J. E. Besag, "Spatial interaction and the statistical analysis of lattice systems," J. Royal Stat. Soc. Ser. B, Vol. B-36, pp. 192-236, 1974.

[3] J. E. Besag, "On the statistical analysis of dirty pictures," J. Royal Stat. Soc. Ser. B, Vol. B-48, pp. 259-302, 1986.


[4] C. Bouman and K. Sauer, "A Generalized Gaussian Image Model for Edge-Preserving MAP Estimation," IEEE Trans. on Image Processing, Vol. 2, No. 3, pp. 296-310, July 1993.

[5] F. Champagnat and J. Idier, "A Connection Between Half-Quadratic Criteria and EM Algorithms," IEEE Signal Processing Letters, Vol. 11, No. 9, pp. 709-712, Sept. 2004.

[6] P. Ciuciu, J. Idier, and J.-F. Giovannelli, "Regularized Estimation of Mixed Spectra Using Circular Gibbs-Markov Model," IEEE Trans. on Signal Processing, Vol. 49, No. 10, pp. 2202-2213, Oct. 2001.

[7] P. Ciuciu and J. Idier, "A half-quadratic block-coordinate descent method for spectral estimation," Signal Processing, Vol. 82, pp. 941-959, 2002.

[8] J. I. De la Rosa and G. Fleury, "Bootstrap methods for a measurement estimation problem," IEEE Trans. Instrum. Meas., Vol. 55, No. 3, pp. 820-827, June 2006.

[9] S. Geman and D. Geman, "Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images," IEEE Trans. Pattern Anal. Machine Intell., Vol. PAMI-6, pp. 721-741, Nov. 1984.

[10] D. Geman and G. Reynolds, "Constrained restoration and the recovery of discontinuities," IEEE Trans. Pattern Anal. Machine Intell., Vol. 14, pp. 367-383, Mar. 1992.

[11] D. Geman and C. Yang, "Nonlinear image recovery with half-quadratic regularization," IEEE Trans. Image Processing, Vol. 4, pp. 932-946, July 1995.

[12] J.-F. Giovannelli, J. Idier, R. Boubertakh, and A. Herment, "Unsupervised Frequency Tracking Beyond the Nyquist Frequency Using Markov Chains," IEEE Trans. on Signal Processing, Vol. 50, No. 12, pp. 2905-2914, Dec. 2002.

[13] A. L. Gibbs, Convergence of Markov Chain Monte Carlo algorithms with applications to image restoration, Ph.D. Thesis, Department of Statistics, University of Toronto, URL: www.utstat.toronto.edu, 2000.

[14] J. Idier, "Convex Half-Quadratic Criteria and Interacting Auxiliary Variables for Image Restoration," IEEE Trans. on Image Processing, Vol. 10, No. 7, pp. 1001-1009, July 2001.

[15] R. M. Neal, "Probabilistic inference using Markov Chain Monte Carlo methods," Tech. Rep. CRG-TR-93-1, Department of Computer Science, University of Toronto, URL: www.cs.toronto.edu/radford, 1993.

[16] M. Rivera and J. L. Marroquin, "Efficient half-quadratic regularization with granularity control," Image and Vision Computing, Vol. 21, pp. 345-357, 2003.

[17] M. Rivera and J. L. Marroquin, "Half-quadratic cost functions for phase unwrapping," Optics Letters, Vol. 29, No. 5, pp. 504-506, 2004.

[18] M. Rivera, "Robust phase demodulation of interferograms with open or closed fringes," J. Opt. Soc. Am. A, Vol. 22, No. 6, pp. 1170-1175, 2005.

[19] C. P. Robert and G. Casella, Monte Carlo Statistical Methods, Springer-Verlag, 2nd Edition, 2004.

[20] K. Sauer and C. Bouman, "Bayesian Estimation of Transmission Tomograms Using Segmentation Based Optimization," IEEE Trans. on Nuclear Science, Vol. 39, No. 4, pp. 1144-1152, 1992.

[21] J. J. Villa, J. I. De la Rosa, G. Miramontes, and J. A. Quiroga, "Phase recovery from a single fringe pattern using an orientational vector field regularized estimator," J. Opt. Soc. Am. A, Vol. 22, No. 12, pp. 2766-2773, 2005.
