Joint blind restoration and surface recovery in photometric stereo

Manjunath V. Joshi and Subhasis Chaudhuri
Department of Electrical Engineering, Indian Institute of Technology–Bombay, Mumbai 400076, India

Received August 20, 2004; accepted October 18, 2004

We address the problem of simultaneous estimation of scene structure and restoration of images from blurred photometric measurements. In photometric stereo, the structure of an object is determined by using a particular reflectance model (the image-irradiance equation) without considering the blurring effect. What we show is that, given arbitrarily blurred observations of a static scene captured with a stationary camera under different illuminant directions, we can still obtain the structure, represented by the surface gradients and the albedo, and also perform a blind image restoration. The surface gradients and the albedo are modeled as separate Markov random fields, and a suitable regularization scheme is used to estimate the different fields as well as the blur parameter. The results of the experiments are illustrated with real as well as synthetic images. © 2005 Optical Society of America

OCIS codes: 100.0100, 100.3020, 150.0150, 150.6910, 120.0120, 120.5240.

1. INTRODUCTION
Shape or depth recovery is a classic problem in computer vision. The aim is to obtain three-dimensional (3D) structural information from one or more two-dimensional (2D) images. Techniques to recover the shape are called shape-from-X techniques, where X can be shading, stereo, motion, texture, etc. Among these, the photometric stereo (PS) and shape-from-shading (SFS) methods recover the shape from gradual variations in shading. Whereas the PS method employs two or more observations under different illuminations or light-source positions, the SFS method provides a shape estimate from a single intensity image captured under a single light-source position.
In many practical applications, PS has been found to yield better shape estimates. It requires that the complete area of the object be illuminated by the use of different light-source positions, and it hence gives better results with a larger number of images taken under different illuminant positions. PS estimates the slope of the surface by measuring how the image intensity varies with the direction from which the surface is illuminated.

As discussed in Section 2, researchers traditionally treat the SFS problem without considering the blur introduced by the camera. However, when one captures the images with a camera, degradation in the form of blur and noise is often present in the observed image intensities. Some low-end commercial cameras fail to set the autofocus properly when the illumination is not bright, which is typically the case in photometric measurements. Similarly, when the illuminant direction is changed for subsequent shots, the camera tries to readjust the focus (although there is no need for it, as the object and the camera are both stationary), and focusing error does creep in. It is natural that the variations in image intensity due to camera blur affect the estimates of the surface shape. Thus the estimated shape differs from the true shape in spite of the possibility of having knowledge of the true surface-reflectance model. This limits the applicability of these techniques in three-dimensional (3D) computer-vision problems.

It is to be mentioned here that all the existing approaches in the literature assume a pinhole model, which inherently assumes that there is no camera blur during observation. However, the blur could occur for a variety of reasons, such as improper focus setting, as mentioned above, or camera jitter. This motivates us to restore the image as well, while recovering the structure.
The problem can then be stated as follows: Given a set of blurred observations of a static scene taken with different light-source positions, obtain the true depth map and the albedo of the surface, and also restore the images for the different light-source directions; i.e., estimate the true structure as well as the images. Since the camera blur is not known, we additionally need to estimate the point-spread function (PSF) of the blur that caused the degradation. In this paper we assume point light-source illumination with known source directions and an orthographic projection. The problem can be classified as a joint blind-restoration and surface-recovery problem. Since such a problem is inherently ill-posed, we need suitable regularization of all the fields to be estimated, i.e., the surface gradients as well as the albedo. We show in detail that the entire problem can be expressed as a simple problem of regularization and hence can be solved iteratively with existing mathematical tools.

This paper is organized as follows. In Section 2 we review some of the previous studies on PS and image restoration. In Section 3 we discuss how one can model the degraded image formation from photometric observations. The regularization-based approach to deriving the cost function for the estimation of the surface gradients, the albedo, and the restored image is the subject matter of Section 4. Results of experiments on real as well as synthetic images are presented in Section 5 to illustrate the capabilities of our approach, and we conclude with a summary in Section 6.

1066 J. Opt. Soc. Am. A / Vol. 22, No. 6 / June 2005. M. V. Joshi and S. Chaudhuri. 1084-7529/05/061066-11/$15.00 © 2005 Optical Society of America




2. RELATED WORK
Shading is an important cue for human perception of surface shape. Researchers in computer vision have attempted to use the shading information to recover the 3D shape. Horn was one of the first researchers to study this problem by casting it as a solution to second-order partial differential equations.1 The SFS problem is typically solved with four different approaches: regularization, propagation, local, and linear. Ikeuchi and Horn2 were the first to use the energy-minimization technique by using the brightness and the smoothness constraints. The propagation approach is basically the characteristic strip method proposed by Horn.1 Pentland3 used the local approach to recover the shape information by using the image intensity and its first and second derivatives. He used the assumption that the surface is locally spherical at each point. Linear approaches proposed by Pentland4 and by Tsai and Shah5 linearize the reflectance map and solve for the shape. It is well known that the SFS problem is ill-posed, and hence the solution may not be reliable. Also, most of the traditional SFS algorithms assume that the surface has constant albedo values. This assumption restricts the class of recoverable images. Researchers thus attempted to solve the problem of shape recovery by using PS with multiple images to provide additional information for robust shape recovery. Even though some of the details of the local surface characteristics may be lost owing to the least-squares approach in PS, the global accuracy, in most cases, is better than in the SFS method. The idea of PS was initially formulated by Woodham6,7 and later applied by others.8,9 They used multiple images that were taken by using distant light sources with different light-source directions. They considered both Lambertian and non-Lambertian reflectance models of the surface.

The authors of Ref. 10 propose two robust shape-recovery algorithms using PS images. They combine the finite-triangular-surface model and the linearized-reflectance image-formation model to express the image irradiance. The recovery of albedo values for color images by using PS is considered by Chen et al.11 They showed that surfaces rendered by using the calculated albedo values are more realistic than surfaces rendered by using a constant albedo. The authors of Refs. 12–14 use a calibration object of known shape with a constant albedo to obtain a nonlinear mapping between the image irradiance and the surface orientation in the form of a lookup table. A neural-network-based approach to PS for a rotational object with a nonuniform reflectance factor is considered in Ref. 15. In Ref. 16 shape from PS with uncalibrated light sources is discussed. The recovery of the surface normal in a scene, using images produced under a general lighting condition that may be due to a combination of point sources, extended sources, and diffused lighting, is considered by Basri and Jacobs.17 They assume that all the light sources are isotropic and distantly located from the object. To improve the performance of the shape recovery, the SFS algorithm is integrated with PS in Ref. 18. Here the recovered albedo and the depth from photometric stereo are used in SFS to obtain a better depth estimate. A different method for obtaining the absolute depth from multiple images captured with different light-source positions was presented in Ref. 19. It involves solving a set of linear equations and is applicable to a wide range of reflectance models. Another approach to PS, in which the input images are matched through an optical flow, is presented in Ref. 20. Recently PS has been applied by several researchers to the analysis, description, and discrimination of surface textures.21–24 It has also been applied to the problems of machine inspection of paper25 and identification of machined surfaces.26

We now discuss, very briefly, a few of the research studies carried out on image restoration and blind image deconvolution. The general approaches for image restoration include stochastic and deterministic methods. Stochastic approaches assume that the original image is a realization of a random field, usually a Markov random field (MRF). For a comprehensive survey of various digital image-restoration techniques published before 1997, the reader is referred to Ref. 27. More details about restoration in the presence of unknown defocus blur can be found in Refs. 28 and 29. A plethora of methods have also been proposed to solve the problem of blind image deconvolution.30–32 A variant of maximum-likelihood estimation, i.e., the expectation-maximization algorithm, has been used for blur identification and image restoration in Refs. 33 and 34. Some of the deterministic approaches based on projection-onto-convex-sets iterative restoration techniques are discussed in Refs. 35–39. Recently Candela et al. used local spectral inversion of a linearized total variation model for denoising and deblurring.40 The model here consists of a system of partial differential equations obtained as a local linearization of a variational problem. Reconstruction of degraded images with impulsive noise is considered in Ref. 41. An unsupervised edge-preserving image restoration is discussed by Bedini et al.42 They model the image to be restored as an MRF and use a mixed-annealing algorithm for maximum a posteriori restoration that is periodically interrupted to compute the maximum-likelihood estimates of the MRF parameters.

As discussed above, researchers have treated the shape-estimation and shape-restoration problems separately. Also, for shape estimation with use of the shading cue, the blur introduced by the camera is never considered. We demonstrate in this paper that the shape-estimation and the shape-restoration problems can be handled jointly in a unified framework. In other words, we extract the depth values by considering the effect of observation blur and at the same time restore the image by identifying the blur parameter, using a suitable regularization approach.

3. OBSERVATION MODEL
Consider a scene illuminated with different light-source positions. Figure 1 shows the setup for capturing the scene by using the PS method. Here we consider that both the object and the camera positions are stationary. We capture the images with a large distance between the object and the camera, thus making a reasonable assumption of orthographic projection, and we neglect the depth-related perspective distortions. The light source is assumed to be a distant point light. Under this assumption


the incident light rays can be characterized by a unit directional vector. This vector is constant for each light-source position. Now, given an ensemble of images captured with different light-source positions and using the theory of PS, we can express the intensity of the image (the image irradiance) at a point by using the image-irradiance equation43 as

f(x, y) = ρ(x, y) R(p(x, y), q(x, y)) = ρ(x, y) n(x, y) · s,     (1)

where n is the unit surface normal at a point on the object surface, s is the unit vector defining the light-source direction, and ρ is the albedo or surface reflectance of the surface. The surface gradients (p, q) are used to specify the unit surface normal, given by n = (−p, −q, 1)^T / (1 + p² + q²)^{1/2}, and R is the reflectance function. We concentrate on the Lambertian model in our study, but the method can also be extended to other reflectance models with appropriate changes to handle the estimation of the surface normals. Here s = (−p_s, −q_s, 1)^T / (1 + p_s² + q_s²)^{1/2} is the unit vector of the light-source direction pointing toward the source. The surface gradients (p, q) and the albedo can be recovered by using a minimum of three observations, provided that the three equations due to the measurements are linearly independent. In practice, one often uses more than three observations owing to the inconsistency of the measurement equations, and a least-squares solution is sought. The solution to Eq. (1) obtained by using the different measurements gives the true surface gradients and the albedo in the least-squares sense only when we do not consider the camera blur. However, owing to improper focus setting or camera blur, the observations are often blurred. Thus, considering the effect of blur, we can express the observed image as

g(x, y) = h(x, y) * f(x, y) + η(x, y),

where h(x, y) represents the two-dimensional PSF of the imaging system and η(x, y) is an additive noise introduced by the system.

Fig. 1. Observation system for photometric stereo.
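Before turning to the blurred case, the classical least-squares recovery of (p, q) and the albedo from Eq. (1) can be sketched as below. This is a minimal NumPy sketch; the function and variable names are ours, not the paper's.

```python
import numpy as np

def photometric_stereo_ls(images, sources):
    """Classical least-squares photometric stereo from Eq. (1).

    Solves S b = I for b = rho * n at every pixel, then splits b into
    albedo and surface gradients.

    images  : (k, N, N) array of shaded images f_m(x, y), k >= 3.
    sources : (k, 3) array of unit light-source vectors s_m.
    """
    k = images.shape[0]
    S = np.asarray(sources, dtype=float)          # (k, 3)
    I = images.reshape(k, -1)                     # (k, N*N)
    # Least-squares solve of S b = I at every pixel simultaneously.
    b, *_ = np.linalg.lstsq(S, I, rcond=None)     # (3, N*N)
    rho = np.linalg.norm(b, axis=0)               # albedo = |rho * n|
    n = b / np.maximum(rho, 1e-12)                # unit surface normals
    # n = (-p, -q, 1)^T / (1 + p^2 + q^2)^(1/2), so p = -n1/n3, q = -n2/n3.
    nz = np.maximum(n[2], 1e-12)
    p = (-n[0] / nz).reshape(images.shape[1:])
    q = (-n[1] / nz).reshape(images.shape[1:])
    return p, q, rho.reshape(images.shape[1:])
```

With three linearly independent source directions this reduces to inverting a 3×3 system; with more observations it gives the least-squares solution the paper uses for its initial estimates.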

Considering k light-source positions, we can then express the observed images as

g_m(x, y) = h(x, y) * f_m(x, y) + η_m(x, y),   m = 1, …, k.     (2)

Here f_m(x, y) corresponds to the actual shaded image due to the mth light-source position. From Eq. (1), f_m(x, y) is a function of ρ, p, and q. It may be noted that since there is no relative movement between the camera and the object, the PSF remains the same for all the observations. Note that we assume here that there is no chromatic aberration due to the lens in the observations. If it does exist, it will affect each color channel differently.

If f_m is the N²×1 lexicographically ordered vector representing the image irradiance for the mth light-source position, then with Eq. (2) the observed images can be modeled as

g_m = H(σ) f_m(ρ, p, q) + η_m,   m = 1, …, k,     (3)

where H(σ) is an N²×N² matrix, with σ representing the blur parameter. f_m is the true focused image for the mth light-source position and is of size N²×1. f_m is a function of the surface gradients and the albedo, as seen from Eq. (1). In this paper we assume that the blur is due to the camera's being out of focus, which can then be modeled by a pillbox blur or by a Gaussian PSF characterized by the variance σ² (Ref. 44). We assume that the blur is space invariant. This assumption is tantamount to assuming that the variation in depth in the object is very small compared with its distance from the camera, which means that the object size cannot be too big, as it has to be far off from both the camera and the light source. Also, since there should be no ambient light for the reflectance map to be valid, the method does not have any application in outdoor scene modeling. Given the above constraints, H becomes a block-Toeplitz matrix representing the linear convolution operator. η_m is the N²×1 noise vector that is zero-mean i.i.d., and hence the multivariate noise probability density function is given by

P(η_m) = [1 / (2πσ_η²)^{N²/2}] exp[−(1/(2σ_η²)) η_m^T η_m],     (4)

where σ_η² denotes the variance of the noise process. Our problem now is to estimate the blur parameter σ, the albedo ρ, and the surface gradients p and q, and also to perform blind image restoration, given the observations g_m, m = 1, …, k. This is an ill-posed inverse problem, requiring suitable regularization. One may employ various kinds of regularization techniques, including that of total variation, each having its own advantages and disadvantages. We use the MRF prior for regularization in this paper.
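The degradation model of Eqs. (2) and (3) can be simulated as follows. This sketch stands in the circular (FFT-based) convolution for the block-Toeplitz operator H; the helper names are ours, and the 13×13 kernel size follows the mask size the paper adopts later.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Discrete Gaussian PSF on a size x size grid, normalized to sum 1."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return h / h.sum()

def observe(f, sigma, noise_std=0.0, psf_size=13, rng=None):
    """One blurred observation g = h(.; sigma) * f + eta, as in Eq. (2).

    Circular convolution via the FFT stands in for the block-Toeplitz
    H of Eq. (3); this is an illustrative choice, not the paper's code.
    """
    h = gaussian_psf(psf_size, sigma)
    # Embed the PSF in an image-sized kernel centred at the origin so
    # that pointwise multiplication in the Fourier domain applies H.
    H = np.zeros_like(np.asarray(f, dtype=float))
    H[:psf_size, :psf_size] = h
    H = np.roll(H, (-(psf_size // 2), -(psf_size // 2)), axis=(0, 1))
    g = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(H)))
    if noise_std > 0:                       # additive i.i.d. noise eta
        rng = np.random.default_rng() if rng is None else rng
        g = g + rng.normal(0.0, noise_std, g.shape)
    return g
```

Applying `observe` to each shaded image f_m with the same σ reproduces the assumption that the PSF is identical for all k observations.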

4. SIMULTANEOUS ESTIMATION OF STRUCTURE AND BLIND RESTORATION
A. Stochastic Model of Fields
As we are using a regularization-based approach for the simultaneous estimation of the different parameter fields (p, q, and ρ), we need to use suitable priors for the fields to be estimated. The MRF provides a convenient and consistent way of modeling context-dependent entities such as image pixels, depth of the object, and other spatially correlated


features. This is achieved through characterizing mutual influence among such entities by using MRF modeling. The practical use of MRF models is ascribed largely to the equivalence between MRFs and Gibbs distributions established by Hammersley and Clifford.45 Let Z be a random field over an arbitrary N×N lattice of sites L = {(i, j) | 0 ≤ i, j ≤ N−1}. From the Hammersley–Clifford theorem,46 which proves the equivalence of an MRF and a Gibbs distribution, we have P(Z = z) = (1/Z_p) exp[−U(z)], where z is a realization of Z, Z_p is the partition function given by Σ_z exp[−U(z)], and U(z) is the energy function given by U(z) = Σ_{c∈C} V_c(z). V_c(z) denotes the potential function of clique c, and C is the set of all cliques.

The lexicographically ordered field z satisfying the Gibbs density function is now written to have the density function P(z) = (1/Z_p) exp[−Σ_{c∈C} V_c(z)]. For it to be possible to employ a simple and fast minimization technique such as gradient descent, it is desirable to have a convex energy function. To this end we consider pairwise cliques on a first-order neighborhood and impose a quadratic cost that is a function of finite-difference approximations of the first-order derivative at each pixel location; i.e.,

Σ_{c∈C} V_c(z) = μ Σ_{k=1}^{N−2} Σ_{l=1}^{N−2} [(z_{k,l} − z_{k,l−1})² + (z_{k,l} − z_{k−1,l})² + (z_{k,l+1} − z_{k,l})² + (z_{k+1,l} − z_{k,l})²] = U(z),     (5)

where μ represents the penalty for departure from smoothness in z. We use this particular energy function in our studies in order to impose the smoothness constraint on the solution. Any other form of energy function can also be used without changing the solution modality proposed here.
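The quadratic smoothness energy of Eq. (5) translates directly into code. The following direct, unoptimized sketch is our own:

```python
import numpy as np

def smoothness_energy(z, mu=1.0):
    """Quadratic MRF prior of Eq. (5): squared first-order finite
    differences over pairwise cliques, summed over the interior sites
    k, l = 1, ..., N-2 of the N x N lattice."""
    N = z.shape[0]
    u = 0.0
    for k in range(1, N - 1):
        for l in range(1, N - 1):
            u += ((z[k, l] - z[k, l - 1]) ** 2 + (z[k, l] - z[k - 1, l]) ** 2
                  + (z[k, l + 1] - z[k, l]) ** 2 + (z[k + 1, l] - z[k, l]) ** 2)
    return mu * u
```

A constant field gives zero energy, and the energy grows with the squared local gradients, which is exactly the smoothness penalty used for each of the fields below.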

B. Structure and Image Recovery with Known Blur
Once we define appropriate prior models for the different fields p, q, and ρ, and the image-formation model [Eq. (1)], we need to apply a suitable regularization scheme to obtain an accurate solution. For easier understanding, we first consider the case in which we estimate the albedo and the surface gradients, and also restore the image, when the blur is known. This restriction will be relaxed in Subsection 4.C.

Fig. 2. Illustration of the proposed method for image and structure recovery when the blur PSF is known.

The proposed method can now be illustrated through Fig. 2. We obtain k observations of a static scene by varying the direction of the point light source. It is assumed that the directions are known. We also assume that the reflectance model is known. One can now obtain a least-squares estimate of the surface gradients (p^(0)(x, y), q^(0)(x, y)) and the albedo ρ^(0)(x, y) (assuming that the observations are free of blur), which are used as the initial estimates. As expected, these estimates are quite poor owing to the blurred observations. We introduce context dependencies in the estimated fields by modeling them as separate MRFs. Thus the corresponding priors are U(p), U(q), and U(ρ). Using Eq. (5), we then use the following energy function U(w) for each of the fields p, q, and ρ:

U(w) = μ_w Σ_{k=1}^{N−2} Σ_{l=1}^{N−2} [(w_{k,l} − w_{k,l−1})² + (w_{k,l} − w_{k−1,l})² + (w_{k,l+1} − w_{k,l})² + (w_{k+1,l} − w_{k,l})²],     (6)

where w = p, q, or ρ. Here μ_w is a penalty term for departure from smoothness. Thus, considering the brightness-constraint term and the smoothness terms for regularizing the solution, we can express the final cost function as

ε = Σ_{m=1}^{k} ‖g_m − H(σ) f_m(ρ, p, q)‖² + U(p) + U(q) + U(ρ).     (7)

This cost function is convex and can be minimized with respect to the fields p, q, and ρ by using a gradient-descent method. The initial estimates p^(0), q^(0), and ρ^(0) are used here to speed up the convergence. Having obtained the estimated fields p and q and the albedo ρ, one can obtain the restored image for a particular light-source direction by using Eq. (1). Since the surface gradients p and q are already estimated, it is straightforward to recover the depth d(x, y) of the scene, which is obtained by solving the following equation iteratively (see Ref. 43 for details):

∇²d(x, y) = p_x(x, y) + q_y(x, y),     (8)

where p_x and q_y now represent the derivatives of the estimated surface gradients in the x and y directions, respectively. It should be noted here that we are not performing direct image deconvolution, which is highly ill-posed and leads to numerical instability.
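One common way to solve the Poisson equation (8) is an FFT-based solver with periodic boundary conditions; the sketch below shows that alternative, not the iterative scheme of Ref. 43 that the paper uses. The depth is recovered only up to an additive constant, fixed here by zeroing the mean.

```python
import numpy as np

def depth_from_gradients(p, q):
    """Solve Eq. (8), laplacian(d) = p_x + q_y, for the depth d with an
    FFT Poisson solver (periodic boundary). In the Fourier domain
    -(wx^2 + wy^2) D = j*wx*P + j*wy*Q, so D follows by division."""
    N, M = p.shape
    wx = 2.0 * np.pi * np.fft.fftfreq(M)     # radians per sample, x axis
    wy = 2.0 * np.pi * np.fft.fftfreq(N)     # radians per sample, y axis
    WX, WY = np.meshgrid(wx, wy)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = WX**2 + WY**2
    denom[0, 0] = 1.0                        # avoid 0/0 at the dc bin
    D = (-1j * WX * P - 1j * WY * Q) / denom
    D[0, 0] = 0.0                            # unknown mean depth set to zero
    return np.real(np.fft.ifft2(D))
```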

C. Blur Estimation
We now extend the proposed algorithm to a more realistic situation in which the blur PSF is also unknown. In order to do that, we must estimate the amount by which an image is blurred. When the images are captured with a camera, the blur phenomenon can occur for various reasons even when the camera is stationary, such as improper focusing or camera jitter during imaging. Thus it is necessary to estimate the blur while restoring the image and estimating the structure. Considering that the unknown blur is due to the effect of improper focusing, it can be modeled by a Gaussian PSF, and we need to estimate the blur parameter σ (the standard deviation) that determines the severity of the blur.

We estimate the blur by using a simple approach suggested by Subbarao.47 Since the blur is due mostly to


camera defocus, the PSF can be easily parameterized by a single parameter σ (see Ref. 44 for details). Hence the PSF-estimation problem simplifies drastically. Let g(x, y) and f(x, y) be two images with σ being the blur parameter, where g(x, y) represents the blurred image and f(x, y) is the true focused image. Then g(x, y) can be expressed in terms of f(x, y) by a simple convolution operation as

g(x, y) = h(x, y; σ) * f(x, y).     (9)

Using the fact that the blur PSF h(x, y; σ) is Gaussian, we can express it in the Fourier domain as

H(ω_x, ω_y) = exp[−(ω_x² + ω_y²)σ²/2].     (10)

Now, taking the Fourier transform on both sides of Eq. (9), we can easily derive the following equation for the estimated blur:

σ² = −[2/(ω_x² + ω_y²)] log[G(ω_x, ω_y)/F(ω_x, ω_y)],     (11)

where G and F are the Fourier transforms of the functions g and f, respectively. Measuring the Fourier transform at a single point (ω_x, ω_y) is, in principle, sufficient to obtain the value of σ², but a more robust estimate can be obtained by taking the average over a small area in the frequency domain; i.e.,

σ² = (1/A) ∫∫_R −[2/(ω_x² + ω_y²)] log[G(ω_x, ω_y)/F(ω_x, ω_y)] dω_x dω_y,     (12)

where R is a small region in the frequency domain and A is the area of R.
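Equation (12) can be sketched in code as a discrete average over frequency bins. Taking R as a small low-frequency square of half-width `radius` that excludes the dc bin is our own illustrative choice; the paper does not specify R.

```python
import numpy as np

def estimate_sigma(g, f, radius=3):
    """Blur estimate of Eq. (12): average -2/(wx^2 + wy^2) * log|G/F|
    over a small region R of the frequency domain. Here R is a
    (2*radius+1)^2 square of low-frequency DFT bins, excluding dc."""
    G, F = np.fft.fft2(g), np.fft.fft2(f)
    wx = 2.0 * np.pi * np.fft.fftfreq(f.shape[1])
    wy = 2.0 * np.pi * np.fft.fftfreq(f.shape[0])
    vals = []
    for ky in range(-radius, radius + 1):
        for kx in range(-radius, radius + 1):
            if kx == 0 and ky == 0:
                continue               # the dc bin carries no blur information
            w2 = wx[kx] ** 2 + wy[ky] ** 2
            ratio = np.abs(G[ky, kx]) / np.abs(F[ky, kx])
            vals.append(-2.0 / w2 * np.log(ratio))
    return np.sqrt(max(np.mean(vals), 0.0))
```

Averaging over many bins, as in Eq. (12), makes the estimate robust to individual bins where F is small or noisy.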

. Structure Recovery and Blind Image Restorationhe blur-estimation technique as discussed in Subsection.C gives the estimate of blur only when the true focusedmage fsx ,yd and its blurred version gsx ,yd are available.ut our problem is to estimate the blur given only thelurred observations, since the true focused images forifferent light-source positions are unknown. In this sub-

ig. 3. Illustration of the proposed method for image and structhows that we have added another block, “Estimate blur,” to est

ection we describe an approach for estimating the blurarameter and the structure simultaneously, given onlyhe blurred photometric observations. As already men-ioned, for blind image restoration we use a Gaussian blurhat can be parameterized by the standard deviation s.

The proposed iterative approach for joint structure re-overy and blind image restoration can be obtained byuitably modifying the block diagram given in Fig. 2. Thepproach is presented schematically in Fig. 3. Using PS,e obtain the least-squares estimates of the fields p, q,nd r that serve as the initial estimate, as before. The op-imization [Eq. (7)] is carried out with these initial esti-ates by using an initial value of the blur parameter ss0d.he cost function in Eq. (7) is minimized for p, q, and r,ith ss0d kept constant. Having estimated p, q, and r, we

btain a revised estimate of s as follows. The new esti-ates of fields psnd, qsnd, rsnd are used in image irradiance

quation (1) along with the source directions to get the es-imates of the focused images for different light-source po-itions. We then obtain the new blur estimate ssnd by us-ng Eq. (12), holding psnd, qsnd, rsnd constant. Here thelurs s are calculated between the observed images

1,g2, . . . , gk and the image estimates f1snd , f2

snd , . . . , fksnd,

nd the average value of the estimated blur parameter ss used as the updated one. The new value of ssnd obtainedhus is then used again in the optimization [Eq. (7)] to up-ate the fields p, q, and r. The blur parameter and thetructure (along with the images for different light-sourceirections) are then estimated in an alternative way byeeping the blur parameter constant and updating thetructure and vice versa. The estimations of the blur pa-ameter and the different fields are carried out until con-ergence is obtained in terms of the update for the param-ter ssnd. The blur thus obtained is the final estimatedne. The corresponding gradient fields are then used toalculate the depth map. It should be mentioned here thathe mask size chosen for the PSF should be sufficientlyarge compared with the value of s. Typically we have theize be larger than 6s. Since s is not known, we set theSF kernel size to be 13313 pixels, as the defocus blururing the experimentation is rarely expected to exceed

Fig. 3. Schematic of the proposed approach for joint blind restoration and structure recovery when the blur PSF is unknown. A comparison with Fig. 2 shows the additional step to estimate the blurs between (g_1, f_1^(n+1)), (g_2, f_2^(n+1)), …, and (g_k, f_k^(n+1)).


M. V. Joshi and S. Chaudhuri Vol. 22, No. 6 /June 2005/J. Opt. Soc. Am. A 1071

σ = 2 pixels. Further, one may note that we have made no attempt to perform any deconvolution of the observed data after having estimated the blur parameter. Experimentally we found that such an effort always leads to performance that is inferior compared with that of the proposed method. The complete procedure is summarized below in terms of the steps involved.

Step 1. Obtain initial estimates p^(0), q^(0), r^(0) using PS on the blurred data.
Step 2. Choose an initial blur parameter σ^(0).
Step 3. Set n = 0.
Step 4. Update the albedo and the scene structure:

{p^(n+1), q^(n+1), r^(n+1)} ← arg min_{p,q,r} Σ_{m=1}^{k} ‖g_m − H(σ) f_m(r, p, q)‖² + U(p) + U(q) + U(r).

Step 5. Resynthesize the focused images f_1, f_2, …, f_k for the different light-source positions using Eq. (1).
Step 6. Estimate the blurs between (g_1, f_1^(n+1)), (g_2, f_2^(n+1)), …, and (g_k, f_k^(n+1)) using Eq. (12):

σ_i² = (1/A) ∫∫_R [−2/(ω_x² + ω_y²)] log[G_i(ω_x, ω_y)/F_i(ω_x, ω_y)] dω_x dω_y

for the ith pair, and calculate the average value of the blur σ̂^(n+1) over the k observations.

Step 7. Set n = n + 1 and go to step 4 until convergence in the estimate of σ is obtained.

Step 8. Solve for the depth using Eq. (8):

∇²d(x, y) = p_x(x, y) + q_y(x, y).
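Step 8 amounts to a discrete Poisson equation, which can be solved efficiently in the frequency domain. A minimal sketch (not the authors' implementation), assuming periodic boundaries and forward-difference gradients, and recovering the depth only up to an additive constant:

```python
import numpy as np

def depth_from_gradients(p, q):
    """Solve the Poisson equation of step 8, laplacian(d) = p_x + q_y,
    for the depth map d via the FFT (periodic boundaries assumed)."""
    H, W = p.shape
    # Divergence of the gradient field by backward differences, so that the
    # forward/backward pair yields the standard 5-point discrete Laplacian.
    div = (p - np.roll(p, 1, axis=1)) + (q - np.roll(q, 1, axis=0))
    # Eigenvalues of the discrete Laplacian in the DFT basis.
    wx = 2.0 * np.pi * np.fft.fftfreq(W)
    wy = 2.0 * np.pi * np.fft.fftfreq(H)
    denom = (2.0 * np.cos(wx)[None, :] - 2.0) + (2.0 * np.cos(wy)[:, None] - 2.0)
    denom[0, 0] = 1.0          # avoid dividing by zero at the DC term
    D = np.fft.fft2(div) / denom
    D[0, 0] = 0.0              # depth is recovered up to an additive constant
    return np.real(np.fft.ifft2(D))
```

Given exact forward-difference gradients of a periodic surface, this reproduces the surface up to a constant offset.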

A comment about the convergence of the proposed technique is now in order. Like many similar optimization algorithms wherein one alternately estimates two sets of parameters by freezing one set and updating the other, global convergence cannot be proved. However, it has been shown in Ref. 48 that the computation is quite stable and converges to a local minimum. Since steps 5 and 6 in the above description involve nonlinearities, a good initial estimate may be required for obtaining a quality solution. As per the observation of Ref. 48, even an initial estimate of σ = 0 provides a good starting point. However, we have carried out extensive experiments under varying initial conditions and different measurement sets and have never experienced any difficulty in convergence. We have also experimented on simulated data sets when the observation noise is quite high and the amount of defocus blur is large. Under such taxing circumstances we found the estimate of the blur parameter σ to be a bit underestimated. Apart from the above case, the convergence of the proposed method has been very satisfactory.
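The blur update of step 6 has a direct discrete analog: for a Gaussian PSF the observed spectrum satisfies G(ω) = F(ω) exp[−σ²(ω_x² + ω_y²)/2], so each reliable frequency yields an estimate of σ² that can be averaged over the region R, as in Eq. (12). A minimal sketch (the function name and the choice of frequency mask are illustrative assumptions, not the authors' code):

```python
import numpy as np

def estimate_sigma(g, f, eps=1e-8):
    """Discrete sketch of Eq. (12): for a Gaussian PSF,
    G(w) = F(w) * exp(-sigma^2 * (wx^2 + wy^2) / 2), hence
    sigma^2 = -2/(wx^2 + wy^2) * log|G/F|, averaged over a region R
    of frequencies where F is large enough to trust the ratio."""
    G = np.fft.fft2(g)
    F = np.fft.fft2(f)
    wy = 2 * np.pi * np.fft.fftfreq(g.shape[0])[:, None]
    wx = 2 * np.pi * np.fft.fftfreq(g.shape[1])[None, :]
    w2 = wx**2 + wy**2
    # Region R: skip DC and frequencies where |F| is too small.
    mask = (w2 > 0) & (np.abs(F) > eps)
    ratio = np.abs(G[mask]) / np.abs(F[mask])
    s2 = -2.0 / w2[mask] * np.log(np.maximum(ratio, eps))
    return float(np.sqrt(max(np.mean(s2), 0.0)))
```

On noisy data the choice of R matters: frequencies where F is small make the log ratio unreliable, which is consistent with the slight underestimation of σ noted above for heavy noise and large blur.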

5. EXPERIMENTAL RESULTS
A. Experiments with Known Blur

In this section we present some of our experimental results to demonstrate the efficacy of the proposed regularization-based approach. First we show the experiments on real images for image restoration, depth recovery, and albedo estimation with known (simulated) blur. For this experiment we considered observations for eight light-source positions for the computation of the surface gradients and the albedo. Care was taken to make sure that the aperture was very small (for a larger depth of field) and that the images were all sharply in focus. This constraint will be removed in the subsequent experiments, in which the observations will involve an unknown PSF. The parameter for the gradient-descent algorithm, i.e., the step size, is chosen as 0.01 for the estimation of all three fields, namely, p, q, and r. The same value is used in all experiments in this section. The camera blur is simulated by using a uniform circular blur mask; i.e., the captured images with different light-source positions are convolved with the uniform blur mask, and the results are then used as the blurred observations. This blur approximates an out-of-focus blur as a pillbox function and is used in many research simulations.27 Here the blur is parameterized in terms of the window size and is modeled as a uniform-intensity distribution within a circular disk of radius r,

h(x, y) = 1/(πr²) if √(x² + y²) ≤ r, and h(x, y) = 0 otherwise.
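The pillbox mask above is straightforward to synthesize; a sketch of the blur simulation follows (function names are illustrative, and borders are treated as periodic for simplicity):

```python
import numpy as np

def pillbox_psf(radius):
    """Uniform circular (pillbox) out-of-focus blur mask:
    h(x, y) = 1/(pi r^2) inside the disk and 0 outside,
    renormalized so that the discrete mask sums to 1."""
    r = int(radius)
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    h = (x**2 + y**2 <= radius**2).astype(float)
    return h / h.sum()

def blur(image, psf):
    """Simulate a blurred observation by 2-D convolution (periodic borders)."""
    pad = np.zeros_like(image)
    ph, pw = psf.shape
    pad[:ph, :pw] = psf
    # Centre the kernel so the output is not shifted.
    pad = np.roll(pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(pad)))
```

Here pillbox_psf(2) gives a 5×5 mask with r = 2 and pillbox_psf(4) a 9×9 mask with r = 4, matching the masks used in the experiments below.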

First we consider an object for which the imaged scene gives a smooth intensity variation but has arbitrary depth variations. Figure 4(a) shows one of the eight focused images (before being blurred) of the fluffy stuffed dog Jodu, of size 235×235, with source position (ps = −0.3639, qs = −0.5865). The blurred image produced with a mask of size 5×5, i.e., r = 2, for the same source position is shown in Fig. 4(b). The corresponding restored Jodu image produced with the proposed approach is shown in Fig. 5(a). It can be observed that because of the blurring the edge details are lost [see Fig. 4(b)]. We note that these details are very well recovered with the proposed approach. Observe the shadow and the bisecting line on the tongue in the restored image. For the sake of comparison, we show the restored image obtained through direct image deconvolution (with the MATLAB function "deconvlucy," which follows the Lucy–Richardson algorithm) in Fig. 5(b). As expected, the result obtained under direct deconvolution is poor in comparison with that of the proposed approach

Fig. 4. (a) Focused image of Jodu captured with a light-source position (ps = −0.3639, qs = −0.5865). (b) Simulated blurred observation of Jodu obtained by using a mask size of 5×5 for the image shown in (a).




[see Fig. 5(a)]. See the nose, tongue, and the hind-leg regions of the stuffed dog and compare the performance. It may be noted that although we restore the images for all the source positions, we show the restored image for a particular light-source direction only.

Figures 6(a) and 6(b) show the depth map as obtained from PS by using the focused images (not suffering from blur) and the true albedo, respectively. The depth map is displayed as an intensity variation. The depth values were calculated with the estimated values of p and q by using Eq. (8) and then were scaled between 0 and 255. The brighter an area is, the nearer it is to the camera. The depth map produced by using the surface gradients obtained from standard PS on the blurred images is shown in Fig. 7(a), and the corresponding surface albedo is shown in Fig. 7(b). This result does not account for the presence of blur in the observations. The depth map estimated from the blurred observations by using the proposed method and the recovered albedo are displayed in Figs. 8(a) and 8(b), respectively.

It can be seen clearly that the depth estimated with the proposed technique is very similar to the depth map due to the focused images shown in Fig. 6(a). The distortion introduced in the depth map shown in Fig. 7(a) is considerably removed in Fig. 8(a). The depth distortion near the chest and the mouth region is clearly visible. We observed an improvement in terms of the mean squared error as well. The mean squared error calculated between the true

Fig. 5. (a) Restored Jodu image for the observation given in Fig. 4(b), obtained by using the proposed approach when the blur is known. (b) Restored Jodu image obtained by the deconvolution operation.

Fig. 6. (a) True depth map obtained from PS by using the observations not suffering from blurring. (b) Recovered albedo obtained by using the nonblurred (focused) images.

depth map [Fig. 6(a), which does not have blurring] and the depth map due to the blurred Jodu observations [Fig. 7(a)] was 0.0784, whereas it decreased to just 0.0005 for the depth map obtained by using the proposed approach. Thus there is a substantial improvement in the recovered depth map with the proposed approach.

We now look at how well the albedo is recovered with the proposed approach. It can be seen from Fig. 6(b) that the shadows do not affect the computation of the albedo. This figure represents the true albedo for the object surface, which satisfies the assumption of a Lambertian surface. Comparison of Figs. 7(b) and 8(b) clearly indicates that, owing to the blurring process, the recovered albedo does not give the true reflecting property of the surface when the blurring effect is not compensated. The resulting albedo is overly smooth.

To test the performance of our algorithm for a higher amount of blur, we next considered a blurring mask size of 9×9 with r = 4. We observed that our algorithm works well even for this higher amount of blur, as is evident from the results. For this experiment we considered images of size 148×128, a region that shows only the face of the stuffed dog, and the number of images was kept at eight, the same as used in the previous experiment. Figure 9(b) shows the simulated blurred observation corresponding to the focused (pinhole approximation) image depicted in Fig. 9(a). The image restored for the same observation by using the proposed approach is shown in Fig. 10(a). We notice again that there is considerable improvement in

Fig. 7. (a) Depth map obtained by using the standard PS method on the blurred observations. (b) Albedo for the surface of Jodu with a blur of 5×5 mask, obtained by using the standard PS on the blurred observations.

Fig. 8. (a) Recovered depth map obtained by using the proposed technique utilizing the knowledge of the PSF. (b) Estimated albedo.




the reconstructed images. The high-frequency details are clearly restored, as is evident from the protruding nose and eye boundaries of the dog. In Fig. 10(b) we show the result of image restoration with direct deconvolution. Comparing this image with that obtained by using the proposed approach shown in Fig. 10(a), once again we observe that the reconstruction with the proposed approach is much better.

The estimated depth map (Fig. 11) is also comparable with the true depth map shown in Fig. 12(a). The depth map calculated with the standard PS method by using the surface gradients recovered from the blurred images looks smooth, lacking depth variation [see Fig. 12(b)]. The albedo map recovered by using the proposed technique and the albedo map estimated by using PS on the blurred observations are shown in Figs. 13(b) and 13(a), respectively. As can be seen from the figures, the recovered albedo estimated from the blurred observations appears too smooth, indicating that the albedo is not well estimated when the observations are blurred and are not rectified.

B. Experiments with Unknown Blur
We now present the results for the more general case in which the blur is unknown and needs to be estimated along with the estimation of the scene structure and the image restoration. Here we consider experiments that use real images as well as synthesized ones. First we consider the use of synthetic images for validation purposes. For this experiment we generated a set of eight

Fig. 9. (a) Focused image of Jodu captured with light-source position (ps = −0.8389, qs = −0.7193). (b) Simulated blurred observation of Jodu obtained by using a mask size of 9×9 for the image shown in (a).

Fig. 10. Restored Jodu image obtained by using (a) the proposed method and (b) direct image deconvolution.

images of a spherical surface for different light-source positions. The sphere had a checkerboard-patterned albedo (see Fig. 14). The obtained images were then blurred by using a Gaussian blur mask of size 7×7 with a standard deviation of σ = 1 and were corrupted by adding Gaussian noise of zero mean and a standard deviation of 0.01. Figure 14(a) displays the synthesized image of the spherical surface for a light-source position ps = 0.45, qs = 0.80. We note that the observed image irradiance lies within the range 0 to 1. The blur is not assumed to be known during the restoration process. We used an initial σ of 0.6 for this experiment and a mask size of 13×13. The final estimated σ for this experiment is 1.0352, which is very close to the actual defocus blur. Figure 15 shows the efficacy of our algorithm for the estimation of the true texture from the

Fig. 11. Reconstructed depth map obtained by using the pro-posed technique.

Fig. 12. (a) True depth map without considering the blur. (b) Depth map obtained from direct application of PS on the blurred observations.

Fig. 13. (a) Surface albedo for Jodu computed from the blurred data. (b) Estimated albedo obtained by using the proposed technique.




blurred observation [see Fig. 14(b)]. The restored image is quite similar to the true image. We see that the boundary curves on each segment in the restored checkerboard image are sharper than in the blurred observation, indicating the restoration of high-frequency details.
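The synthetic observations of the Lambertian sphere can be generated along these lines. The sketch below renders a surface from its gradient fields, assuming the standard Lambertian reflectance map R(p, q) = (1 + p ps + q qs)/[√(1 + p² + q²) √(1 + ps² + qs²)]; the helper names, sign conventions, and the flat background outside the sphere are illustrative assumptions, not the authors' exact setup:

```python
import numpy as np

def render_lambertian(p, q, albedo, ps, qs):
    """Image irradiance of a Lambertian surface with gradient fields (p, q),
    an albedo map, and a distant source described by gradients (ps, qs)."""
    cos_i = (1.0 + p * ps + q * qs) / (
        np.sqrt(1 + p**2 + q**2) * np.sqrt(1 + ps**2 + qs**2))
    return albedo * np.clip(cos_i, 0.0, None)   # self-shadowed pixels clip to 0

def sphere_gradients(n, radius):
    """Gradients p = dz/dx, q = dz/dy of a hemisphere z = sqrt(r^2 - x^2 - y^2);
    the region outside the disk is treated as a flat frontal background."""
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2].astype(float)
    inside = x**2 + y**2 < radius**2
    z = np.sqrt(np.where(inside, radius**2 - x**2 - y**2, 1.0))
    p = np.where(inside, -x / z, 0.0)
    q = np.where(inside, -y / z, 0.0)
    return p, q, inside
```

With ps = qs = 0 the rendered intensity reduces to albedo/√(1 + p² + q²), which is maximal where the surface faces the camera, so the intensity stays within the range 0 to 1 for a unit albedo, as noted above.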

The estimated depth map [Fig. 16(b)] is also quite correct, as the intensity is highest at the center and decreases as we move away from it, which clearly reflects the shape of a hemisphere. The shape distortion seen in the depth map of Fig. 16(a), recovered from the blurred observations by using the standard PS method, clearly indicates the loss of depth details.

Finally, we consider the experiments with real data for blind restoration and structure estimation, for which we use the same object, Jodu, captured with eight different light-source positions. However, no attempt was made to bring the object into focus, and hence the observations are slightly blurred. As is common in real-aperture imaging, the blur due to defocus is modeled by a Gaussian PSF with variance σ². The blurred observations are then used to derive the fields p^(0), q^(0), and r^(0), which are used as the initial estimates for our algorithm, as discussed in Subsection 4.D. Again an initial value σ = 0.6 and a mask size of 13×13 (the same as that used in the previous experiment) were used to estimate the different fields and to restore the images iteratively. After every 100 iterations of the gradient-descent operation to estimate the surface normals, the new value of σ is calculated and is used again to refine the fields and the images. The algorithm is terminated when no further improvement in the estimate of σ is obtained. The final estimated σ for the given example was found to be 1.0577.

Fig. 14. (a) Synthesized image of a checkerboard-textured spherical surface for a light-source position (ps = 0.45, qs = 0.80). (b) Simulated observation obtained by using a Gaussian blur of σ = 1 and additive noise, corresponding to the figure in (a).

Fig. 15. Restored checkerboard image obtained by using the proposed technique.

Fig. 16. Recovered depth map obtained by using (a) the standard PS method and (b) the proposed method.

The results of the experiment are illustrated in the following figures. A blurred observation is depicted in Fig. 17(a). The restored Jodu image for the same observation is displayed in Fig. 17(b). As can be seen, the restored image looks sharper: observe the nose, mouth, and tongue

Fig. 17. (a) Observed image of Jodu with an arbitrary camera defocus for the same light-source position used for the image shown in Fig. 4(a). (b) Restored Jodu image obtained by using the proposed method.

Fig. 18. (a) Restored Jodu image obtained by using a standard MATLAB blind deconvolution tool. (b) Recovered albedo obtained by using the proposed method.

Fig. 19. Recovered depth map obtained by using (a) the standard PS method and (b) the proposed method.



regions. The blur that is clearly visible in Fig. 17(a) is well removed in Fig. 17(b). The restored image obtained through blind image deconvolution (using the MATLAB function "deconvblind") is shown in Fig. 18(a). For the MATLAB program, the PSF mask size and the initial value of σ were kept the same as those used in the proposed approach, and the number of iterations was again chosen as 100. As we can see from the figure, the restoration is quite poor. Similar conclusions about the efficacy of the proposed scheme can be drawn from Fig. 18(b) for the albedo recovery, where the details can be seen clearly.

Although it is difficult to distinguish much perceptual improvement in the depth map estimated by using the proposed technique from that obtained by using the standard PS method on the blurred observations, because of the use of gray levels to encode the depth map [see Figs. 19(a) and 19(b)], there is a definite perceptual difference when the depth is viewed as mesh plots. The corresponding plots are displayed in Fig. 20. As can be seen, the mesh plot shown in Fig. 20(a), corresponding to the standard PS method, is smoother than the plot for the proposed approach [see Fig. 20(b)]. There was also a noticeable change in the estimated p and q fields between these two methods. Since it is difficult to visualize the p and q fields for an arbitrarily shaped object, we display only the recovered depth maps.

6. CONCLUSIONS
We have described a method for simultaneous estimation of scene structure and blind image restoration from blurred photometric observations. The structural information is embedded within the observations, and, through the unified framework that we have described, we were able to recover the restored images as well as the structure. We modeled the surface gradients and the albedo as separate MRFs and used a suitable regularization scheme to estimate the different fields and the blur parameter alternately. No problem was encountered regarding the convergence of the proposed method during the experimentation. Since we are using an iterative method for the alternate estimation of parameters, the method is not suitable for real-time application. However, the computational demands are not very pressing, and it

Fig. 20. Recovered depth map shown as a mesh plot, obtained by using (a) the standard PS method and (b) the proposed method.

takes scarcely a few tens of seconds on a P-IV machine to restore the images when we use eight observations. Our approach uses known source positions while capturing images of the object under different light-source positions. Our continuing work involves estimating the source directions simultaneously while obtaining the restored image and the structure. Further, we have assumed the blur to be constant (shift invariant) in the current study. In real-aperture images, the blur is often shift varying owing to depth variation in the scene. We are currently investigating the issue of recovering a shift-varying blur kernel.

ACKNOWLEDGMENT
Partial funding support from the DST-sponsored Swarnajayanti project is gratefully acknowledged.

Corresponding author Subhasis Chaudhuri can be reached at the address on the title page or by e-mail, [email protected].

REFERENCES
1. B. K. P. Horn, "Shape from shading: a method for obtaining the shape of a smooth opaque object from one view," Ph.D. thesis (Massachusetts Institute of Technology, Cambridge, Mass., 1970).

2. K. Ikeuchi and B. K. P. Horn, "Numerical shape from shading and occluding boundaries," Artif. Intell. 17, 141–184 (1981).

3. A. P. Pentland, "Local shading analysis," IEEE Trans. Pattern Anal. Mach. Intell. 6, 170–187 (1984).

4. A. P. Pentland, "Shape information from shading: a theory about human perception," in Proceedings of the International Conference on Computer Vision (IEEE Computer Society Press, Los Alamitos, Calif., 1988), pp. 404–413.

5. P. S. Tsai and M. Shah, "Shape from shading using linear approximation," Image Vis. Comput. 12, 487–498 (1994).

6. R. J. Woodham, "Reflectance map techniques for analyzing surface defects in metal castings," Tech. Rep. 457 (MIT Artificial Intelligence Laboratory, Cambridge, Mass., 1978).

7. R. J. Woodham, "Photometric method for determining surface orientation from multiple images," Opt. Eng. (Bellingham) 19, 139–144 (1980).

8. K. Ikeuchi, "Determining surface orientations of specular surfaces by using the photometric stereo method," IEEE Trans. Pattern Anal. Mach. Intell. 3, 661–669 (1981).

9. W. M. Silver, “Determining shape and reflectance using




multiple images," M.S. thesis (Massachusetts Institute of Technology, Cambridge, Mass., 1980).
10. K. M. Lee and C. C. Jay Kuo, "Shape reconstruction from photometric stereo," in Proceedings of the International Conference on Computer Vision and Pattern Recognition (IEEE Computer Society Press, Los Alamitos, Calif., 1992), pp. 479–484.
11. C. Y. Chen, R. Klette, and R. Kakarala, "Albedo recovery using a photometric stereo approach," in Proceedings of the International Conference on Pattern Recognition (IEEE Computer Society Press, Los Alamitos, Calif., 2002), pp. 700–703.
12. Y. Iwahori, R. J. Woodham, M. Ozaki, H. Tanaka, and N. Ishi, "Neural network based photometric stereo with a nearby rotational moving light source," IEICE Trans. Inf. Syst. E-80-D, 948–957 (1997).
13. Y. Iwahori, R. J. Woodham, and A. Bagheri, "Principal component analysis and neural network implementation of photometric stereo," in Proceedings of the Workshop on Physics-Based Modeling in Computer Vision (IEEE Computer Society Press, Los Alamitos, Calif., 1995), pp. 117–125.
14. R. J. Woodham, "Gradient and curvature from the photometric stereo method including local confidence estimation," J. Opt. Soc. Am. A 11, 3050–3068 (1994).
15. Y. Iwahori, R. J. Woodham, Y. Watanabe, and A. Iwata, "Self calibration and neural network implementation of photometric stereo," in Proceedings of the International Conference on Pattern Recognition (IEEE Computer Society Press, Los Alamitos, Calif., 2002), pp. 359–362.
16. O. Drbohlav and R. Sara, "Unambiguous determination of shape from photometric stereo with unknown light sources," in Proceedings of the International Conference on Computer Vision (IEEE Computer Society Press, Los Alamitos, Calif., 2001), pp. 581–586.
17. R. Basri and D. Jacobs, "Photometric stereo with general unknown lighting," in Proceedings of the International Conference on Computer Vision and Pattern Recognition (IEEE Computer Society Press, Los Alamitos, Calif., 2001), pp. 374–381.
18. U. Sakarya and I. Erkmen, "An improved method of photometric stereo using local shape from shading," Image Vis. Comput. 21, 941–954 (2003).
19. J. J. Clark, "Active photometric stereo," in Proceedings of the International Conference on Computer Vision and Pattern Recognition (IEEE Computer Society Press, Los Alamitos, Calif., 1992), pp. 29–34.
20. J. R. A. Torreao, "A new approach to photometric stereo," Pattern Recogn. Lett. 20, 535–540 (1999).
21. G. McGunnigle and M. J. Chantler, "Rotation invariant classification of rough surfaces," IEE Proc. Vision Image Signal Process. 146, 345–352 (1999).
22. G. McGunnigle and M. J. Chantler, "Rough surface classification using point statistics from photometric stereo," Protein Eng. 21, 593–604 (2000).
23. G. McGunnigle and M. J. Chantler, "Modelling deposition of surface texture," Electron. Lett. 37, 749–750 (2001).
24. M. L. Smith, T. Hill, and G. Smith, "Gradient space analysis of surface defects using a photometric stereo derived bump map," Image Vis. Comput. 17, 321–332 (1999).
25. P. Hansson and P. Johansson, "Topography and reflectance analysis of paper surfaces using photometric stereo method," Opt. Eng. (Bellingham) 39, 2555–2561 (2000).
26. G. McGunnigle and M. J. Chantler, "Segmentation of machined surfaces," presented at the Irish Machine Vision and Image Processing Conference, Maynooth, Ireland, September 2001.
27. M. R. Banham and A. K. Katsaggelos, "Digital image restoration," IEEE Signal Process. Mag. 14(2), 24–41 (1997).
28. D. Rajan and S. Chaudhuri, "Simultaneous estimation of super-resolved scene and depth map from low resolution defocused observations," IEEE Trans. Pattern Anal. Mach. Intell. 25, 1102–1117 (2003).
29. A. N. Rajagopalan and S. Chaudhuri, "An MRF model based approach to simultaneous recovery of depth and restoration from defocused images," IEEE Trans. Pattern Anal. Mach. Intell. 21, 577–589 (1999).
30. D. Kundur and D. Hatzinakos, "Blind image deconvolution," IEEE Signal Process. Mag., March 1996, pp. 43–64.
31. Y. You and M. Kaveh, "A regularization approach to joint blind identification and image restoration," IEEE Trans. Image Process. 5, 416–428 (1996).
32. R. L. Lagendijk and J. Biemond, Iterative Identification and Restoration of Images (Kluwer Academic, Boston, Mass., 2001).
33. R. L. Lagendijk, J. Biemond, and D. E. Boekee, "Identification and restoration of noisy blurred images using the expectation maximization algorithm," IEEE Trans. Acoust., Speech, Signal Process. 38, 1180–1191 (1990).
34. K. T. Lay and A. K. Katsaggelos, "Image identification and restoration based on the expectation maximization algorithm," Opt. Eng. (Bellingham) 29, 436–445 (1990).
35. M. K. Ozkan, A. M. Tekalp, and M. I. Sezan, "POCS based restoration of space varying blurred images," IEEE Trans. Image Process. 3, 450–454 (1995).
36. M. I. Sezan and A. M. Tekalp, "Iterative image restoration with ringing suppression using POCS," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE Press, Piscataway, N.J., 1988), pp. 1300–1303.
37. M. I. Sezan and H. J. Trussell, "Prototype image constraints for set theoretic image restoration," IEEE Trans. Signal Process. 39, 2275–2285 (1991).
38. H. J. Trussell and M. R. Civanlar, "The feasible solution in signal restoration," IEEE Trans. Acoust., Speech, Signal Process. 32, 201–212 (1984).
39. Y. Yang, N. P. Galatsanos, and A. K. Katsaggelos, "Projection-based spatially adaptive reconstruction of block-transform compressed images," IEEE Trans. Image Process. 4, 896–908 (1995).
40. V. F. Candela, A. Marquina, and S. Serna, "A local spectral inversion of a linearized TV model for denoising and deblurring," IEEE Trans. Image Process. 12, 808–816 (2003).
41. B. Besserer, L. Joyeux, S. Boukir, and O. Buisson, "Reconstruction of degraded image sequences. Application to film restoration," Image Vis. Comput. 19, 503–516 (2001).
42. L. Bedini, A. Tonazzini, and S. Minutoli, "Unsupervised edge-preserving image restoration via a saddle point approximation," Image Vis. Comput. 17, 779–793 (1999).
43. B. K. P. Horn, Robot Vision (MIT Press, Cambridge, Mass., 1986).
44. S. Chaudhuri and A. N. Rajagopalan, Depth From Defocus: A Real Aperture Imaging Approach (Springer-Verlag, New York, 1999).
45. R. C. Dubes and A. K. Jain, "Random field models in image analysis," J. Appl. Statistics 16, 131–164 (1989).
46. J. Besag, "Spatial interaction and the statistical analysis of lattice systems," J. R. Stat. Soc. Ser. B Methodol. 36, 192–236 (1974).
47. M. Subbarao, "Parallel depth recovery by changing camera parameters," in Proceedings of the International Conference on Computer Vision (IEEE Computer Society Press, Los Alamitos, Calif., 1998), pp. 149–155.
48. T. F. Chan and C. K. Wong, "Convergence of the alternating minimization algorithm for blind deconvolution," Tech. Rep. 19, UCLA Computational and Applied Mathematics Series (University of California, Los Angeles, Calif., 1999).