
Digital Shallow Depth-of-Field Adapter

Kyuman Jeong†, Dongyeon Kim†, Soon-Yong Park‡, Seungyong Lee†

†Department of Computer Science & Engineering, POSTECH, Pohang, Korea
‡Computer Engineering Department, Kyungpook National University, Daegu, Korea

Abstract

In this paper, we propose a new method to synthesize shallow depth-of-field images from two input images with different aperture values. The basic approach is to estimate the depth map of a given scene using a DFD (depth-from-defocus) algorithm and to blur an input image according to the estimated depth map. The depth information estimated by DFD contains much noise and error, and the estimation is accurate only along the edges of the image. To overcome this limitation, we propose a depth map filling algorithm that uses a set of initial depth maps and a segmented image. Once depth map filling has been completed, the depth map can be fine-tuned by applying segment clustering and user interaction. Since our method blurs an input image according to the estimated depth information, it generates physically plausible result images with shallow depth-of-field.

Keywords: depth-of-field, depth-from-defocus, image blurring

1 Introduction

In photography, the depth-of-field is the range of distances from the camera within a scene for which the blurring in the image is tolerable [19, 25]. If all objects are in focus, as in Fig. 1(a), we say that the depth-of-field is deep. In shallow depth-of-field images, in contrast, only the main objects are in focus while the background objects are blurred, as in Fig. 1(c).

Images with a shallow depth-of-field (DOF) make viewers pay more attention to the main objects: the viewer's attention is immediately drawn to the sharply focused subjects while the background is intentionally blurred out. This technique has been widely used in photography for more than 150 years [18].

However, it is difficult to obtain shallow depth-of-field images using low-end digital cameras; doing so requires high-end (usually very expensive) lenses and camera bodies. In this research, we propose a new method to synthesize shallow depth-of-field images from deep depth-of-field images taken by point & shoot digital cameras or low-end digital SLR cameras.

A shallow depth-of-field adapter is used with digital camcorders to achieve a shallow depth-of-field effect [2]. The effect is difficult to achieve with a digital camcorder because the CCD size is relatively small compared to 35mm film or a digital still camera. Refer to [5, 6] for more information on the adapter. Our research was originally motivated by this adapter; the main difference is that our technique provides a software solution while the adapter is an optical device.

The main contribution of this paper is a new method for synthesizing shallow depth-of-field images from input images with a relatively deep depth-of-field. The synthesized images are physically plausible because the amount of blur varies with distance. We also propose a new depth filling method to enhance the quality of the depth map estimated by a depth-from-defocus (DFD) method [26].

2 Related Work

Creating extended (deep) depth-of-field images has been a challenging problem [28, 13, 14, 9]. Several commercial and free software packages are available, such as Helicon Focus [4] and CombineZ5 [1]. Their input is a stack of images with relatively shallow depths-of-field and various focus settings. In-focus regions are found in the input images and merged together to obtain an all-in-focus image. This technique is crucial especially in macro photography, where the depth-of-field is extremely shallow. This paper, on the contrary, deals with the reverse problem: we narrow down the depth-of-field of the input images to obtain shallow depth-of-field results.

It is very difficult to obtain shallow depth-of-field images using point & shoot digital cameras or low-end digital SLR cameras. Therefore, some photographers buy more expensive equipment, whereas others use image editing tools to fabricate photographs. Tutorials and articles on creating shallow depth-of-field images from a pan-focus image can be found on the internet [7, 8, 16, 12]. The basic approach is to select the foreground objects with a selection tool and then blur only the background; Fig. 1 shows an example. The main drawback is that this approach blurs the background regardless of the depth information. Consequently, the result image looks artificial because the amount of blurring is the same even where the real depth varies a lot.

Photoshop has a specialized plug-in named Depth of Field Generator Pro [3], which generates shallow depth-of-field images. Its basic idea and interface are similar to the other image-editing methods for synthesizing shallow depth-of-field images. The quality of the result images is mainly determined by the accuracy of the depth map manually created by the user.

In computational photography [23, 24], techniques for controlling depth-of-field and digital refocusing have been proposed [9, 20, 21].


Figure 1: Creation of a shallow depth-of-field image using Paint Shop Pro [16]: (a) input image; (b) foreground selection; (c) shallow depth-of-field result.

The goal of these techniques is similar to ours in that images from digital cameras are processed to synthesize new images with a controlled depth-of-field. However, they require several or many input images to resample a new image, while our method needs only two images and uses estimated depth information to generate physically plausible results.

To obtain physically correct shallow depth-of-field images, the input image must be blurred according to the distance from the viewer. Hence, depth estimation from the input images is one of the key components of our approach. In general, it is difficult to estimate depth information from a single input image. Depth-from-defocus is a technique for estimating the depth information of a scene from a set of images [22, 26, 31]. Its basic idea is that there is a direct relationship among the depths, the camera parameters, and the amounts of blur in the images. From this relationship, we can derive the depth information by measuring the amount of blurring when the camera parameters are fixed.

3 Depth-of-Field in Photography

In optics, it is known that a collimated beam traveling parallel to the axis of a converging lens focuses at a distance f, called the focal length [19, 25]. The plane perpendicular to the lens axis at distance f from the lens is called the focal plane. Let S1 and S2 be the distances from the lens to the object and from the lens to the image plane, respectively, as shown in Fig. 2. Given that the thickness of the lens is negligible, the thin lens formula [19, 25] gives

$$\frac{1}{S_1} + \frac{1}{S_2} = \frac{1}{f}. \qquad (1)$$

Figure 2: Image forming process.
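As a quick sanity check on Eq. (1), the image distance S2 can be solved for in closed form. Below is a minimal sketch in Python; the numeric values are illustrative only, not taken from the paper.

```python
def image_distance(s1_mm, f_mm):
    """Solve the thin lens formula 1/S1 + 1/S2 = 1/f for S2."""
    return 1.0 / (1.0 / f_mm - 1.0 / s1_mm)

# Example: a 50 mm lens focused on an object 2000 mm away forms a
# sharp image about 51.3 mm behind the lens.
print(image_distance(2000.0, 50.0))  # ~51.28 mm
```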

If the distance to an object deviates from S1, the object does not form a clear image: a point on the object forms a circle on the image plane, as shown in Fig. 3. We call this circle a circle of confusion (also known as a disk of confusion or a blur circle). In optics, a circle of confusion is defined as the optical spot formed by a cone of light rays from a lens that does not come to a perfect focus [19, 25]. Fig. 3 demonstrates that the diameter of the circle of confusion grows as the object moves away from the in-focus distance S1. Therefore, the diameter of the circle of confusion can be used to measure the amount of blurring in the image. The blurring model used in Section 6 is derived from the definition of the circle of confusion.

Figure 3: The circle of confusion diameter is c when the object lies at distance Sc.

Due to the circle of confusion, some objects form clear images while others are blurred when projected onto the image plane. The depth-of-field is the range of distances from the lens for which the size of the circle of confusion is less than the resolution of the human eye. From the lens formula in Eq. (1), we can derive the depth-of-field formula [19],

$$\mathrm{DoF} = d_F - d_N, \qquad d_F = \frac{s}{1 - ac(s-f)/f^2}, \qquad d_N = \frac{s}{1 + ac(s-f)/f^2}, \qquad (2)$$

where a is the aperture value, s is the distance of the object from the lens, and c is the maximum permissible circle of confusion. From Eq. (2), we find that two factors of a lens control the depth-of-field: the focal length and the aperture value. To obtain shallow depth-of-field images, photographers use bright telephoto lenses (large aperture and long focal length). In photography, instead of the aperture value a, the F value is usually used, which is defined as f/a [19, 25].
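For concreteness, Eq. (2) is straightforward to evaluate directly. The sketch below mirrors the formula exactly as written above; the choice of units and the sample values are assumptions for illustration only.

```python
def depth_of_field(s, f, a, c):
    """Evaluate Eq. (2): near limit d_N, far limit d_F, and DoF.

    s: object distance, f: focal length, a: aperture value,
    c: maximum permissible circle of confusion (consistent units).
    """
    x = a * c * (s - f) / f**2
    d_far = s / (1.0 - x)
    d_near = s / (1.0 + x)
    return d_near, d_far, d_far - d_near

# Illustrative values only (millimetres), not the paper's settings.
print(depth_of_field(s=2000.0, f=50.0, a=8.0, c=0.03))
```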


Among the parameters in Eq. (2), f, a, and c are intrinsic parameters of the camera body and lens. When these three parameters are fixed for a given image, the only variable remaining is the distance s of an object from the lens. Hence, from Fig. 3 and Eq. (2), we see that the distance of a surface can be calculated analytically from the amount of blur. Since it is difficult to measure the amount of blur in a single image, depth-from-defocus algorithms use two or more images for depth estimation [26, 27, 29, 31, 17].

4 Overall Process

The goal of this research is to synthesize physically plausible shallow depth-of-field images from images taken by a point & shoot camera or a low-end digital SLR camera. Based on the theory in Section 3, our approach consists of two parts. First, we estimate the distances of the surfaces in an image from the camera to obtain a depth map (Section 5). Second, shallow depth-of-field images are generated by applying an image blurring model to the input image with the depth map (Section 6).

Fig. 4 shows the overall process of our system. The input is two images photographed at the same location with different aperture values. Since the difference in image blur is used for depth estimation, the two input images should have as much blur difference as possible to obtain accurate depth information. We recommend taking the first image (I1) with the maximum F value of the camera and the second image (I2) with the minimum F value. To take such images, we can use the aperture-priority mode (A mode) and manual mode (M mode) of the camera.

We use a DFD algorithm to obtain the depth information from the blur difference between the input images. However, in general, DFD algorithms do not work properly in regions where the blur difference is not noticeable [22, 26, 31]. In addition, the estimated depths are more accurate along the edges of objects, while the depth estimation tends to fail in object interiors [10]. To resolve these limitations, we propose a depth map filling algorithm which uses a segmented image and several depth estimations at different resolutions. The quality of the final depth map can then be enhanced by applying segment clustering and user interaction.

Finally, a result image with a shallow depth-of-field is synthesized by blurring the input image according to the estimated depth information. The result image is physically plausible because the amounts of image blur are controlled by the estimated depths. We explain each step in more detail in the following sections.

5 Depth Map Construction

5.1 Initial depth map construction

In this paper, we use the DFD technique proposed by Subbarao et al. [26] because it involves simple local operations and can be easily implemented. Their technique calculates the depths by processing image defocus information with spatial-domain convolution/deconvolution transforms.

Figure 4: Flowchart of our system.

The technique can handle two images photographed with different camera settings. In our system, we use two images taken with different aperture values, as shown in Fig. 5.

Figure 5: Two input images taken with different aperture values: (a) F value 32; (b) F value 2.8.

Fig. 6 shows examples of estimated depth maps. The depth values estimated by the DFD technique [26] are originally absolute distances from the camera in the real world. We normalize the absolute distances to relative depths between 0 and 1 for further processing. In the ideal case, white pixels in the depth map have the value zero, which represents the distance to the focused objects; black pixels have the value one, corresponding to the distance to the farthest objects; gray pixels represent distances to objects in between. The relative depth values are converted back to absolute distances in the blurring step (Section 6).
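A minimal sketch of this normalization step, assuming the DFD output is a float array of absolute distances:

```python
import numpy as np

def normalize_depths(depth_abs):
    """Map absolute DFD distances to relative depths in [0, 1],
    where 0 is the focused (nearest) depth and 1 the farthest."""
    d = depth_abs.astype(np.float64)
    return (d - d.min()) / (d.max() - d.min())
```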

Unfortunately, since the depth estimation from DFD may contain errors, a pixel in the depth map is assigned an almost white value in two different cases. In the first case, the pixel represents the depth of a foreground object in the scene; here the assigned depth value is accurate. The white pixels on the first PC set in Fig. 6(a) belong to this case. In the second case, the contrast variations of the pixels are negligible, so the depth estimation becomes inaccurate and almost white depth values are assigned to the pixels. The white pixels on the second, third, and fourth PC sets in Fig. 6(a) correspond to this case. Consequently, in a depth map, the values around the edges of objects are more accurate than those in object interiors, where the pixel values are almost constant.

Since the DFD technique [26] uses convolution/deconvolution filters, the accuracy of the estimated depth map also depends on the filter size. The effect of the filter size on the depth map is demonstrated in Fig. 6. With a small filter, the regions around object edges where the depth estimation is reliable shrink, but the accuracy of the depth values increases. Conversely, with a large filter, the reliable estimation around the edges covers larger regions, but the depth accuracy within those regions decreases.

Figure 6: Initial depth maps with different filter sizes: (a) 8×8; (b) 16×16; (c) 24×24; (d) 32×32.

5.2 Input image segmentation

To improve the quality of the estimated depth map, especially in object interiors, we use image segmentation. The underlying idea is that an object part or a surface captured by image segmentation probably has an almost constant depth. Since we have better depth estimation around object edges, we can use the depth information at the segment boundary to determine a single depth for the segment, which can also be used for its interior.

Of the two input images I1 and I2, we use I1 for image segmentation because it has the deeper depth-of-field; I2 is less suited because its background objects are blurred. We could also use the initial depth map to help the segmentation, but the depth map contains much noise and would unnecessarily cause heavy over-segmentation. Hence, we simply use the input image I1.

We obtain the segmentation by applying mean-shift clustering [11] to I1. To reduce noise in I1, we perform median filtering [30] before the mean-shift clustering. Fig. 7(a) shows the result of median filtering and Fig. 7(b) shows the segmentation result; a sketch of this stage follows Fig. 7.

Figure 7: Input image segmentation: (a) result of median filtering; (b) segmentation result.
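Below is a rough sketch of this preprocessing and segmentation stage, using OpenCV's median blur and mean-shift filtering as stand-ins for the filters of [30] and the mean-shift clustering of [11]; the parameter values are illustrative guesses, not the paper's settings.

```python
import cv2
import numpy as np
from skimage.measure import label

def segment_input_image(i1_bgr, median_ksize=5, sp=16, sr=24):
    """Median-filter I1, run mean-shift filtering, then recover integer
    segment labels as connected regions of constant filtered color."""
    denoised = cv2.medianBlur(i1_bgr, median_ksize)
    shifted = cv2.pyrMeanShiftFiltering(denoised, sp, sr)
    # Hash each (B, G, R) color to one integer so identically colored,
    # connected pixels form one segment.
    keys = (shifted[..., 0].astype(np.int64) * 65536
            + shifted[..., 1].astype(np.int64) * 256
            + shifted[..., 2].astype(np.int64))
    return label(keys, connectivity=1, background=-1)
```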

5.3 Depth map filling

For each segment of image I1, we can determine from the filter size used in the DFD technique a boundary region that has accurate depth information. We found that the depth estimation is usually accurate in the region where the distance from the segment boundary is within half of the filter size. We call such a boundary region of a segment a filled region, while the remaining interior regions of segments are called unfilled regions. In Fig. 8, black pixels represent the segment boundaries and gray pixels the filled regions; white pixels represent unfilled regions, where the depth estimation is inaccurate. A sketch of this partition follows Fig. 8.

Figure 8: Regions with accurate depth estimation around segment boundaries, obtained by applying the DFD result with the 16×16 filter of Fig. 6(b) to the segmented image of Fig. 7(b).
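The filled/unfilled partition can be computed with a distance transform. Below is a minimal sketch under the assumption that the segment map is an integer label image.

```python
import numpy as np
from scipy import ndimage

def filled_region_mask(labels, filter_size):
    """Return True for pixels within filter_size/2 of a segment
    boundary, i.e. the 'filled' region whose DFD depths are trusted."""
    # A pixel lies on a boundary if any 4-neighbor has another label.
    boundary = np.zeros(labels.shape, dtype=bool)
    boundary[:-1, :] |= labels[:-1, :] != labels[1:, :]
    boundary[1:, :] |= labels[1:, :] != labels[:-1, :]
    boundary[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    boundary[:, 1:] |= labels[:, 1:] != labels[:, :-1]
    # Euclidean distance of every pixel to the nearest boundary pixel.
    dist = ndimage.distance_transform_edt(~boundary)
    return dist <= filter_size / 2.0
```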

When we determine a single depth for each segment, we could simply ignore the depth information estimated in unfilled regions. However, as Fig. 8 shows, an unfilled region may cover the majority of a segment, and consequently the depth values in the filled region may not be sufficient to calculate a representative depth for the segment. To resolve this problem, we propose a two-step depth filling technique for unfilled regions.

In the first step, we obtain accurate depth values for the filled regions using a small filter size for the DFD technique; an 8×8 or 16×16 filter produced good results in our experiments. In the second step, the depth values in the unfilled regions are filled using the depth information estimated with larger filter sizes. When the filter size increases, the filled regions grow and come to contain the previously unfilled regions. Although the depth estimation is less accurate with a larger filter, the depth values for a region become more accurate if the region changes from unfilled to filled with the filter size change. Based on this observation, we average the depth values from the larger filter sizes to determine the depth values of the unfilled regions. Since we used a 16×16 filter for the depth map in Fig. 8, three depth maps with 24×24, 32×32, and 48×48 filters were used for the depth map filling.

Once the depth filling for the unfilled regions has been conducted, all pixels in a segment are assigned the same new depth, obtained by averaging all depth values in the segment. This updated depth map carries more meaningful depth information with less noise than the initial depth map from the DFD technique.
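Under some simplifying assumptions (one fine-filter depth map, a list of coarse-filter ones, and the filled mask from above), the two filling steps plus the per-segment averaging reduce to a few lines:

```python
import numpy as np

def fill_and_flatten_depths(labels, depth_fine, coarse_depths, filled):
    """Sketch of Sec. 5.3: trust the fine-filter depths in the filled
    region, average the coarse-filter depths elsewhere, then assign
    each segment the mean depth of its pixels."""
    combined = np.where(filled, depth_fine, np.mean(coarse_depths, axis=0))
    out = np.empty_like(combined)
    for seg in np.unique(labels):
        mask = labels == seg
        out[mask] = combined[mask].mean()
    return out
```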

5.4 Segment clustering and depth map editing

As shown in Fig. 7(b), we use over-segmentation for the depth map update because an exact segmentation of an image into separate objects and surfaces is difficult to achieve. Consequently, an object in an image can be divided into several segments with similar depths. To remove such unnecessary depth variations, we cluster neighboring segments whenever the difference of their depth values is less than a given threshold. For the segment clustering, we use the face clustering technique proposed by Garland et al. [15]; a simple stand-in is sketched below.
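The paper uses hierarchical face clustering [15]; purely as an illustration of the merging criterion, the following union-find sketch greedily merges adjacent segments whose representative depths differ by less than a threshold.

```python
import numpy as np

def cluster_segments(labels, seg_depth, threshold):
    """Greedily merge neighboring segments with similar depths.
    seg_depth maps a segment label to its representative depth."""
    parent = {int(s): int(s) for s in np.unique(labels)}

    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]  # path compression
            s = parent[s]
        return s

    # Collect horizontally and vertically adjacent label pairs.
    pairs = set(zip(labels[:, :-1].ravel().tolist(),
                    labels[:, 1:].ravel().tolist()))
    pairs |= set(zip(labels[:-1, :].ravel().tolist(),
                     labels[1:, :].ravel().tolist()))
    for a, b in pairs:
        ra, rb = find(a), find(b)
        if ra != rb and abs(seg_depth[ra] - seg_depth[rb]) < threshold:
            parent[rb] = ra  # merge b's cluster into a's
    return np.vectorize(find)(labels)
```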

Although we have enhanced the accuracy of the initial depth map, the updated depth values can still be inaccurate, which can be corrected by user interaction in a final step. Through this interaction, several segments can be clustered into one when they should be merged, a segment can be partitioned in two when it needs to be split, and the depth value of a segment can be changed when it is inaccurate.

The depth filling of Section 5.3 usually generates a reasonable depth map, and the number of segments is reduced by the segment clustering. Hence, only a little user interaction, touching a few segments, was needed in our experiments. Fig. 9 shows the final depth map obtained by depth filling and user interaction.

Figure 9: Final depth map.

6 Image Blurring

According to Fig. 3, an image can be considered the result of convolving an all-in-focus image with a point spread function g(x, y, σ(x, y)), where σ(x, y) is the blur parameter at a pixel [22, 26]. In most cases, the point spread function can be approximated by the Gaussian function

$$g_\sigma(x, y) = \frac{1}{\pi \sigma^2}\, e^{-\frac{x^2 + y^2}{\sigma^2}}. \qquad (3)$$

The relationship among the lens parameters, depth, and blur parameter has been studied in optics [19, 25]. In this paper, we use the formula derived by Pentland [22],

$$D = \frac{f v_0}{v_0 - f - \sigma k F}, \qquad (4)$$

where D is the depth, f is the focal length, F is the F-number of the lens, v0 is the distance between the lens and the image plane, σ is the blur parameter, and k is the proportionality coefficient between the blur circle radius and σ. From Eq. (4), we can express σ as

$$\sigma = \frac{v_0 - f}{kF} - \frac{f v_0}{kF\, D}. \qquad (5)$$

In Eq. (5), all parameters except the depth D are intrinsic parameters of the camera and lens. Given that the camera parameters are known, we can use the depth D to determine the blur parameter σ, which is then used in the point spread function of Eq. (3). By blurring each pixel with the point spread function obtained from the depth map, we generate the result image with a shallow depth-of-field. We apply the blurring to input image I1, which has the deeper depth-of-field and is thus closer to an all-in-focus image than I2.

Our final depth map contains relative depth values, while Eq. (5) needs absolute distances. When taking the photographs, we roughly measure the absolute distances from the camera of two objects: the foreground main object and the farthest background object. By interpolating between these absolute distances, the relative depth values in the depth map are converted to absolute distances.
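Putting Eqs. (3)-(5) together, the synthesis stage can be sketched as below. The linear relative-to-absolute depth conversion and the level-stack approximation of per-pixel Gaussian blurring are our own simplifications, and d_near, d_far, v0, and k are assumed to be known camera/scene constants.

```python
import numpy as np
import cv2

def blur_parameter(depth, f, v0, F, k=1.0):
    """Eq. (5): blur parameter sigma from absolute depth D."""
    return (v0 - f) / (k * F) - (f * v0) / (k * F * depth)

def shallow_dof(i1, rel_depth, d_near, d_far, f, v0, F_target, k=1.0,
                n_levels=8):
    """Blur I1 per pixel according to the depth map (sketch).

    Relative depths are first interpolated to absolute distances;
    per-pixel Gaussian blurring is then approximated by picking, for
    each pixel, the nearest image from a small stack of uniformly
    blurred copies of I1.
    """
    depth_abs = d_near + rel_depth * (d_far - d_near)
    sigma = np.clip(blur_parameter(depth_abs, f, v0, F_target, k), 0.0, None)
    levels = np.linspace(0.0, max(float(sigma.max()), 1e-6), n_levels)
    stack = [i1.astype(np.float32)]
    stack += [cv2.GaussianBlur(i1.astype(np.float32), (0, 0), s)
              for s in levels[1:]]
    # Assign each pixel the stack level whose sigma is closest.
    idx = np.argmin(np.abs(sigma[None, :, :] - levels[:, None, None]), axis=0)
    out = np.empty_like(stack[0])
    for i in range(n_levels):
        out[idx == i] = stack[i][idx == i]
    return np.clip(out, 0, 255).astype(np.uint8)
```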

Figs. 10 and 11 show image blurring results: Fig. 10 mimics the blurring effect of F value 1.0, while Fig. 11 corresponds to F value 4.0. Note that the F value is inversely proportional to the aperture diameter; the smaller the F value, the more the image is blurred. The difference in the amount of blur is especially visible on the window blind.

7 Experimental Results

For the experiments, we used a digital SLR camera, the Canon EOS-20D, with two lenses: a Canon EF28-70 2.8L and a Canon EF70-200 2.8L IS. The inputs of Figs. 5 and 12 were taken with the Canon EF70-200 2.8L IS, and the inputs of Fig. 13 with the Canon EF28-70 2.8L. The minimum F value of both lenses is 2.8; the maximum F values of the Canon EF28-70 and Canon EF70-200 2.8L IS are 22 and 32, respectively. All input images used in this paper were taken with the maximum or minimum F values of the lenses.


Figure 10: Image blurring result. Target F value is 1.0.

Figure 11: Image blurring result. Target F value is 4.0.


We have experimented with our system in different situations. Fig. 12 shows an example applied to a portrait. The F values of Figs. 12(a) and 12(b) are 32 and 2.8, respectively. From this input pair, we generated two images with target F values of 1.8 and 1.0. Since the smallest F value of the Canon EF70-200 2.8L IS is 2.8, it is impossible to take photographs with F values of 1.0 or 1.8 using this lens. With our system, we can synthesize shallow depth-of-field images at those F values, as shown in Figs. 12(c) and 12(d).

Fig. 13 shows an example in an indoor environment. This example verifies that our algorithm also works properly in a complex environment.

8 Conclusion

In this paper, we have proposed a new method to synthesize shallow depth-of-field images from two input images with different aperture values. To obtain physically plausible result images, we estimate the depth information of the surfaces in the image using a DFD technique. Shallow depth-of-field images are then obtained by controlling the amount of blurring of the image pixels according to the depth information.

One avenue for future work is to improve the DFD technique to obtain a better initial depth map, because the accuracy of the depth map affects the quality of the result images. In our approach, we assume that real lenses obey the thin lens formula. This assumption, however, is not physically correct. Exploiting the characteristics of real lenses to simulate physically correct image blurring is another direction for further exploration.

Acknowledgments

This research was supported in part by the Korean Ministry of Information and Communication through the ITRC support program.

References

[1] CombineZ5. http://www.hadleyweb.pwp.blueyonder.co.uk/.

[2] Depth-of-field adapter. http://en.wikipedia.org/wiki/Depth-of-field-adapter/.

[3] Depth of Field Generator Pro. http://www.dofpro.com/.

[4] Helicon focus. http://heliconfilter.com/.

[5] Letus35. http://www.letus35.com/.

[6] SGPro. http://www.sgpro.co.uk/.

[7] Shallow DOF tutorial. http://www.digicamhelp.com/processing-photos/advanced-editing/create-dof2.php.

[8] Simulating shallow depth of field with the GIMP. http://www.gimpguru.org/Tutorials/SimulatedDOF/.

[9] A. Agarwala, M. Dontcheva, M. Agrawala, S. Drucker, A. Colburn, B. Curless, D. Salesin, and M. Cohen. Interactive digital photomontage. ACM Transactions on Graphics, 23:294–302, 2004.

[10] S. Cho, W. J. Tam, F. Speranza, R. Renaud, N. Hur, and S. I. Lee. Depth maps created from blur information using images with focus at near and at far. Proceedings of SPIE, 6055:462–473, 2006.

[11] D. Comaniciu and P. Meer. Mean shift: A robust approach toward feature space analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24:603–619, 2002.

[12] R. Cosgrove. Creating depth of field. Digital Photography Buyer & User, page 89, 2003.

[13] B. Forster, D. V. D. Ville, J. Berent, D. Sage, and M. Unser. Complex wavelets for extended depth-of-field: A new method for the fusion of multichannel microscopy images. Microscopy Research and Technique, 65:33–42, 2004.

[14] B. Forster, D. V. D. Ville, J. Berent, D. Sage, and M. Unser. Extended depth-of-focus for multi-channel microscopy images: A complex wavelet approach. Proceedings of the Second 2004 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pages 660–663, 2004.

[15] M. Garland, A. Willmott, and P. S. Heckbert. Hierarchical face clustering on polygonal surfaces. Proceedings of the 2001 Symposium on Interactive 3D Graphics, pages 49–58, 2001.


[16] R. Lacey. Creating depth of field in Paint Shop Pro. http://graphicssoft.about.com/od/paintshoppro/ss/rldepthoffield.htm.

[17] J. Lin, C. Zhang, and Q. Shi. Estimating the amount of defocus through a wavelet transform approach. Pattern Recognition Letters, 25:407–411, 2004.

[18] B. London, J. Upton, K. Kobre, and B. Brill. Photography, 8th Edition. Prentice Hall, 2004.

[19] H. M. Merklinger. The INs and OUTs of FOCUS. Internet edition available in PDF at http://www.trenholm.org/hmmerk/.

[20] R. Ng. Fourier slice photography. ACM Transactions on Graphics, 24:735–744, 2005.

[21] R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan. Light field photography with a hand-held plenoptic camera. Stanford University Computer Science Tech Report, 2005.

[22] A. P. Pentland. A new sense for depth of field. IEEE Transactions on Pattern Analysis and Machine Intelligence, 9:523–531, 1987.

[23] R. Raskar and J. Tumblin. SIGGRAPH 2005 course notes on computational photography. SIGGRAPH 2005 Course Notes, 2005.

[24] R. Raskar, J. Tumblin, M. Levoy, and S. Nayar. SIGGRAPH 2006 course notes on computational photography. SIGGRAPH 2006 Course Notes, 2006.

[25] S. F. Ray. Applied Photographic Optics, 3rd Edition. Focal Press, 1988.

[26] M. Subbarao and G. Surya. Depth from defocus: A spatial domain approach. International Journal of Computer Vision, 13:271–294, 1994.

[27] G. Surya. Three-dimensional scene recovery from image defocus. Ph.D. thesis, Dept. of Electrical Engineering, SUNY at Stony Brook, 1994.

[28] S. C. Tucker, W. T. Cathey, and E. R. Dowski. Extended depth of field and aberration control for inexpensive digital microscope systems. Optics Express, 4:467, 1999.

[29] M. Watanabe and S. K. Nayar. Rational filters for passive depth from defocus. International Journal of Computer Vision, 27:203–225, 1998.

[30] B. Weiss. Fast median and bilateral filtering. ACM Transactions on Graphics, 25:519–526, 2006.

[31] D. Ziou and F. Deschenes. Depth from defocus estimation in spatial domain. Computer Vision and Image Understanding, 81:143–165, 2001.


Figure 12: Shallow depth-of-field example of a portrait: (a) input image with F value 32; (b) input image with F value 2.8; (c) result with target F value 1.8; (d) result with target F value 1.0.

Figure 13: Shallow depth-of-field example of an indoor environment: (a) input image with F value 22; (b) input image with F value 2.8; (c) result with target F value 1.8; (d) result with target F value 1.0.