Under Water Image Enhancement using Fusion Techniques

December 30, 2012

Image enhancement technique using a fusion algorithm; this document contains a basic description of, and code for, the algorithm.


Contents

Fusion Algorithm for Underwater Image Enhancement
0.1 Introduction
0.2 Weight Map Calculation
0.2.1 Contrast
0.2.2 Saliency
0.2.3 Well-exposedness
0.2.4 Local Contrast Measure
0.3 Image Enhancement
0.4 Code
0.5 Results
References

Fusion Algorithm for Underwater Image Enhancement

0.1 Introduction

Image fusion is the process of combining information from two or more images of a scene into a single enhanced image. The aim of the fusion process is to integrate complementary data in order to enhance the information in the respective source images. The method is based on the one described in the paper by Ancuti et al., 2012, with small changes. Here the two input images of the fusion process are derived from the gray world algorithm and a global contrast stretching algorithm. The final image is a weighted blend of the two input images.
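A minimal NumPy sketch of deriving the two fusion inputs is shown below. The function names and the percentile limits of the stretch are illustrative assumptions, not the exact implementation used here.

```python
import numpy as np

def gray_world(img):
    # Gray-world white balance: scale each channel so that its mean
    # equals the mean over all channels. img is float RGB in [0, 1].
    means = img.reshape(-1, 3).mean(axis=0)
    return np.clip(img * (means.mean() / means), 0.0, 1.0)

def contrast_stretch(img, lo_pct=1.0, hi_pct=99.0):
    # Global contrast stretch: map the [lo, hi] percentile range of each
    # channel onto [0, 1], clipping values outside the range.
    out = np.empty_like(img)
    for c in range(img.shape[-1]):
        lo, hi = np.percentile(img[..., c], [lo_pct, hi_pct])
        out[..., c] = np.clip((img[..., c] - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
    return out
```

After `gray_world`, the three channel means coincide; after `contrast_stretch`, each channel spans the full [0, 1] range.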

0.2 Weight Map Calculation

A weight map is a scalar image derived from an input image to aid in the fusion process. One weight map is derived from each input image. The weights are calculated based on some local or global feature of each pixel.


Some of the features commonly used are contrast, saturation and exposedness. For each pixel, information from the different measures is combined into a scalar weight map using simple additive or multiplicative functions.

0.2.1 Contrast

The contrast of the image is estimated by applying a Laplacian filter to each channel of the image and taking the absolute value of the filter response. The Laplacian filter enhances the edges and texture in the image: edges and textured regions give high values for the contrast measure, while homogeneous regions give low values. However, this measure is not capable of distinguishing between ramp and flat regions of the image; it is only capable of distinguishing step edges. For underwater images this map will predominantly carry low weights.
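A sketch of this measure follows; the specific 3x3 Laplacian kernel and replicate padding are assumptions made for illustration.

```python
import numpy as np

def laplacian_contrast(channel):
    # Absolute response of a 3x3 Laplacian filter with replicate padding.
    p = np.pad(channel, 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * p[1:-1, 1:-1])
    return np.abs(lap)

def contrast_weight(img):
    # Sum the per-channel responses into one scalar weight map.
    return sum(laplacian_contrast(img[..., c]) for c in range(img.shape[-1]))
```

On a flat region the response is zero; a step edge produces a strong response confined to the edge, which is exactly why ramp regions are missed.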

Figure 1: Contrast weights for the input images. Panels: (a) input image 1, (b) input image 2, (c) weight 1, (d) weight 2.


0.2.2 Saliency

The saliency weight map aims to emphasize discriminating objects that lose prominence in the underwater scene; the saliency algorithm highlights such objects. A common method to detect saliency is to measure the contrast difference between an image region and its surroundings, known as center-surround contrast. The image saliency algorithm is taken from the paper by Achanta et al., 2009.
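A rough NumPy sketch of the frequency-tuned idea follows. Note that the paper operates in Lab color space with a small Gaussian blur; this sketch uses RGB and a box blur for brevity, which is an approximation.

```python
import numpy as np

def box_blur(img, r=2):
    # Box blur as a stand-in for the small Gaussian blur in the paper.
    p = np.pad(img, ((r, r), (r, r), (0, 0)), mode="edge")
    k = 2 * r + 1
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def saliency(img):
    # Frequency-tuned saliency: distance between the mean image colour
    # and the blurred colour at each pixel.
    mean_color = img.reshape(-1, img.shape[-1]).mean(axis=0)
    return np.linalg.norm(box_blur(img) - mean_color, axis=-1)
```

A uniform image has zero saliency everywhere; a patch whose colour differs from the scene average scores high.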

Figure 2: Saliency weights for the input images. Panels: (a) input image 1, (b) input image 2, (c) weight 1, (d) weight 2.


0.2.3 Well-exposedness

Looking at just the raw intensities within a channel reveals how well a pixel is exposed. The aim is to retain pixels that are neither over- nor under-exposed. Each pixel is weighted by how close it is to the mid-intensity value of 128 using a Gaussian function. The Gaussian function is applied to each color channel separately, and the results are multiplied or added depending on the requirements of the application.
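A sketch on intensities normalized to [0, 1] (so 128 becomes 0.5); the value sigma = 0.2 follows the exposure-fusion convention of Mertens et al., 2007 and is an assumption here, and channels are combined multiplicatively.

```python
import numpy as np

def exposedness(img, sigma=0.2):
    # Gaussian weight around mid-intensity 0.5 (128 on an 8-bit scale),
    # computed per channel; per-channel weights are multiplied together.
    w = np.exp(-((img - 0.5) ** 2) / (2.0 * sigma ** 2))
    return np.prod(w, axis=-1)
```

A mid-gray pixel gets the maximum weight of 1, while a fully dark or saturated pixel is strongly suppressed.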

Figure 3: Exposedness weights for the input images. Panels: (a) input image 1, (b) input image 2, (c) weight 1, (d) weight 2.


0.2.4 Local Contrast Measure

For underwater images the global contrast measure is not sufficient to effectively represent the contrast. A local contrast measure better represents the contrast at a pixel location. The impact of this measure is to strengthen the local contrast, capturing ramp transitions in highlighted and shadowed parts which are not captured by the global contrast method.

The local contrast measure is computed as the deviation between the pixel luminance level and its local average over a pre-defined neighborhood. This can easily be computed by first obtaining a low-pass filtered version of the image: the absolute value of the difference between the image and its low-pass version is used as the local contrast measure map.
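The two steps above (low-pass filter, then absolute difference) can be sketched as follows; the box filter and the neighborhood radius are illustrative assumptions.

```python
import numpy as np

def local_contrast(lum, r=3):
    # Low-pass the luminance with a (2r+1)^2 box average, then take the
    # absolute deviation of each pixel from its local average.
    p = np.pad(lum, r, mode="edge")
    k = 2 * r + 1
    avg = np.zeros_like(lum)
    for dy in range(k):
        for dx in range(k):
            avg += p[dy:dy + lum.shape[0], dx:dx + lum.shape[1]]
    avg /= k * k
    return np.abs(lum - avg)
```

Unlike the Laplacian contrast, this measure responds over the whole width of a ramp, not only at its step edges.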

Figure 4: Local contrast weights for the input images. Panels: (a) input image 1, (b) input image 2, (c) weight 1, (d) weight 2.

The weight maps are normalized so that the sum of the weights at each pixel location is 1:

W̄k = Wk / ∑_{k'=1}^{K} Wk'
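For K weight maps stacked in one array, this normalization is a one-liner; the small epsilon guarding against all-zero pixels is an added safeguard, not part of the formula.

```python
import numpy as np

def normalize_weights(weights, eps=1e-12):
    # weights: shape (K, H, W); after normalization the K maps sum to 1
    # at every pixel location.
    return weights / (weights.sum(axis=0, keepdims=True) + eps)
```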


0.3 Image Enhancement

The enhanced image R(x, y) is obtained by fusing the defined inputs with the weight measures at every pixel location:

R(x, y) = ∑_{k=1}^{K} W̄k(x, y) Ik(x, y)
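With normalized weights, this fusion is a per-pixel weighted sum over the K inputs; a minimal sketch:

```python
import numpy as np

def fuse(inputs, weights):
    # inputs: (K, H, W, C) images; weights: (K, H, W) normalized maps.
    # Returns the (H, W, C) weighted blend R = sum_k W_k * I_k.
    return np.einsum("khw,khwc->hwc", weights, inputs)
```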

In the present application the two input images are derived from a modified global contrast stretched image and a white balanced image in Lab color space. For this set of images, weight maps are computed based on saliency, exposedness, global contrast and local contrast. Thus we have a linear combination of 2 input images and 4 weight maps, for a total of 8 weighted combinations. Below, results are shown both considering the individual measures with Laplacian blending and using naive blending with all the masks. The code for Laplacian blending was taken from Roy and arnon, 2010.

In naive blending some artifacts are visible; the Laplacian blending method reduces this effect.

Thus, considering computational efficiency and requirements, we may choose the method corresponding to any one of the results.
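A compact NumPy sketch of multi-scale (Laplacian pyramid) blending is given below. The 2x2 block-average downsample and nearest-neighbour upsample are crude stand-ins for the Gaussian pyramid steps of the referenced OpenCV implementation, and image dimensions are assumed divisible by 2^levels.

```python
import numpy as np

def down(img):
    # 2x downsample by averaging 2x2 blocks (stand-in for a Gaussian pyramid step).
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    c = img[:h, :w]
    return (c[0::2, 0::2] + c[1::2, 0::2] + c[0::2, 1::2] + c[1::2, 1::2]) / 4.0

def up(img, shape):
    # Nearest-neighbour 2x upsample, cropped to the target shape.
    u = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return u[:shape[0], :shape[1]]

def pyramid_blend(i1, i2, w1, w2, levels=3):
    # Build Gaussian pyramids of the two grayscale inputs and their
    # (normalized) weight maps.
    g1, g2, gw1, gw2 = [i1], [i2], [w1], [w2]
    for _ in range(levels):
        g1.append(down(g1[-1])); g2.append(down(g2[-1]))
        gw1.append(down(gw1[-1])); gw2.append(down(gw2[-1]))
    # Laplacian pyramids: each level minus the upsampled coarser level.
    l1 = [g1[i] - up(g1[i + 1], g1[i].shape) for i in range(levels)] + [g1[-1]]
    l2 = [g2[i] - up(g2[i + 1], g2[i].shape) for i in range(levels)] + [g2[-1]]
    # Blend each level with the weight pyramid, then collapse coarse-to-fine.
    out = gw1[-1] * l1[-1] + gw2[-1] * l2[-1]
    for i in range(levels - 1, -1, -1):
        out = up(out, l1[i].shape) + gw1[i] * l1[i] + gw2[i] * l2[i]
    return out
```

Because the weights are blended at every scale, seams that appear in naive per-pixel blending are spread across frequency bands and become much less visible.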

0.4 Code

For the code for the above routines, refer to http://code.google.com/p/m19404/source/browse/FusionEnhancement/


0.5 Results

Each results figure shows: (a) original, (b) input 1, (c) input 2, and fused outputs using (d) the global contrast weight, (e) the saliency weight, (f) the exposedness weight, (g) the local contrast weight, and (h) naive blending with all masks.

Figure 5: Example 1

Figure 6: Example 2

Figure 7: Example 3

Figure 8: Example 4

Figure 9: Example 5

Figure 10: Example 5

Figure 11: Example 6

Figure 12: Example 8

Figure 13: Example 9

Figure 14: Example 10

Figure 15: Example 10

Figure 16: Example 11

Figure 17: Example 12

Bibliography

[1] Radhakrishna Achanta et al. "Frequency-tuned Salient Region Detection". In: IEEE International Conference on Computer Vision and Pattern Recognition (CVPR 2009). Miami Beach, Florida, 2009, pp. 1597–1604. doi: 10.1109/CVPR.2009.5206596. url: http://ivrg.epfl.ch/supplementary_material/RK_CVPR09/index.html

[2] Cosmin Ancuti et al. "Enhancing Underwater Images and Videos by Fusion". In: CVPR. 2012.

[3] Tom Mertens, Jan Kautz, and Frank Van Reeth. "Exposure Fusion". In: Proceedings of the Pacific Conference on Computer Graphics and Applications (Pacific Graphics 2007), Maui, Hawaii, USA, October 29 - November 2, 2007. Ed. by Marc Alexa, Steven J. Gortler, and Tao Ju. IEEE Computer Society, 2007, pp. 382–390. isbn: 978-0-7695-3009-3. doi: 10.1109/PG.2007.17.

[4] Roy and arnon. Simple Laplacian blender using OpenCV. 2010. url: http://www.morethantechnical.com/2011/11/13/just-a-simple-laplacian-pyramid-blender-using-opencv-wcode/