
A Novel Multiple-Stage Multi-Focus Fusion Algorithm

Chen Shen, Zhongliang Jing*, Han Pan
Institute of Aerospace Science and Technology, Shanghai Jiao Tong University

No. 800, Dongchuan Road, Minhang District, Shanghai 200240, China
[email protected] [email protected]

Abstract—Image fusion is defined as the process of combining multiple source images into a single image retaining the significant details or features of each source image. Although existing multi-focus algorithms achieve some improvement in fusion performance, using each method alone may bring about geometric distortion, which will be discussed in the following sections. To solve this problem, this paper presents a novel algorithm that can effectively eliminate the effects caused by the Discrete Wavelet Transform (DWT) and Spatial Frequency (SF). It comprises a multiple-stage fusion and a new fusion scheme on the low-frequency DWT coefficients. The proposed fusion scheme simultaneously calculates two intermediate images and then merges these pre-fused images into a new image. Objective measures and visual assessment on extensive experiments show that the proposed approach outperforms some well-known methods.

Key words: Image Fusion, Discrete Wavelet Transform, Spatial Frequency, Multi-Focus

I. INTRODUCTION

Image fusion combines different aspects of information from multiple source images. Multi-focus image fusion is a process of combining two or more partially defocused images into a new image with all interesting details of the objects well displayed. To obtain an image with every object in focus, a multi-focus image fusion process is required to fuse images taken at different focal lengths. There are three different fusion categories: pixel level, feature level and decision level [1]. Each level depends on the stage at which the fusion algorithm takes place.

The multi-resolution fusion uses multi-scale transforms to analyze the information content of images for diverse purposes. These multi-scale fusion methods firstly calculate the multi-resolution decomposition coefficients of the source images, then combine the coefficients into a composite representation, and finally reconstruct the fused image using the inverse multi-resolution transform. These multi-scale transforms involve the Laplacian pyramid [2-3], the gradient pyramid [4], the RoLP pyramid [5] and the Discrete Wavelet Transform (DWT) [6-8]. The fusion techniques based on image blocks aim to select sharper blocks using a sharpness criterion such as Spatial Frequency (SF) [9-10] or local gradient.

Recently, many pixel-based multi-focus image fusion algorithms have been presented. Some improved methods based on the wavelet transform, such as the Dual Tree Complex Wavelet Transform (DT-CWT) [11] and the Low Redundancy Discrete Wavelet Frame (LRDWF) [12], are shift-invariant but inevitably bring about distortion effects. Some recently presented papers use a block-based fusion scheme; these algorithms aim to select sharper image blocks, and the block size has been optimized using different strategies such as the Genetic Algorithm (GA) [13] and the Pulse Coupled Neural Network (PCNN) [14] to obtain the best fusion result. But the fused images produced by block-based methods suffer from bad continuity and exhibit jaggies and large block borders. Among the algorithms concerning spatial frequency or the wavelet transform, a multi-focus image fusion based on spatial frequency and morphological operators was proposed in [15], a multi-focus image fusion using region segmentation and spatial frequency was presented in [16], a multi-focus image fusion based on the stationary wavelet transform and an extended spatial frequency measurement was given in [17], etc. However, these methods are essentially attached to either multi-resolution analysis or block-based methods.

The DWT fusion shows better local features in the frequency domain, but traditional algorithms exhibit bad distortion effects along the edges and borders in the image. Since a blurred region has a relatively low SF value, the block-based method can easily discriminate between the focused objects and the blurred ones.

As the fused image using the SF rule inherits a lot from the region in focus but also gains undesired block effects along the edge of the blurred region, a combination of DWT fusion and SF fusion is proposed in this paper to overcome the above-mentioned defects to some extent. The new approach uses a multiple-stage fusion strategy in which a novel fusion algorithm for the DWT low-frequency subband using locally adaptive weights is applied. The original approach in the DWT low-frequency subband separates the columns and rows of a pixel's neighborhood block and calculates their statistical properties to obtain the final weight of the pixel. As the fusion scheme includes two phases of fusion, different fusion rules for the DWT low-frequency subband are performed to coordinate with the specific phases, and the new low-frequency fusion of the DWT is applied in the second phase. This paper presents two fusion paths whose detailed algorithms are illustrated in Section IV.

*Corresponding author. This paper is jointly supported by the National Natural Science Foundation of China (Grant No. 61175028).


This paper firstly presents the basic concept of the DWT and SF and then outlines the proposed fusion scheme. In the later sections the results of the proposed scheme are presented with a comparison of performance based on a variety of evaluation criteria.

II. WAVELET TRANSFORM AND SPATIAL FREQUENCY

The general procedure of the wavelet transform is shown in Figure 1. Given an M×N input image I, we generate two (M/2)×N images, IL and IH, by separately filtering and down-sampling the rows in I using a low-pass filter L and a high-pass filter H. We repeat the process by filtering and down-sampling the columns in IL and IH using the filters L and H. The output are four (M/2)×(N/2) images ILL, ILH, IHL and IHH, where ILL is a low-frequency approximation of I, and ILH, IHL and IHH are high-frequency detail images which represent vertical (V), horizontal (H) and diagonal (D) structures in I [8][18].

Figure 1. An M×N input image I decomposed into three (M/2)×(N/2) detail images ILH, IHL and IHH and one (M/2)×(N/2) approximation image ILL. The image ILL is further decomposed into three (M/4)×(N/4) detail images I'LH, I'HL and I'HH and one (M/4)×(N/4) approximation image I'LL [16].

The Wavelet Transform is firstly performed on each source image, and then a fusion decision map is generated based on two fusion rules. The fused wavelet coefficients can be constructed from the wavelet coefficients of the source images according to the fusion path. Finally the fused image is obtained by performing the Inverse Discrete Wavelet Transform (IDWT).

The Spatial Frequency measures the overall activity level in an image [10][16]. For an M×N image F, with the gray value at pixel position (m,n) denoted by F(m,n), its spatial frequency is defined as

SF = √(RF² + CF²),   (1)

where RF and CF are the row frequency

RF = √( (1/(M·N)) Σ_{m=1..M} Σ_{n=2..N} (F(m,n) − F(m,n−1))² )   (2)

and column frequency

CF = √( (1/(M·N)) Σ_{n=1..N} Σ_{m=2..M} (F(m,n) − F(m−1,n))² ),   (3)

respectively. The spatial frequency will be derived via block-based computation.

III. THE ASSESSMENT OF THE BLOCK-BASED AND WAVELET-BASED MULTI-FOCUS FUSION ALGORITHM

According to the reconstruction property of the wavelet transform, the final fused image keeps a nice continuity after multi-scale reconstruction, despite ringing and distortion artifacts. These artifacts are generated by down-sampling, redundant coefficients, inaccurate selection of coefficients and other processes in wavelet-based analysis. Although some recently presented multi-scale transforms are shift-invariant and have low redundancy, the distortion artifacts cannot be perfectly eliminated. Since multi-focus source images usually don't have obvious bounds between the clear region and the blurred region, the block-based fusion methods may bring about jaggies, large mutations and blocks in the fused image. However, the block-based selection keeps the highest fidelity from the clear region. Recently presented papers merely use one of the two strategies, so the above-mentioned drawbacks remain. Figure 2 clearly shows these geometric distortion and blocking artifacts.

Figure 2. (a) Fused image using DWT (b) Fused image using SF (c) Difference image between (a) and source image (d) Difference image between (b) and source image (e) Grid pattern of (c) (f) Grid pattern of (d).

From Figure 2 above we can obtain some intuitive perception. The wavelet-based fusion does well in visual perception, but it suffers from severe geometric distortion and low fidelity. The fused image using the block-based method has some very flat regions in the difference image, and these regions are perfect duplicates of the clear regions of the source image. However, the blocking effects are unfavorable to visual assessment.
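As a concrete illustration, the one-level filter-bank decomposition described in Section II can be sketched in Python. NumPy is assumed, the function name is ours, and the Haar pair is used as an illustrative choice of the L and H filters (the paper itself uses Daubechies wavelets in its experiments):

```python
import numpy as np

def haar_dwt2(I):
    """One level of the 2-D filter bank: filter and down-sample the rows,
    then the columns, using the orthonormal Haar pair
    L = [1, 1]/sqrt(2), H = [1, -1]/sqrt(2) (illustrative choice).
    Returns the four (M/2) x (N/2) sub-images ILL, ILH, IHL, IHH."""
    I = I.astype(np.float64)
    s = np.sqrt(2.0)
    # Filter + down-sample the rows -> IL (low-pass) and IH (high-pass).
    IL = (I[0::2, :] + I[1::2, :]) / s
    IH = (I[0::2, :] - I[1::2, :]) / s
    # Repeat on the columns of IL and IH.
    ILL = (IL[:, 0::2] + IL[:, 1::2]) / s
    ILH = (IL[:, 0::2] - IL[:, 1::2]) / s
    IHL = (IH[:, 0::2] + IH[:, 1::2]) / s
    IHH = (IH[:, 0::2] - IH[:, 1::2]) / s
    return ILL, ILH, IHL, IHH
```

For a constant image all detail sub-images vanish and only the approximation ILL carries energy; because the Haar filters are orthonormal, the total energy of the four sub-images equals that of the input.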


IV. THE PROPOSED FUSION SCHEME

In this section, the details of the fusion rules are discussed. The fusion algorithm is decided according to the goal of fusion. There are two proposed rules using a combination of DWT and SF. In order to simplify the description of the fusion rule using the two-dimensional DWT, we assume that there are only two multi-focus source images in the fusion scheme. All of the fusion rules described below can be easily extended to the case of three or more source images.

First of all, A and B are assumed to be source images focused on different regions relative to the reference image. The fusion scheme is divided into a two-stage process including DWT fusion and SF fusion.

On the first stage, the DWT fusion and SF fusion are executed simultaneously and two fused images are acquired, denoted FDWT(x,y) and FSF(x,y). The fused image FDWT(x,y) is derived by dealing with the decomposed wavelet coefficients, while FSF(x,y) is derived by directly dealing with image blocks.

In the first DWT fusion, the low frequency coefficients are fused using local gradient-based weighted average scheme. Let fA(x,y) and fB(x,y) be the DWT low frequency coefficients of image A and B. Similarly, let gA(x,y) and gB(x,y) be the high frequency coefficients of image A and B. Consider an M×N window (5×5 in this paper) centered at location (x,y), calculate its average gradient which can be described as

AG = (1/(M·N)) Σ_{i=−(M−1)/2..(M−1)/2} Σ_{j=−(N−1)/2..(N−1)/2} √( [Δx f(x+i, y+j)]² + [Δy f(x+i, y+j)]² ),   (4)

where Δx f(x+i, y+j) and Δy f(x+i, y+j) denote the first order difference at location (x+i, y+j) along the x-axis and y-axis direction respectively. Assuming AGA and AGB represent the average gradient of fA(x,y) and fB(x,y) at location (x,y), then the fused low frequency sub-image can be written as

fF(x,y) = (AGA / (AGA + AGB)) · fA(x,y) + (AGB / (AGA + AGB)) · fB(x,y).   (5)
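A minimal Python sketch of this gradient-weighted low-frequency rule (Eqs. (4)-(5)); NumPy is assumed, the function names and the flat-region fallback are ours, and np.gradient's differences stand in for the first-order differences:

```python
import numpy as np

def average_gradient(f, x, y, m=5, n=5):
    """Average gradient of an m x n window centred at (x, y), per Eq. (4).
    f is a 2-D array of DWT low-frequency coefficients."""
    win = f[x - m // 2:x + m // 2 + 1, y - n // 2:y + n // 2 + 1].astype(np.float64)
    dx, dy = np.gradient(win)                 # differences along x and y
    return np.sqrt(dx ** 2 + dy ** 2).mean()  # (1/(m*n)) * sum of magnitudes

def fuse_low(fa, fb, x, y):
    """Eq. (5): weight each low-frequency coefficient by its local AG."""
    ga, gb = average_gradient(fa, x, y), average_gradient(fb, x, y)
    if ga + gb == 0.0:
        return 0.5 * (fa[x, y] + fb[x, y])    # flat-region fallback (our choice)
    return (ga * fa[x, y] + gb * fb[x, y]) / (ga + gb)
```

A window taken from a flat region has zero average gradient, so the coefficient from the sharper source dominates the weighted average.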

For the DWT high frequency coefficients, Burt and Kolczynski have proposed a window-based fusion rule [19], which is suitable for multi-focus image fusion. This algorithm is summarized below. Choose a small neighborhood window of M×N (5×5 in this paper) centered on point (x,y) of both A and B, and calculate its weighted local energy as a salience measure:

SI(x,y) = Σ_{l=−L..L} Σ_{t=−T..T} w(l,t) · [gI(x+l, y+t)]²,   I = A, B,   (6)

where w(l,t) denotes a coefficient template with Σ_{l=−L..L, t=−T..T} w(l,t) = 1 and 2L+1 = M, 2T+1 = N. Then the local normalized correlation of the window is defined as

MA,B(x,y) = 2 · Σ_{l=−L..L, t=−T..T} w(l,t) · gA(x+l, y+t) · gB(x+l, y+t) / (SA(x,y) + SB(x,y)).   (7)

At each position (x,y), assume a threshold α ∈ [0,1]. If the match measure MA,B(x,y) satisfies MA,B(x,y) ≤ α, then the selected coefficient can be written as

gF(x,y) = gA(x,y), if SA(x,y) ≥ SB(x,y);  gB(x,y), if SA(x,y) < SB(x,y).   (8)

Else if MA,B(x,y) > α, the combined result will be implemented as

gF(x,y) = ωmax·gA(x,y) + ωmin·gB(x,y), if SA(x,y) ≥ SB(x,y);  ωmin·gA(x,y) + ωmax·gB(x,y), if SA(x,y) < SB(x,y),   (9)

where ωmin = 1/2 − (1/2) · (1 − MA,B(x,y)) / (1 − α) and ωmax = 1 − ωmin.
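Eqs. (6)-(9) can be sketched at a single position as follows; the uniform weight template, the default α and the zero-energy guard are our illustrative choices, not prescribed by the paper:

```python
import numpy as np

def fuse_high(gA, gB, x, y, alpha=0.75, m=5, n=5):
    """Burt-Kolczynski window rule, Eqs. (6)-(9), at one position (x, y).
    gA, gB: high-frequency coefficient arrays; alpha: match threshold.
    Uses a uniform template w(l, t) = 1/(m*n), which sums to 1."""
    L, T = m // 2, n // 2
    wA = gA[x - L:x + L + 1, y - T:y + T + 1].astype(np.float64)
    wB = gB[x - L:x + L + 1, y - T:y + T + 1].astype(np.float64)
    w = 1.0 / (m * n)
    SA = w * np.sum(wA ** 2)                      # Eq. (6), salience of A
    SB = w * np.sum(wB ** 2)                      # Eq. (6), salience of B
    if SA + SB == 0.0:
        return 0.0                                # both windows empty (guard, ours)
    M = 2.0 * w * np.sum(wA * wB) / (SA + SB)     # Eq. (7), match measure
    if M <= alpha:                                # Eq. (8): select the salient one
        return gA[x, y] if SA >= SB else gB[x, y]
    w_min = 0.5 - 0.5 * (1.0 - M) / (1.0 - alpha)  # Eq. (9): weighted average
    w_max = 1.0 - w_min
    if SA >= SB:
        return w_max * gA[x, y] + w_min * gB[x, y]
    return w_min * gA[x, y] + w_max * gB[x, y]
```

When the two windows match perfectly (M = 1) the rule degenerates to an equal-weight average; when they are uncorrelated (M ≤ α) it selects the coefficient with the higher salience.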

As for FSF(x,y), it is obtained via the above-mentioned spatial frequency method. The spatial frequency rule is applied through these steps [10]:

1. Decompose the source images into k blocks of size M×N.
2. Compute the above-mentioned spatial frequency for each block.
3. Compare the spatial frequencies of two corresponding blocks Ak and Bk, then construct the kth block Fk of the fused image as

Fk = Ak, if SF(Ak) > SF(Bk) + Threshold;  Bk, if SF(Ak) < SF(Bk) − Threshold;  (Ak + Bk)/2, otherwise.   (10)
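The three steps above and Eq. (10) can be sketched as follows; the block size and Threshold values are illustrative (the paper tunes them per experiment), and the function names are ours:

```python
import numpy as np

def spatial_frequency(block):
    """SF of one block, per Eqs. (1)-(3)."""
    b = block.astype(np.float64)
    m, n = b.shape
    rf2 = np.sum((b[:, 1:] - b[:, :-1]) ** 2) / (m * n)  # row frequency squared
    cf2 = np.sum((b[1:, :] - b[:-1, :]) ** 2) / (m * n)  # column frequency squared
    return np.sqrt(rf2 + cf2)

def sf_fuse(A, B, block=16, threshold=1.0):
    """Steps 1-3 and Eq. (10): pick the block whose spatial frequency is
    clearly higher; average the two blocks when the comparison is ambiguous."""
    F = np.empty(A.shape, dtype=np.float64)
    for r in range(0, A.shape[0], block):
        for c in range(0, A.shape[1], block):
            a = A[r:r + block, c:c + block].astype(np.float64)
            b = B[r:r + block, c:c + block].astype(np.float64)
            sfa, sfb = spatial_frequency(a), spatial_frequency(b)
            if sfa > sfb + threshold:            # A clearly sharper
                F[r:r + block, c:c + block] = a
            elif sfa < sfb - threshold:          # B clearly sharper
                F[r:r + block, c:c + block] = b
            else:                                # ambiguous: average
                F[r:r + block, c:c + block] = (a + b) / 2.0
    return F
```

A high-contrast block (e.g. a checkerboard) has a much larger SF than a flat block, so every block is taken from the sharper source.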

On the second stage, the intermediate images FDWT(x,y) and FSF(x,y) will be fused again to generate the final fused image F(x,y) using two schemes respectively:

Rule A: Generate the final fused image using spatial frequency rule with blocks of relatively higher resolution.

Rule B: Generate the final fused image using a novel DWT fusion method at a relatively low decomposition level.

The reason why we do not use the same low-frequency fusion scheme described in the first stage is that the intermediate images have higher sharpness and exhibit some ringing effects generated by the DWT fusion, and the regions with ringing effects will gain large local gradient values, which causes mistakes in clear-region detection. If we still used the gradient features to detect clear regions, these bad ringing effects would be preserved in the finally fused image.

The novel wavelet-based fusion algorithm uses a new low frequency fusion strategy based on neighborhood operation and adaptive weights. Given two low frequency sub-images fA(x,y) and fB(x,y) of the intermediate images FDWT(x,y) and FSF(x,y), choose a small neighborhood window of N×N (5×5 in this paper) centered on point (x,y), and regard this window as an N×N matrix W. The matrix W can be partitioned by columns and rows as


W = (p1, p2, p3, …, pN) = (q1; q2; …; qN),   (11)

where pi (i = 1, 2, …, N) are the column vectors of W and qi are its row vectors. Consider the deviation matrix of the column vectors pi:

P = (1/N) Σ_{i=1..N} (pi − p̄)(pi − p̄)ᵀ,   (12)

where p̄ denotes the mean vector of the pi. Similarly, we have

Q = (1/N) Σ_{i=1..N} (qi − q̄)(qi − q̄)ᵀ.   (13)

Let λi (i = 1, 2, …, N) be the eigenvalues of matrix P; then we calculate the trace of matrix P and use the fact that

Tr(P) = (1/N) Σ_{i=1..N} ‖pi − p̄‖² = Σ_{i=1..N} λi.   (14)

From the equation above, it can be easily found that the trace of matrix P indicates mean square Euclidean distance of columns which describes the distribution of each column vector around their mean vector. Thus with larger trace or larger sum of eigenvalues, the column vectors tend to have a relatively ‘rough’ distribution around their mean vector. Taking rows of W into consideration, we define the overall significance level of point (x,y) as

SigI(x,y) = Tr(P) · Tr(Q) = (1/N²) · ( Σ_{i=1..N} ‖pi − p̄‖² ) · ( Σ_{i=1..N} ‖qi − q̄‖² ),   I = A, B.   (15)

If the neighborhood window of point (x,y) has relatively high sharpness, its columns and rows will have a 'rough' distribution around their mean vectors, which means a larger value of Sig(x,y), and vice versa. Then the fused low frequency sub-image can be written as

fF(x,y) = (SigA(x,y) / (SigA(x,y) + SigB(x,y))) · fA(x,y) + (SigB(x,y) / (SigA(x,y) + SigB(x,y))) · fB(x,y).   (16)
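The significance measure of Eqs. (11)-(15) and the adaptive-weight rule of Eq. (16) can be sketched as follows; NumPy is assumed, and the function names and the flat-region fallback are ours:

```python
import numpy as np

def significance(window):
    """Sig per Eq. (15): product of the traces of the column scatter
    matrix P and the row scatter matrix Q of an N x N window."""
    W = window.astype(np.float64)
    N = W.shape[0]
    # Tr(P) = (1/N) * sum_i ||p_i - p_mean||^2 over the column vectors p_i.
    tr_P = np.sum((W - W.mean(axis=1, keepdims=True)) ** 2) / N
    # Tr(Q): same quantity over the row vectors q_i.
    tr_Q = np.sum((W - W.mean(axis=0, keepdims=True)) ** 2) / N
    return tr_P * tr_Q

def fuse_low2(fa, fb, x, y, n=5):
    """Eq. (16): adaptive weights from the significance measure."""
    r = n // 2
    sA = significance(fa[x - r:x + r + 1, y - r:y + r + 1])
    sB = significance(fb[x - r:x + r + 1, y - r:y + r + 1])
    if sA + sB == 0.0:
        return 0.5 * (fa[x, y] + fb[x, y])  # flat-region fallback (our choice)
    return (sA * fa[x, y] + sB * fb[x, y]) / (sA + sB)
```

A constant window has zero significance, so a sharp window taken from the other source receives all of the weight, which is exactly the behaviour the 'rough' distribution argument above predicts.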

As for the high frequency coefficients, the fusion scheme still conforms to the above-mentioned algorithm. Experiments show that this new low frequency fusion algorithm performs best on high resolution sub-images; thus with a low decomposition level of the DWT, the approximation sub-images have high resolution and the fusion result will be the best.

The two proposed fusion rules on the second stage are used to partially eliminate distortion and blocking effects brought by the DWT and SF on the first stage. These artifacts come from the fusion on the first stage. Then after the fusion on the second stage, blurred regions will be replaced by relatively clear regions and geometric distortions will be alleviated.

Figure 3 represents the schematic diagram for the entire fusion rule:

Figure 3. Entire fusion process using DWT and SF.

V. EXPERIMENT RESULTS

The concrete implementation of the fusion process using the DWT and SF is as follows. During the experiment period, two different images focused on two different objects are provided for fusion. These two images have been spatially registered. The experiment involves three tests. The first test uses artificially blurred images with reference images, while the last two tests use naturally blurred images with no reference images.

In order to compare the proposed rules with traditional ones, we employ three multi-resolution based fusion algorithms and one block-based method: the DWT, the SF, the Dual Tree Complex Wavelet Transform (DT-CWT) [20] and the Low Redundancy Discrete Wavelet Frame (LRDWF) [12]. All parameters here are set to reach the best performance. For the multi-resolution based methods DWT, DT-CWT and LRDWF, the coefficients are combined by choosing the maximum absolute values of the transformation coefficients. The wavelet function 'db9' is chosen in the DWT fusion method and the decomposition level is 4 for the best performance. The basis 'db4' is chosen in the DT-CWT and LRDWF methods, and the decomposition level is 4 for DT-CWT and 6 for LRDWF to obtain better results. For the traditional SF method, the block size is chosen adaptively for the best performance. The evaluation criteria used here are RMSE (Root Mean Square Error), PSNR (Peak Signal-to-Noise Ratio), QAB/F [21][22] and MI (Mutual Information) [23].
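Of these criteria, RMSE and PSNR against a reference image are straightforward to compute; a minimal sketch, with a peak value of 255 assumed for 8-bit images (function names ours):

```python
import numpy as np

def rmse(ref, fused):
    """Root mean square error between a reference and a fused image."""
    ref = ref.astype(np.float64)
    fused = fused.astype(np.float64)
    return np.sqrt(np.mean((ref - fused) ** 2))

def psnr(ref, fused, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = rmse(ref, fused)
    return float('inf') if e == 0.0 else 20.0 * np.log10(peak / e)
```

QAB/F and MI are more involved (edge-preservation and histogram-based mutual-information measures respectively) and are defined in [21]-[23].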

The first implementation, on artificially blurred images with resolution of 512×512, is shown in Figure 4. For rule A, images are divided into 16×16 blocks on the first stage and 32×32 blocks on the second stage in the process of the SF fusion, and the DWT decomposition level is 5. As for rule B, the decomposition level of the DWT is 6 on the first stage and is changed to 1 on the second stage, and images are divided into 16×16 blocks on the first stage using SF.

Figure 4. Test 1: source images and fused images. (a) Reference image (b) Source image focused on the foreground (c) Source image focused on the background (d) Fused image using DWT (e) Fused image using SF (f) Fused image using DT-CWT (g) Fused image using LRDWF (h) Fused image by taking fusion rule A (i) Fused image by taking fusion rule B.

A comparison of the results in terms of the evaluation criteria is shown in Table 1.

TABLE 1. COMPARISON OF FUSION RESULTS FOR IMAGES WITH REFERENCES

Test 1
Method    RMSE    PSNR     QAB/F   MI
DWT       1.8108  42.9733  0.6844  7.4633
SF        1.3451  45.5555  0.7098  9.2419
DT-CWT    1.2888  42.3366  0.6988  7.8237
LRDWF     1.5033  42.3718  0.6956  7.6043
Rule A    1.2675  46.0715  0.9601  12.6606
Rule B    1.2836  45.9624  0.9556  12.5157

Test 2 illustrates an experiment on naturally blurred images with resolution of 1280×960. For rule A, images are divided into 32×32 blocks on the first stage and 64×64 blocks on the second stage in the process of SF fusion, and the DWT decomposition level is 5. As for rule B, the decomposition level of the DWT is 4 on the first stage and is changed to 1 on the second stage, and images are divided into 64×64 blocks on the first stage using SF.

Figure 5. Test 2: source images and fused images. (a) Source image focused on the left (b) Source image focused on the right (c) Fused image using DWT (d) Fused image using SF (e) Fused image using DT-CWT (f) Fused image using LRDWF (g) Fused image by taking fusion rule A (h) Fused image by taking fusion rule B.

Another test on naturally blurred images with resolution of 512×512 is shown in Figure 6. For rule A, images are divided into 32×32 blocks on the first stage and 64×64 blocks on the second stage in the process of SF fusion, and the DWT decomposition level is 5. As for rule B, the decomposition level of the DWT is 5 on the first stage and is changed to 2 on the second stage, and images are divided into 32×32 blocks on the first stage using SF.


Figure 6. Test 3: source images and fused images. (a) Source image focused on the left (b) Source image focused on the right (c) Fused image using DWT (d) Fused image using SF (e) Fused image using DT-CWT (f) Fused image using LRDWF (g) Fused image by taking fusion rule A (h) Fused image by taking fusion rule B.

The objective performance assessments are shown in Table 2.

TABLE 2. COMPARISON OF FUSION RESULTS FOR IMAGES WITHOUT REFERENCES

Test 2
Method           QAB/F   MI
DWT              0.6475  5.9826
SF               0.7052  8.6629
DT-CWT           0.6821  6.5843
LRDWF            0.6742  6.4981
Proposed rule A  0.9483  9.8104
Proposed rule B  0.9456  9.9081

Test 3
DWT              0.7486  6.3184
SF               0.7980  7.6934
DT-CWT           0.7773  6.7656
LRDWF            0.7735  6.6787
Proposed rule A  0.9430  10.0023
Proposed rule B  0.9357  9.9143

It can be easily found from the above pictures that the proposed approaches show better visual effects than the wavelet-based methods and SF fusion rules in terms of distortion effects and de-blocking. Some local magnifications below clearly present the amelioration of these visual effects.

Figure 7. Local magnifications for de-blocking. (a) Clear region (b) SF (c) Rule A (d) Rule B.

Figure 8. Local magnifications for distortion. (a) Clear region (b) DWT (c) Rule A (d) Rule B.

The above table values and pictures show that the combined approach works well in many cases. In visual assessments, the proposed rules relieve distortion and block effects to some extent. The objective value assessment also evidently supports the two schemes, especially the MI evaluation for the comparison with some traditional methods and the QAB/F measure for the comparison with references. It means that the proposed schemes preserve more information from the sources. The comparison with the referred methods increases the reliability of the proposed ones in this paper. However, these methods still have some shortcomings: the distortion and blocking effects cannot be thoroughly eliminated, the fusion process is very sophisticated and the combined approach is time-consuming.

VI. CONCLUSION

In this paper, a new approach based on DWT and SF is proposed. The proposed methods implement a combination of DWT and SF fusion, and use different strategies in each phase of fusion. This paper also proposes an effective fusion algorithm on DWT low frequency coefficients. The simulation results show that the two proposed fusion schemes outperform some existing image fusion techniques in terms of the above-mentioned evaluation criteria. Compared with some recently proposed fusion rules, the new schemes also make an improvement in objective measures and visual perception. With the integration of SF and DWT methods, the fused image shows a promising approach for improving final fusion performance. The proposed rules are more accurate and keep a relatively low degree of information loss. However, the above-mentioned defects still exist. Further research can be done to improve the results.

REFERENCES

[1] C. Pohl, Multi-sensor image fusion in remote sensing: concepts, methods and applications, International Journal of Remote Sensing, 19(5), (1998), pp. 823-854.

[2] Burt, P. J. & Adelson, E. H., The Laplacian pyramid as a compact image code, IEEE Transactions on Communications, 31, (1983), pp. 532-540.

[3] Burt, P. J. & Adelson, E. H., Merging images through pattern decomposition, Proceedings of SPIE, 575, (1985), pp. 173-181.

[4] P. J. Burt, A gradient pyramid basis for pattern selective image fusion, Proceedings of the Society for Information Display Conference, (1992), pp. 467-470.

[5] A. Toet, Multiscale contrast enhancement with applications to image fusion, Optical Engineering, 31, (1992), pp. 1026-1031.

[6] T. Huntsberger & B. Jawerth, Wavelet based sensor fusion, Proceedings of SPIE, 2059, (1993), pp. 488-498.

[7] Mallat, S. G., A theory for multiresolution signal decomposition: The wavelet representation, IEEE Transactions on Pattern Analysis and Machine Intelligence, 11, (1989), pp. 674-693.

[8] Yocky, D. A., Image merging and data fusion by means of the discrete two-dimensional wavelet transform, Journal of the Optical Society of

America A: Optics, Image Science, and Vision, 2, (1995), pp. 1834-1841.

[9] Eskicioglu, A.M. & Fisher, P. S., Image quality measures and their performance, IEEE Transactions on Communications, 43, (1995), pp. 2959-2965.

[10] Shutao Li, James T. Kwok and Yaonan Wang, Combination of images with diverse focuses using the spatial frequency, Information Fusion, Vol.2, Issue 3, (2001), pp. 169-176.

[11] Hill, P., Canagarajah, N., & Bull, D.. Image fusion using complex wavelets. Proceedings of the 13th British Machine Vision Conf. (2002), pp. 487-496.

[12] B. Yang, Z.L. Jing, Image fusion using a low-redundant and nearly shift-invariant discrete wavelet frame, Optical Engineering, 46, (2007) 107002.

[13] Kong, J., Zheng, K., Zhang, J., & Feng, X., Multi-focus image fusion using spatial frequency and genetic algorithm, International Journal of Computer Science and Network Security, 8, (2008), pp. 220-224.

[14] Huang, W., & Jing, Z., Multi-focus image fusion using pulse coupled neural network, Pattern Recognition Letters, 28, (2007), pp. 1123-1132.

[15] Bin Yang & Shutao Li, Multi-focus image fusion based on spatial frequency and morphological operators, Chinese Optics Letters, Vol.5, Issue 8, (2007), pp. 452-453.

[16] Shutao Li & Bin Yang, Multifocus image fusion using region segmentation and spatial frequency, Image and Vision Computing, Vol.26, (2008), pp. 971-979.

[17] Pusit Borwonwatanadelok, Wirat Rattanapitak and Somkait Udomhunsakul, Multi-focus image fusion based on stationary wavelet transform and extended spatial frequency measurement, 2009 International Conference on Digital Object Identifier, (2009), pp. 77-81.

[18] H. B. Mitchell, Image fusion: theories, techniques and applications, (2010), pp. 94-100.

[19] P. J. Burt and R. J. Kolczynski, Enhanced image capture through fusion, Proceedings of the 4th International Conference on Computer Vision, (1993), pp. 173-182.

[20] Zhang Xiu-Qiong, Gao Zhi-Sheng, Yuan Hong-Zhao, Dynamic infrared and visible image sequence fusion based on DT-CWT using GGD, International Conference on Computer Science and Information Technology, (2008), pp. 875-878.

[21] V. Petrovic, C. Xydeas, Sensor noise effects on signal-level image fusion performance, Information Fusion, 4(3), (2003), pp. 167-183.

[22] C. Xydeas, V. Petrovic, Objective image fusion performance measure, Electronics Letters, 36(4), (2000), pp. 308-309.

[23] G. Qu, D. Zhang, P. Yan, Information measure for performance of image fusion, Electronics Letters, 38(7), (2001), pp. 313-315.
