

International Journal of Advanced Engineering Research and Technology (IJAERT) Volume 3 Issue 4, April 2015, ISSN No.: 2348 – 8190


A Neuro-Fuzzy System for Medical Image Fusion

Prof. V. N. Ghodke, Vaishali D. Fegade, Monika R. Barmukh and Rajshree M. Bhoir
Electronics and Telecommunication Department, AISSMS Institute of Information Technology, Pune
Savitribai Phule Pune University, Ganeshkhind Road, Pune, India

Abstract: This paper addresses the multimodal Medical Image Fusion (MIF) problem with a novel system that combines the multiscale geometric analysis of the non-subsampled contourlet transform (NSCT) with a reduced pulse coupled neural network (RPCNN). In conventional multiscale transform methods the low-pass sub-band coefficients are difficult to represent and fail to capture the significant features of the source images. Here, the NSCT performs a multiscale decomposition of the source images to express their details, and a dictionary-learning scheme in the NSCT domain represents the low-frequency information so that the salient features of the images can be extracted. The linking strengths of the RPCNN neurons are set adaptively to reflect the significance of the corresponding coefficients in the original images. Because the RPCNN has a minimal number of parameters, the method is computationally efficient, an important requirement for point-of-care health-care technologies. The system avoids loss of fine image detail, contrast reduction and other unwanted degradations. Subjective and objective evaluations show better performance than existing methods, and non-overlapping blocking reduces the computational cost of fusion with sparse representation. The results show that the method outperforms fusion methods based on multiscale decomposition alone.

Keywords: Non-subsampled Contourlet Transform (NSCT), Reduced Pulse Coupled Neural Network (RPCNN), Neuro-Fuzzy, Medical Image Fusion (MIF), Computed Tomography (CT), Magnetic Resonance Imaging (MRI).

I. INTRODUCTION

A number of image processing tasks are carried out more efficiently in a domain other than the pixel domain, often by means of an invertible linear transform. Such a transform can be redundant or not, depending on whether the set of basis functions is linearly independent. Redundancy can be exploited when designing the basis functions so that the representation captures particular signal behavior more efficiently. In multiscale expansions implemented with filter banks, dropping this independence requirement yields a shift-invariant expansion, an important property in many applications. Several image processing situations require both high spatial and high spectral resolution in a single image. Most existing transforms (wavelet transform, ripplet transform) cannot provide such data easily [1], [2], [3]. Image fusion techniques combine information from different sources, and the resulting image can possess both spatial and spectral resolution properties. However, common image fusion techniques can distort the spectral information of the multispectral data while merging.

Image fusion is used in medical diagnostics and treatment. The term applies when two or more images of a patient are registered and then overlaid or merged to provide additional information. In radiology, image fusion combines the content of the original images. For example, Computed Tomography (CT) images capture differences in tissue density, while Magnetic Resonance Imaging (MRI) is generally used for diagnosing brain tumors; radiologists must combine information from these different modalities to obtain accurate results. The contourlet transform is a directional transform constructed by combining the Laplacian pyramid (LP) and the directional filter bank (DFB). It is not shift-invariant because of the downsamplers and upsamplers in both the LP and the DFB. The non-subsampled contourlet transform (NSCT) is obtained by using a non-subsampled pyramid structure coupled with the non-subsampled DFB (NSDFB). In this paper we address the design of the NSCT-based scheme and show its effectiveness in image fusion.

A PCNN is a two-dimensional neural network. Each neuron in the network corresponds to one pixel of the source image and receives that pixel's color information (e.g. intensity) as well as the color information of its local neighborhood. This information is combined in an internal activation system, which accumulates the stimuli until a threshold is crossed, giving rise to a pulse output. Through repeated iteration, PCNN neurons produce temporal series of pulse outputs. These outputs carry information about the source images and are used in various image processing applications. Compared with many other image processing techniques, the PCNN offers several additional merits.

II. NONSUBSAMPLED CONTOURLET TRANSFORM

The NSCT is a shift-invariant, multiscale and multidirectional expansion with a fast implementation. It achieves the shift-invariance property, which the contourlet transform (CNT) lacks, by using the non-subsampled pyramid filter bank (NSP) and the non-subsampled directional filter bank (NSDFB). The NSCT construction can thus be divided into two parts:

(1) a non-subsampled pyramid structure, which provides the multiscale property, and

(2) a non-subsampled DFB structure, which provides directionality.

We describe each part in detail below.

1) Nonsubsampled Pyramid Filter Bank (NSPFB):

The multiscale property of the NSCT is obtained from a shift-invariant filtering structure that performs sub-band decomposition with non-subsampled 2-D filters. The NSP provides the multiscale characteristic of the NSCT; it involves no down-sampling or up-sampling and is therefore shift-invariant. It is built by iterating a non-subsampled filter bank (NSFB), and one low-frequency (LF) and one high-frequency (HF) image are produced at each NSP decomposition level. Subsequent NSP stages decompose the low-frequency component iteratively to capture the singularities in the image. As a result, the NSP yields k + 1 sub-images of the same size as the source image, consisting of one low-frequency image and k high-frequency images, where k denotes the number of decomposition levels.

2) Nonsubsampled Directional Filter Bank (NSDFB):

The directional filter bank is constructed by combining critically sampled two-channel fan filter banks and resampling operations. The NSDFB is obtained by eliminating the downsamplers and upsamplers of the DFB [4], which results in a tree composed of NSFBs. At each scale the NSDFB performs an l-stage directional decomposition of the high-frequency image produced by the NSP and yields 2^l directional sub-images of the same size as the original image. The NSDFB therefore gives the NSCT its multidirectional property and provides more precise directional detail and fine information. The outputs of the two levels of filters are combined to obtain the directional frequency decomposition, and the synthesis filter bank is obtained in a similar way. All the filter banks in the NSDFB tree structure are derived from a single NSFB. To obtain a multidirectional decomposition the NSDFBs are iterated, and to obtain the next decomposition level all the filters are upsampled by the matrix QM = [1 1; 1 −1]. The NSCT is obtained by combining the NSP and the NSDFB. In this experiment the decomposition parameter of the NSCT is set to levels = [1, 2, 2], and we use "pyrexc" and "pkva" as the pyramid filter and the orientation filter, respectively. With this configuration the number of sub-band images obtained is 11. Each sub-band has the same size as the source medical image, which helps in relating the different sub-bands, simplifies the design of the fusion rule and helps to avoid the pseudo-Gibbs phenomenon.
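Under the usual NSCT convention that each entry of levels gives the number of NSDFB stages applied at the corresponding scale, the count of 11 follows directly: levels = [1, 2, 2] yields one low-pass sub-band plus 2^1 + 2^2 + 2^2 = 2 + 4 + 4 = 10 directional high-pass sub-bands, i.e. 10 + 1 = 11 sub-band images in total.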

Fig. 1. Nonsubsampled contourlet transform: NSFB structure that implements the NSCT.

Moreover, the NSCT has better frequency selectivity and regularity than the other MGA tools and is capable of capturing the fine details present in an image. Furthermore, the NSCT provides a sparse representation of signals and structurally conforms to the frequency-sensitivity distribution of the human visual system (HVS). These facts motivate us to use the NSCT to develop our MIF scheme.

III. REDUCED PULSE COUPLED NEURAL NETWORK

The Pulse Coupled Neural Network (PCNN) is a biological model based on the mammalian visual cortex, introduced by Eckhorn [5]. The PCNN is well suited to tasks such as feature generation for image and pattern recognition.

Structure of the PCNN:

The structure of the standard PCNN follows the structure of the image being processed: the PCNN is a single-layer, two-dimensional, laterally connected neural network of pulse coupled neurons, each connected to one pixel of the image. Every pixel is associated with a pulse coupled neuron of a specific structure. A PCNN neuron consists of three parts, namely the feeding field, the linking field and a pulse generator [6], as shown in Fig. 2.

Fig. 2. Structure of the PCNN.

A neuron receives signals through its feeding and linking inputs. The primary (feeding) input is the neuron's receptive area, consisting of the neighboring pixels of the corresponding pixel in the source image; the linking input is the secondary input formed by lateral connections with neighboring neurons. The difference between these inputs is that the feeding connections have a slower characteristic response time constant than the linking connections.

The standard PCNN model is described iteratively by the following equations:

F_{i,j}[n] = S_{i,j}                                            (1)

L_{i,j}[n] = Σ_{k,l} W_{i,j,k,l} Y_{k,l}[n − 1]                  (2)

U_{i,j}[n] = F_{i,j}[n] (1 + β L_{i,j}[n])                       (3)

Y_{i,j}[n] = 1 if U_{i,j}[n] > T_{i,j}[n − 1], and 0 otherwise   (4)

T_{i,j}[n] = e^{−α_T} T_{i,j}[n − 1] + V_T Y_{i,j}[n]            (5)

The neuron receives its input signals through the feeding and linking inputs. To improve computational efficiency (i.e. to reduce the number of parameters to be tuned) for multimodal MIF, we use a reduced PCNN (RPCNN) model. Here the indices i and j refer to the pixel (coefficient) location in the image, k and l refer to the displacement within a symmetric neighborhood around that pixel (coefficient), and n denotes the current iteration. F and L are the feeding and linking inputs, respectively, W_{i,j,k,l} is the synaptic weight coefficient, and S_{i,j} is the external stimulus. The linking modulation is given in (3), where U_{i,j}[n] is the internal state of the neuron and β is the linking strength parameter. The pulse generator determines the firing events of the model through (4); the value of Y_{i,j}[n] depends on the internal state and the threshold. The dynamic threshold of the neuron is given in (5), where α_T and V_T are the time constant and the normalization constant, respectively.
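As an illustration of (1)–(5), a minimal NumPy sketch of the RPCNN iteration is given below. The 3 × 3 weight kernel and the parameter defaults mirror the experimental setup in Section VI; the initial threshold value is our assumption, and the sketch is not the authors' MATLAB implementation.

    import numpy as np
    from scipy.ndimage import convolve

    def rpcnn_firing_times(S, beta, n_iter=200, alpha_T=0.2, V_T=20.0):
        """Iterate the RPCNN of equations (1)-(5) on a stimulus map S
        (e.g. NSCT sub-band coefficients) and return the accumulated
        firing times sum_n Y[n] used later by the fusion rule."""
        # 3x3 synaptic weights W as in Section VI; the center is 0 so a
        # neuron is not linked to itself.
        W = np.array([[0.7071, 1.0, 0.7071],
                      [1.0,    0.0, 1.0],
                      [0.7071, 1.0, 0.7071]])
        F = S.astype(float)                      # (1) feeding input = external stimulus
        Y = np.zeros_like(F)                     # pulse outputs
        T = np.ones_like(F)                      # dynamic thresholds (initial value assumed)
        G = np.zeros_like(F)                     # accumulated firing times
        for _ in range(n_iter):
            L = convolve(Y, W, mode='constant')  # (2) linking input from neighboring pulses
            U = F * (1.0 + beta * L)             # (3) internal activity
            Y = (U > T).astype(float)            # (4) pulse generator
            T = np.exp(-alpha_T) * T + V_T * Y   # (5) dynamic threshold update
            G += Y
        return G

The argument beta may be a scalar or a per-coefficient array; Section V sets it adaptively for every coefficient.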

IV. NEURO-FUZZY

Neuro-fuzzy-logic-based image fusion requires a few fundamentals to be discussed.

1. Neural Network

A Neural Network (NN) has a natural propensity for storing experiential knowledge and making it available for use. NNs can provide suitable solutions for problems that are generally characterized by non-linearity, high dimensionality, noisy, complicated, imprecise, incomplete or error-prone sensor data, and the absence of a clearly stated mathematical solution or algorithm. A key benefit of NNs is that a model of the system or subject can be built directly from the data.

2. Network Properties

The topology of an NN refers to its framework as well as its interconnection scheme. The framework is often specified by the number of layers and the number of nodes per layer [7].

V. METHODOLOGY

In this method the coefficients of both the low-frequency sub-bands (LFSs) and the high-frequency sub-bands (HFSs) are fused in the same way, using RPCNNs with fuzzy adaptive linking strengths. The notation used is as follows: I = (A, B, C), where A and B denote the two source images and C the resulting fused image. The value B^I_{s,d}(i, j) denotes a coefficient of the sub-band of image I at scale s (= 1, . . . , S) and direction d, where S is the coarsest scale and (i, j) is the spatial location of the coefficient within the sub-band. The method can easily be extended to more than two images.

A. Fuzzy Adaptive Linking Strength

From the PCNN literature we know that the linking strength (β) reflects the characteristics of a pixel (coefficient) and should be adapted to the importance (significance) of that pixel (coefficient). Moreover, from the literature on HVS models it has been found that the contrast-enhancement mechanism and the incremental visual threshold can be modeled effectively as a nonlinear system which, following the HVS, decides which pixels are visually significant or insignificant with respect to their neighbors [8]. The uncertainty in judging the visual significance of a pixel (coefficient) and the subjectivity of the HVS response are handled successfully by fuzzy-logic systems [9], [10]. Keeping this in mind, we propose a novel fuzzy-based technique that adaptively sets the value of β by estimating each coefficient's significance (importance) in the corresponding image.

If a coefficient's local average energy (LAE) is large, or if its local information entropy (LIE) is high, the coefficient is more important in that image. We take LAE^I_{s,d}(i, j) and LIE^I_{s,d}(i, j) as the measures of a coefficient's local average energy and local information entropy, respectively. The LAE indicates the presence of edges, contours and textures in an image, while the LIE indicates the complexity or unpredictability of a local region; regions of high signal complexity tend to have flatter distributions and therefore higher entropy, and these are considered the important regions (edges, contours and texture information) of the image [11]. For a coefficient B^I_{s,d}(i, j), LAE^I_{s,d}(i, j) and LIE^I_{s,d}(i, j) are computed according to (6) and (7), respectively, over a window of size M × N centered on the coefficient:

LAE^I_{s,d}(i, j) = (1 / (M · N)) Σ_{m=1}^{M} Σ_{n=1}^{N} [B^I_{s,d}(m, n)]^2        (6)

LIE^I_{s,d}(i, j) = −Σ p(B^I_{s,d}(i, j)) log2 p(B^I_{s,d}(i, j))                    (7)

where p(B^I_{s,d}(i, j)) is the probability of occurrence of the coefficient B^I_{s,d}(i, j) within the window.
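To make (6) and (7) concrete, the sketch below computes the two local measures for a single sub-band with plain NumPy loops. The 3 × 3 window matches the experimental setup in Section VI; estimating p(·) with a small histogram over the window is our assumption, since the paper does not state how the probabilities are obtained.

    import numpy as np

    def local_average_energy(band, M=3, N=3):
        """Eq. (6): mean squared coefficient value in an M x N window
        centered on each position (zero padding at the borders)."""
        pm, pn = M // 2, N // 2
        sq = np.pad(band.astype(float) ** 2, ((pm, pm), (pn, pn)))
        lae = np.zeros(band.shape, dtype=float)
        for i in range(band.shape[0]):
            for j in range(band.shape[1]):
                lae[i, j] = sq[i:i + M, j:j + N].mean()
        return lae

    def local_information_entropy(band, M=3, N=3, bins=16):
        """Eq. (7): Shannon entropy of the coefficient distribution in an
        M x N window around each position (histogram estimate of p)."""
        pm, pn = M // 2, N // 2
        padded = np.pad(band.astype(float), ((pm, pm), (pn, pn)))
        lie = np.zeros(band.shape, dtype=float)
        for i in range(band.shape[0]):
            for j in range(band.shape[1]):
                window = padded[i:i + M, j:j + N]
                hist, _ = np.histogram(window, bins=bins)
                p = hist / hist.sum()
                p = p[p > 0]                  # 0 * log 0 is taken as 0
                lie[i, j] = -(p * np.log2(p)).sum()
        return lie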

The fuzzy membership values μ1(B^I_{s,d}(i, j)) and μ2(B^I_{s,d}(i, j)), corresponding to LAE^I_{s,d}(i, j) and LIE^I_{s,d}(i, j) respectively, are computed as follows:

μ1(B^I_{s,d}(i, j)) = 0,                                                     LAE^I_{s,d}(i, j) ≤ a1
                    = 2 [ (LAE^I_{s,d}(i, j) − a1) / (c1 − a1) ]^2,          a1 ≤ LAE^I_{s,d}(i, j) ≤ b1
                    = 1 − 2 [ (LAE^I_{s,d}(i, j) − c1) / (c1 − a1) ]^2,      b1 ≤ LAE^I_{s,d}(i, j) ≤ c1
                    = 1,                                                     LAE^I_{s,d}(i, j) ≥ c1      (8)

and

μ2(B^I_{s,d}(i, j)) = 0,                                                     LIE^I_{s,d}(i, j) ≤ a2
                    = 2 [ (LIE^I_{s,d}(i, j) − a2) / (c2 − a2) ]^2,          a2 ≤ LIE^I_{s,d}(i, j) ≤ b2
                    = 1 − 2 [ (LIE^I_{s,d}(i, j) − c2) / (c2 − a2) ]^2,      b2 ≤ LIE^I_{s,d}(i, j) ≤ c2
                    = 1,                                                     LIE^I_{s,d}(i, j) ≥ c2      (9)

where

b1 = average(LAE^I_{s,d}),
c1 = b1 + max(|b1 − max(LAE^I_{s,d})|, |b1 − min(LAE^I_{s,d})|),
a1 = 2 b1 − c1,

and similarly

b2 = average(LIE^I_{s,d}),
c2 = b2 + max(|b2 − max(LIE^I_{s,d})|, |b2 − min(LIE^I_{s,d})|),
a2 = 2 b2 − c2.

Here bk is the cross-over point, ck is the shoulder point and ak is the feet point of the S-type membership curve, k = 1, 2 (one for each of the two measures, LAE and LIE). The linking strength β^{s,d,I}_{i,j} corresponding to the coefficient B^I_{s,d}(i, j) is then computed as

β^{s,d,I}_{i,j} = max( μ1(B^I_{s,d}(i, j)), μ2(B^I_{s,d}(i, j)) ).                    (10)

Fig. 3. Block diagram of the MIF method.

B. Algorithm

Assuming that the medical images to be fused are co-registered, so that corresponding pixels are aligned, the salient steps of the MIF method are as follows:

1) Decompose the source medical images A and B using the NSCT to obtain the LFSs and HFSs.

2) Compute the linking strengths β^{s,d,I}_{i,j}, I = (A, B).

3) Feed the sub-band coefficients into the RPCNNs, generate the neuron pulses using equations (1)–(5), and compute the firing times

G^I_{s,d}(i, j) = Σ_{n=1}^{N} Y^{s,d,I}_{i,j}[n],  I = (A, B).

4) At n = N (the total number of iterations), determine the fused coefficient B^C_{s,d}(i, j) according to the fusion rule

B^C_{s,d}(i, j) = B^A_{s,d}(i, j)  if G^A_{s,d}(i, j) ≥ G^B_{s,d}(i, j),
               = B^B_{s,d}(i, j)  otherwise.                              (11)

5) Apply the inverse non-subsampled contourlet transform to the fused coefficients to obtain the final fused medical image.
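Putting steps 1)–5) together, a sketch of the overall flow is shown below. It assumes the helper functions sketched in the earlier sections are in scope; nsct_decompose and nsct_reconstruct are hypothetical stand-ins for an NSCT implementation (the paper itself uses the "pyrexc" and "pkva" filters with levels = [1, 2, 2] in MATLAB), and feeding the coefficient magnitudes to the RPCNN as the external stimulus is our assumption.

    import numpy as np

    def fuse_images(img_a, img_b, nsct_decompose, nsct_reconstruct,
                    levels=(1, 2, 2), n_iter=200):
        """Steps 1)-5): NSCT decomposition, fuzzy adaptive linking strengths,
        RPCNN firing times and coefficient selection by equation (11)."""
        bands_a = nsct_decompose(img_a, levels)            # step 1
        bands_b = nsct_decompose(img_b, levels)
        fused_bands = []
        for band_a, band_b in zip(bands_a, bands_b):
            # step 2: fuzzy adaptive linking strengths (Section V.A sketches)
            beta_a = fuzzy_linking_strength(local_average_energy(band_a),
                                            local_information_entropy(band_a))
            beta_b = fuzzy_linking_strength(local_average_energy(band_b),
                                            local_information_entropy(band_b))
            # step 3: accumulated firing times from the RPCNNs
            g_a = rpcnn_firing_times(np.abs(band_a), beta_a, n_iter)
            g_b = rpcnn_firing_times(np.abs(band_b), beta_b, n_iter)
            # step 4: keep the coefficient whose neuron fired more often, eq. (11)
            fused_bands.append(np.where(g_a >= g_b, band_a, band_b))
        return nsct_reconstruct(fused_bands, levels)       # step 5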

VI. EXPERIMENTAL RESULTS

To evaluate the performance of the technique, experiments were carried out on a group of medical images consisting of one image pair from one patient, combining the following modalities:

1. CT/MRI

The group contains one image pair, taken from one patient.

A. Experimental Setup

The technique was implemented in MATLAB R2009b, and the experiments were run on a laptop with a 2.10-GHz CPU and 3 GB of RAM. The PCNN parameters were set as k × l = 3 × 3, α_T = 0.2, V_T = 20, W = [0.7071 1 0.7071; 1 0 1; 0.7071 1 0.7071] and N = 200. The window size for computing the LAE and the LIE was set to 3 × 3.

The performance of the proposed scheme is evaluated using the following quantitative measures [12]:

1. Entropy

The entropy (EN) of an image measures its information content: the average number of bits needed to encode the intensities in the image. It is defined as

EN = −Σ_i P_i log2 P_i                                                    (12)

where P_i is the normalized histogram count of intensity i (as returned, for example, by imhist). If the EN of the fused image is higher than that of the source images, the fused image contains more information.
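For reference, a NumPy version of (12) might look as follows; the 256-bin histogram mirrors imhist's default for 8-bit grayscale images.

    import numpy as np

    def entropy(image, bins=256):
        """Eq. (12): Shannon entropy of the image intensity histogram."""
        hist, _ = np.histogram(image, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]                  # empty bins contribute nothing
        return float(-(p * np.log2(p)).sum())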

2. Spatial Frequency

Spatial frequency (SF) is the number of cycles that fall within one degree of visual angle. A grating with a high spatial frequency contains narrow bars, with many cycles per degree of visual angle; a grating with a low spatial frequency contains wide bars, with only a few cycles per degree. Because spatial frequency is defined in terms of visual angle, it changes with viewing distance: as the distance decreases, each bar casts a larger image, so the spatial frequency decreases.

In image fusion, SF is used to measure the overall activity and clarity level of an image; a higher SF indicates a better fusion result. It is defined from the column frequency (CF) and the row frequency (RF) as

SF = sqrt( (CF)^2 + (RF)^2 )                                              (13)

and indicates the overall activity level in the fused image.

3. Standard Deviation

The standard deviation (STD) measures the average distance of the data values from their mean. It is used here to measure image contrast: a higher STD indicates better contrast.
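The two remaining measures can be computed as in the sketch below. The row- and column-frequency definitions (RMS of the horizontal and vertical first differences) follow the usual formulation of SF, which the paper does not spell out explicitly.

    import numpy as np

    def spatial_frequency(image):
        """Eq. (13): SF from the row frequency (RF) and column frequency (CF)."""
        img = image.astype(float)
        rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))   # horizontal differences
        cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))   # vertical differences
        return float(np.sqrt(rf ** 2 + cf ** 2))

    def standard_deviation(image):
        """Contrast measure: standard deviation of the pixel intensities."""
        return float(np.std(image))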

This paper reports both subjective and objective results for one pair of source images. The CT image in Fig. 4(A) shows the bone structure, while the MRI image in Fig. 4(B) shows the soft-tissue information. The performance of the NFHF-NSCT method is also compared with that of other MGA tools such as the contourlet (CNT) and the curvelet (CVT) transforms.

B. Subjective Analysis

We evaluated the effectiveness of the method subjectively; the results are shown in Fig. 4.



Fig. 4. Visual results for one group of source images: (A) CT image and (B) MRI image; (C) and (D) low-pass sub-band images of the source images; (E)–(J) high-pass sub-band images of the source images; (K) fusion of the low-pass sub-band images and (L) fusion of the high-pass sub-band images; (M) final fused image.

Table I
PERFORMANCE EVALUATION OF THE SYSTEM

Image             SF         EN       STD
Source image A    0.4248     2.2249   80.9892
Source image B    0.3529     6.6328   90.9462
Fused image       121.8391   6.7695   91.0501

The detailed quantitative evaluations are given in Table I, which lists the SF, EN and STD of the source medical images and of the fused image obtained by the proposed technique. For each measure, the highest value in Table I is that of the fused image. The highest SF value indicates that the fused image obtained by our method has a higher activity and clarity level than the source images. Similarly, the highest EN value indicates that the fused image contains more information than the source images. Table I also shows that the STD of the fused image is higher than that of either source image, so our method produces a fused image with higher contrast. It is therefore evident from Table I that the fused image obtained by the proposed method is clearer, more informative and has higher contrast, which helps visualization and interpretation.

VII. CONCLUSION

We have presented a novel MIF method based on a hybrid neuro-fuzzy system in the NSCT domain. It exploits the advantages of the NSCT, the RPCNN and fuzzy logic to overcome the drawbacks of traditional MIF schemes and to integrate as much information as possible into the fused image. The linking strengths of the neurons in the RPCNNs are computed adaptively from the fuzzy characteristics of the image, which results in high-quality fused images. The experimental results show that the method preserves more useful information in the fused image, with higher spatial resolution and less deviation from the source images. Investigating different fusion rules, as well as new techniques for computing the parameters of the PCNN neurons, is left as future work.

ACKNOWLEDGMENT

We would like to thank Dr. Vinod Pande (Senior Manager, Diagnostics & Imaging, Sahyadri Hospitals, Pune, India) for his guidance and for the subjective evaluation of the fusion results. We would also like to thank Prof. V. N. Ghodke (our guide and professor at AISSMS IOIT, Pune, India) for his guidance and help.

REFERENCES

[1] Y. Yang, D. S. Park, S. Huang, and N. Rao, "Medical image fusion via an effective wavelet-based approach," EURASIP J. Adv. Signal Process., vol. 2010, pp. 44:1–44:13, 2010.

[2] G. Pajares and J. M. de la Cruz, "A wavelet-based image fusion tutorial," Pattern Recognit., vol. 37, no. 9, pp. 1855–1872, 2004.

[3] S. Das, M. Chowdhury, and M. K. Kundu, "Medical image fusion based on ripplet transform type-I," Progr. Electromagn. Res. B, vol. 30, pp. 355–370, 2011.

[4] S. Das and M. K. Kundu, "NSCT-based multimodal medical image fusion using pulse-coupled neural network and modified spatial frequency," Med. Biol. Eng. Comput., vol. 50, no. 10, pp. 1105–1114, 2012.

[5] R. Eckhorn, H. J. Reitboeck, M. Arndt, and P. Dicke, "Feature linking via synchronization among distributed assemblies: Simulations of results from cat visual cortex," Neural Comput., vol. 2, pp. 293–307, 1990.

[6] Q. Xiao-Bo, Y. Jing-Wen, X. Hong-Zhi, and Z. Zi-Qian, "Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain," Acta Automatica Sinica, vol. 34, no. 12, pp. 1508–1514, 2008.

[7] D. Srinivasa Rao, M. Seetha, and M. H. M. Krishna Prasad, "Comparison of fuzzy and neuro-fuzzy image fusion techniques and its applications," Int. J. Comput. Appl. (0975-8887), vol. 43, no. 20, April 2012.

[8] M. K. Kundu and S. K. Pal, "Thresholding for edge detection using human psycho-visual phenomena," Pattern Recognit. Lett., vol. 4, pp. 433–441, 1986.

[9] M. K. Kundu and S. K. Pal, "Automatic selection of object enhancement operator with quantitative justification based on fuzzy set theoretic measure," Pattern Recognit. Lett., vol. 11, pp. 811–829, 1990.

[10] H. Cheng and H. Xu, "A novel fuzzy logic approach to contrast enhancement," Pattern Recognit., vol. 33, no. 5, pp. 809–819, 2000.

[11] T. Kadir and M. Brady, "Saliency, scale and image description," Int. J. Comput. Vis., vol. 45, no. 2, pp. 83–105, 2001.

[12] L. Yang, B. L. Guo, and W. Ni, "Multimodality medical image fusion based on multiscale geometric analysis of contourlet transform," Neurocomputing, vol. 72, nos. 1–3, pp. 203–211, 2008.

[13] G. H. Qu, D. L. Zhang, and P. F. Yan, "Information measure for performance of image fusion," Electron. Lett., vol. 38, no. 7, pp. 313–315, 2002.


[14] P. Baldi and K. Hornik, "Neural networks and principal component analysis: Learning from examples without local minima," Neural Networks, vol. 2, no. 1, pp. 53–58, 1989.

[15] M. Á. Carreira-Perpiñán, "Continuous latent variable models for dimensionality reduction and sequential data reconstruction," Ph.D. thesis, University of Sheffield, 2001.

[16] G. W. Cottrell, P. W. Munro, and D. Zipser, "Image compression by back propagation: A demonstration of extensional programming," Advances in Cognitive Sciences, vol. 2, Norwood, NJ: Ablex, 1988.

[17] T. F. Cox and M. A. Cox, Multidimensional Scaling. London: Chapman & Hall, 1994.

[18] V. P. S. Naidu and J. R. Rao, "Pixel-level image fusion using wavelets and principal component analysis," Defence Science Journal, pp. 338–352, 2008.

AUTHORS PROFILE

Venkat N. Ghodke is an Assistant Professor in the Electronics and Telecommunication Department at AISSMS Institute of Information Technology, Pune. He has worked in various institutes as a UG and PG guide in the areas of image processing and embedded system design, and has published books and papers in international journals.

Vaishali Fegade is with the Department of Electronics & Telecommunication at AISSMS Institute of Information Technology, Pune.

Monika Barmukh is with the Department of Electronics & Telecommunication at AISSMS Institute of Information Technology, Pune.

Rajshree Bhoir is with the Department of Electronics & Telecommunication at AISSMS Institute of Information Technology, Pune.