Comp Proceedings


Medical Image Fusion Based on Wavelet Transform

Ganesh J. Jagtap
Lecturer in Information Technology, SVPM's Institute of Technology & Engineering, Malegaon (Bk), Tal: Baramati, Dist: Pune 413102

Abstract: An analysis of the characteristics of CT medical images suggests a novel method for fusing such images using the discrete wavelet transform and independent component analysis. First, each CT image is decomposed by the 2-D discrete wavelet transform. Independent component analysis is then applied to the wavelet coefficients at each level to obtain independent components. Finally, wavelet reconstruction is used to synthesize a single CT medical image, which contains more complete and accurate detail information about different soft tissues such as muscles and blood vessels. By comparison, the efficiency of this method is better than that of the weighted-average method or the Laplacian pyramid method in the medical image fusion field. Nowadays, the study of multimodality medical images is very important because of increasing clinical demand; different benefits can be drawn from the information in images of different modalities.

1. INTRODUCTION

In recent years, the study of multimodality medical image fusion has attracted much attention with increasing clinical demand. Radiotherapy planning, for instance, often benefits from the complementary information in images of different modalities. Dose calculation is based on the computed tomography (CT) data, while tumor outlining is often better performed on the corresponding magnetic resonance (MR) scan. For medical diagnosis, CT provides the best information on denser tissue with less distortion, MRI provides better information on soft tissue with more distortion, and PET provides better information on blood flow and activity, though in general with low spatial resolution.
With more multimodality medical images available in clinical applications, the idea of combining images from different modalities has become very important, and medical image fusion has emerged as a new and promising research field. The general objective of image fusion is to combine the complementary information from multimodality images. Several fusion methods have been introduced in the literature, including statistical methods (Bayesian decision), fuzzy set methods, neural network methods, the Laplacian pyramid method and wavelet transform methods. It should be noted that fusion methods are application-dependent.

2. LITERATURE REVIEW

In signal processing theory, non-periodic and transient signals cannot easily be analyzed by conventional transforms, so an alternative mathematical tool, the wavelet transform, is used to extract the relevant time-amplitude information from a signal. Woei-Fuh Wang et al. [1] worked on PET-MRI image registration and fusion, providing a fused image that gives both physiological and anatomical information with high spatial resolution for use in clinical diagnosis and therapy. Gemma Piella [2] presents a new approach for assessing quality in image fusion by constructing an ideal fused image, using it as a reference and comparing it with the experimental fusion results; mean-squared measures are widely used for these comparisons. Paul Hill, Nishan Canagarajah and Dave Bull [3] introduced a novel application of the shift-invariant and directionally selective Dual-Tree Complex Wavelet Transform (DT-CWT) to image fusion, providing improved qualitative and quantitative results. Myungjin Choi, Rae Young Kim, Myeong-Ryong Nam and Hong Oh Kim [4] proposed the curvelet transform for image fusion; the curvelet-based image fusion method provides richer information in the spatial and spectral domains simultaneously. They performed Landsat ETM+ image fusion and found optimum fusion results.
Yu Lifeng, Zu Donglin, Wang Weidong and Bao Shanglian [5] proposed an integrated scheme to fuse medical images from different modalities. They first registered the images using an SVD-ICP (Iterative Closest Point) method and then evaluated the fusion results obtained under different selection rules. Qu Xiao associated the NSCT (Non-Subsampled Contourlet Transform) with PCNN (Pulse-Coupled Neural Networks) and employed it in image fusion: the spatial frequency in the NSCT domain is the input that motivates the PCNN, and NSCT coefficients with large firing times are selected as coefficients of the fused image [6].

3. PROBLEM DESCRIPTION AND SPECIFICATION

3.1. Problem Statement

Given two or more input images, apply the wavelet transform to them and combine the relevant information into a single image. The resulting image will be more informative than any of the input images.

3.2. Block Diagram

Figure 3.1. Image Fusion Scheme

3.3. Module-Wise Description

First, the CT and MRI images to be fused are decomposed by the discrete wavelet transform. The images should be decomposed to the same number of levels; these sub-band images constitute the details of the original images. Using the IDWT, the information from each image is then combined according to fusion rules, taking the significant components from each level.

3.3.1. Multiresolution Analysis

Although the time and frequency resolution problems are the result of a physical phenomenon (the Heisenberg uncertainty principle) and exist regardless of the transform used, it is possible to analyze any signal using an alternative approach called multiresolution analysis (MRA). MRA, as implied by its name, analyzes the signal at different frequencies with different resolutions: not every spectral component is resolved equally, as was the case in the Short-Time Fourier Transform (STFT). MRA is designed to give good time resolution and poor frequency resolution at high frequencies, and good frequency resolution and poor time resolution at low frequencies.
This approach makes sense especially when the signal at hand has high-frequency components for short durations and low-frequency components for long durations. Fortunately, the signals encountered in practical applications are often of this type. The wavelet transform is a powerful tool for multiresolution analysis. Multiresolution analysis requires a set of nested multiresolution sub-spaces, as illustrated in the following figure:

Figure 3.2. Nested Multiresolution Spaces

The original space V0 can be decomposed into a lower-resolution sub-space V1; the difference between V0 and V1 is represented by the complementary sub-space W1. Similarly, V1 can be decomposed into V2 and W2. The figure above shows a 3-level decomposition. For an N-level decomposition, we obtain N+1 sub-spaces: one coarsest-resolution sub-space VN and N difference sub-spaces Wi, i = 1, ..., N. Each digital signal in the space V0 can be decomposed into components in each sub-space; in many cases it is much easier to analyze these components than to analyze the original signal itself.

3.3.2. Filter Bank Analysis

The corresponding representation in frequency space is shown intuitively in the following graph: a pair of filters divides the whole frequency band into two sub-bands, and the same procedure is then applied recursively to the low-frequency band of the current stage. Thus it is possible to use a set of FIR filters to achieve the multiresolution decomposition above.

Figure 3.3. Multiresolution frequency bands

The effect of this shifting and scaling process is to produce a time-scale representation, as depicted in Figure 4. As can be seen from a comparison with the STFT, which employs a windowed FFT of fixed time and frequency resolution, the wavelet transform offers superior temporal resolution of the high-frequency components and superior scale (frequency) resolution of the low-frequency components.
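The recursive two-channel split described in this section can be sketched with the simplest filter pair, the Haar wavelet. This is a minimal pure-Python sketch under simplifying assumptions (even-length signal, no boundary extension); the function names are illustrative, and a real implementation would use longer FIR filter pairs such as db10, as in Section 4.2.

```python
# One level of a two-channel analysis filter bank (Haar): a low-pass /
# high-pass filter pair followed by downsampling by two splits the signal
# into an approximation (coarse) sub-band and a detail sub-band.
import math

def haar_analysis(signal):
    """Split an even-length signal into (approximation, detail) sub-bands."""
    s = 1.0 / math.sqrt(2.0)  # orthonormal Haar filter coefficient
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

def haar_synthesis(approx, detail):
    """Inverse step: perfectly reconstructs the original signal."""
    s = 1.0 / math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.extend([s * (a + d), s * (a - d)])
    return out

x = [4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 0.0, 2.0]
a1, d1 = haar_analysis(x)   # components of x in V1 and W1
a2, d2 = haar_analysis(a1)  # recurse on the low band only: V2 and W2
```

Recursing only on the low-frequency band, as in the last line, yields exactly the nested V/W sub-space structure of Figure 3.2.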
This time-scale representation is often beneficial, as it allows the low-frequency components, which usually give a signal its main characteristics or identity, to be distinguished from one another in terms of their frequency content, while providing excellent temporal resolution for the high-frequency components, which add the nuances to the signal's behavior. Unlike the STFT, in the wavelet transform the width of the wavelet function changes with each spectral component: at high frequencies the wavelet transform gives good time resolution and poor frequency resolution, while at low frequencies it gives good frequency resolution and poor time resolution.

3.3.3. Discrete Wavelet Transform

When analyzing signals of a non-stationary nature, it is often beneficial to be able to acquire a correlation between the time and frequency domains of a signal. The Fourier transform provides information about the frequency domain; however, time-localized information is essentially lost in the process. The problem with this is the inability to associate features in the frequency domain with their location in time, as an alteration in the frequency spectrum results in changes throughout the time domain. In contrast to the Fourier transform, the wavelet transform allows exceptional localization both in the time domain, via translations of the mother wavelet, and in the scale (frequency) domain, via dilations. The translation and dilation operations applied to the mother wavelet are performed to calculate the wavelet coefficients, which represent the correlation between the wavelet and a localized section of the signal. The wavelet coefficients are calculated for each wavelet segment, giving a time-scale function relating the wavelet's correlation to the signal. A wavelet family with mother wavelet ψ(x) consists of functions ψ_{a,b}(x) of the form

    ψ_{a,b}(x) = (1/√a) ψ((x - b)/a),   (1)

where b is the shift, or center, of ψ_{a,b}, and a is the scale. Alternatively, the scaling factor 1/a may be used.
If a > 1, then ψ_{a,b} is obtained by stretching the graph of ψ, and if a < 1 the graph of ψ is contracted. The value of a corresponds to the notion of frequency in Fourier analysis. Given a mother wavelet, an orthogonal family of wavelets can be obtained by properly choosing a = a0^m and b = n·b0, where m and n are integers, a0 > 1 is a dilation parameter, and b0 > 0 is a translation parameter. To ensure that the wavelets ψ_{a,b}, for fixed a, cover f(x) in a similar manner as m increases, choose b0 = a0^m. For rapid calculation of the wavelet coefficients, choose b0 = 1 and a0 = 2. Note that choosing b0 < 2^m gives a redundant wavelet family, whereas choosing b0 > 2^m leads to an incomplete representation of the transformed function; therefore b0 = 2^m is the optimal choice, and in fact leads to an orthogonal family. With these choices of a and b, the DWT of a function f(x) is given by

    Wf(m, n) = ∫ ψ_{m,n}(x) f(x) dx,   (2)

where

    ψ_{m,n}(x) = 2^(-m/2) ψ(2^(-m) x - n).   (3)

The inverse transform is given by

    f(x) = Σ_{m,n} Wf(m, n) ψ_{m,n}(x).   (4)

It should be noted that even though the integral defining Wf(m, n) is over an unbounded interval, it is effectively over a finite interval if the mother wavelet has compact support, and can therefore easily be approximated numerically. A function ψ(x) is a wavelet if it satisfies the standard admissibility conditions, in particular zero mean (∫ ψ(x) dx = 0) and finite energy.

3.4. Pixel-Based Image Fusion Method

In this fusion scheme the high-frequency sub-band signal of the fused image F is acquired by simply picking, at each position p, the coefficient of source image A or B with the larger absolute value:

    C_j(F, p) = C_j(A, p) if |C_j(A, p)| ≥ |C_j(B, p)|, else C_j(B, p).   (5)

In the lowest-resolution (approximation) sub-band, the fused signal C_j(F, p) is acquired by averaging C_j(A, p) and C_j(B, p) of A and B:

    C_j(F, p) = 0.5·C_j(A, p) + 0.5·C_j(B, p).   (6)

4. BASIC SYSTEM IMPLEMENTATION

4.1. Algorithm for Pixelbased1

1) Read the CT image.
2) Read the MRI image.
3) Resize both images to 256x256.
4) Decompose each image at one level using the DWT.
5) For each pixel, compare the absolute values of the corresponding coefficients and select the one with the higher value for the final sub-band.
6) Reconstruct the fused image using the IDWT (inverse discrete wavelet transform) with the same wavelet filter used for decomposition.

4.2. Resultant Images

Fusion at a single level using db10:

Figure 4.1. a) CT Image, b) MRI Image, c) Pixel-based Fusion Image.

4.3. Results

Table 4.1: Fusion at a single level using bior4.4

Method        Standard deviation   Entropy   OCE
Pixelbased1   59.340               6.7049    1.4771

5. CONCLUSION

In this project, different DWT-based methods for the fusion of CT and MRI images are compared. Standard deviation, entropy and overall cross-entropy (OCE) are the criteria used to evaluate the fusion results. The medical image fusion technique based on multiresolution wavelet decomposition offers an excellent trade-off between spectral and spatial information. Among all the methods, Pixelbased1 has the highest entropy. Gradient- and convolution-based methods also perform well, with high entropy, low OCE and good standard deviation, while Pixelbased2 gives good visual perception. Comparing the different wavelet filters applied for decomposition and reconstruction, 'db5', 'db7', 'db10' and 'bior4.4' perform well, since the reconstruction becomes better. Multilevel decomposition fusion gives better results at the cost of increased computation. The fused image provides complementary features that make diagnosis easier.

6. REFERENCES

[1] Zhiming Cui, Guangming Zhang, Jian Wu, "Medical Image Fusion Based on Wavelet Transform and Independent Component Analysis," 2009 International Joint Conference on Artificial Intelligence, IEEE Computer Society, 2009. DOI 10.1109/JCAI.2009.169.
[2] F. E. Ali, I. M. El-Dokany, A. A. Saad and F. E.
Abd El-Samie, "Curvelet Fusion of MR and CT Images," Progress in Electromagnetics Research C, Vol. 3, 215-224, 2008. Department of Electronics and Electrical Communications, Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt.
[3] H. Li, B. S. Manjunath and S. K. Mitra, "Multisensor image fusion using the wavelet transform," Graphical Models and Image Processing 57, 235-245, 1995.
[4] W. B. Pennebaker and J. L. Mitchell, JPEG: Still Image Data Compression Standard, Van Nostrand Reinhold, 1993.
[5] Paul Hill, Nishan Canagarajah and Dave Bull, "Image Fusion using Complex Wavelets," BMVC 2002.
[6] Aapo Hyvärinen and Erkki Oja, "Independent Component Analysis: Algorithms and Applications," Neural Networks, 13(4-5):411-430, 2000. Neural Networks Research Centre, Helsinki University of Technology, P.O. Box 5400, FIN-02015 HUT, Finland.
[7] D. A. Bluemke et al., "Detection of Hepatic Lesions in Candidates for Surgery: Comparison of Ferumoxides-Enhanced MR Imaging and Dual-Phase Helical CT," AJR 175, pp. 1653-1658, December 2000.
[8] W. D. Withers, "A rapid entropy coding algorithm," Technical report, Pegasus Imaging Corporation.
[9] C. S. Kidwell et al., "Comparison of MRI and CT for Detection of Acute Intracerebral Hemorrhage," JAMA, Vol. 292, No. 15, pp. 1823-1830, 2004.
[10] Robi Polikar, The Wavelet Tutorial.
[11] M. M. Daniel and A. S. Willsky, "A multiresolution methodology for signal-level fusion and data assimilation with applications to remote sensing," Proc. IEEE, 85:164-180, 1997.
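As a concrete illustration, the pixel-based fusion rules (5) and (6) of Section 3.4 and two of the evaluation criteria of Section 4.3 (standard deviation and entropy) can be sketched in pure Python. This is a minimal sketch, not the project's actual implementation; the coefficient lists and the small "image" below are hypothetical illustration data, not the CT/MRI data from the experiments.

```python
# Sketch of the pixel-based fusion rules and two evaluation criteria.
import math
from collections import Counter

def fuse_detail(ca, cb):
    """Rule (5): keep the detail coefficient with the larger absolute value."""
    return [a if abs(a) >= abs(b) else b for a, b in zip(ca, cb)]

def fuse_approx(ca, cb):
    """Rule (6): average the approximation (lowest-resolution) coefficients."""
    return [0.5 * a + 0.5 * b for a, b in zip(ca, cb)]

def std_dev(pixels):
    """Standard deviation of gray levels, an indicator of contrast."""
    mean = sum(pixels) / len(pixels)
    return math.sqrt(sum((p - mean) ** 2 for p in pixels) / len(pixels))

def entropy(pixels):
    """Shannon entropy (bits) of the gray-level histogram."""
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(pixels).values())

detail_ct  = [3.0, -0.5,  2.0, 0.1]   # hypothetical detail coefficients
detail_mri = [1.0, -4.0, -2.5, 0.0]
fused = fuse_detail(detail_ct, detail_mri)   # larger-magnitude value wins
print(fused)
print(fuse_approx([10.0, 20.0], [30.0, 40.0]))
```

A higher entropy of the fused image is read as more information retained, which is why the Pixelbased1 entropy of 6.7049 in Table 4.1 is taken as the best result.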
