Performance Analysis of Medical Image Watermarking Using DT CWT
and SVD Transforms With Steganalysis
1 S. Priya, 2 R. Varatharajan
1 Faculty of Electronics and Communication Engineering, Bharathiyar College of Engineering and Technology, Karaikal, UT, Thiruvettakudy, Puducherry, Tamil Nadu, India
2 Faculty of Electronics and Communication Engineering, Sri Ramanujar Engineering College, Chennai, India
Abstract: Invisible watermarking plays an important role in the medical field, embedding a payload such as a secret image or text into source medical images. In this paper, watermarking is performed on source medical images using the Dual Tree Complex Wavelet Transform (DT-CWT) and the Singular Value Decomposition (SVD) transform. The DT-CWT is applied over the cover (source) image in order to obtain the low and high frequency sub band coefficient matrices. Next, SVD is applied on the high frequency sub band coefficients obtained from the DT-CWT transformation. SVD is also applied on the payload, which is in the format of either an authentication image or a text image. The embedded image is obtained by a coefficient matrix multiplication method (on the SVD coefficients of both the cover and payload images), which also generates a key pattern using the coefficients of the SVD transformed sub bands. The performance of the proposed watermarking algorithm is analyzed in terms of Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE) and Information Entropy (IE).
Key Words: Watermarking, medical field, SVD, sub bands, payload
I. INTRODUCTION
The reproduction of data and multimedia information has become simple due to the rapid development of the internet as a digital medium. Hence, authorship must be established for every digital content that passes through the internet medium. In this regard, copyright protection is an important means of protecting the digital information of individuals. Watermarking is a copyright protection methodology in digital form. It inserts ownership information, such as the emblem of an organization or details about the owner, into the source media that is to be secured from other individuals. Watermarking is categorized into visible and invisible watermarking. In visible watermarking, the information embedded into the source data is visible to others; in invisible watermarking, the embedded information is not visible. Watermarking is also categorized into lossy and lossless watermarking based on the retrieval capability of the embedded information during the recovery process. In visible watermarking, the quality of the image is affected by the translucent property of the embedded contents. In recent years, numerous visible watermarking methodologies have been proposed. Most of the methods [1, 2] used permanent visible watermarking. Television media and digital libraries are the
International Journal of Pure and Applied Mathematics, Volume 119, No. 17, 2018, 891-902. ISSN: 1314-3395 (on-line version). Special Issue. URL: http://www.acadpubl.eu/hub/
best examples of visible watermarking. In invisible watermarking, the quality of the image is not affected by the embedded contents; this kind of watermarking is used for securing digital information. In both the visible and invisible cases, the watermarking process is categorized into spatial domain methods and transform domain methods. In a spatial domain method, the intensity or color values of the pixels in the source image are modified with respect to the pixels in the watermark image. Least Significant Bit (LSB) embedding is an example of such a method, and it is not robust against various attacks. In a transform domain method, the watermarking is performed on transform coefficients. A transformation such as the Discrete Wavelet Transform (DWT) or Singular Value Decomposition (SVD) is applied on the source and watermark images, the watermark image is embedded into the transformed image, and the inverse transform is applied to obtain the watermarked image. This approach is robust to various noises and attacks.
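The spatial domain idea mentioned above can be sketched in a few lines. The following is a minimal LSB illustration, not the method proposed in this paper; the function names and the bit-stream payload are hypothetical.

```python
import numpy as np

def lsb_embed(cover, watermark_bits):
    """Embed a bit stream into the least significant bit of cover pixels.

    cover: 2-D uint8 array; watermark_bits: 1-D array of 0/1 values.
    Illustrative names only; not the paper's DT-CWT/SVD method.
    """
    flat = cover.flatten()                 # flatten() returns a copy
    n = len(watermark_bits)
    # Clear the LSB of the first n pixels, then write the watermark bits.
    flat[:n] = (flat[:n] & 0xFE) | watermark_bits
    return flat.reshape(cover.shape)

def lsb_extract(stego, n):
    """Recover the first n embedded bits from the stego image."""
    return stego.flatten()[:n] & 1

cover = np.full((4, 4), 200, dtype=np.uint8)
bits = np.array([1, 0, 1, 1], dtype=np.uint8)
stego = lsb_embed(cover, bits)
print(lsb_extract(stego, 4))  # -> [1 0 1 1]
```

Because the payload lives in the lowest bit plane, any re-quantization, compression or noise destroys it, which is exactly the fragility the transform domain methods below are meant to avoid.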
Fig. 1 shows the generic architecture of the watermarking framework, which embeds the watermark into the cover image.
Fig. (1). Generic watermarking framework.
This paper is structured as follows: Section 2 reviews the conventional methodologies for invisible watermarking, Section 3 proposes a framework for invisible watermarking using multi resolution transforms, Section 4 discusses the experimental results and Section 5 concludes the paper.
II. LITERATURE SURVEY
Salama et al. (2016) proposed a hybrid fusion technique for watermarking the secret image into the source image. This methodology used the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) for a key based watermarking method in multi resolution transform mode. The imperceptibility of the watermarking technique was improved by adopting and combining these two transformation modes. J. Singh et al. (2016) used a wavelet transform based watermarking methodology. The DWT was applied on the source and watermark images and the LL sub band was obtained from both images. Next, the DCT was applied on the LL sub band images and a scaling factor was adjusted in order to embed the watermark image into the source image. The authors achieved a PSNR of 33.45 dB for their algorithm. Maninder Kaur et al. (2016) used both spatial domain and frequency domain methods for embedding the watermark image into the
source image. The authors used the least significant bit technique to embed the watermark image into the source image and achieved a PSNR of 37.6758 dB for their method.
Srilakshmi et al. (2016) used the DWT and SVD transforms for watermarking the secret image into the source image. The DWT was applied on the source image and SVD was applied on the secret image in order to obtain coefficients. These coefficients were embedded using a scaling factor by adjusting the values in the coefficient matrix. Tsung-Yuan et al. (2010) developed an algorithm for lossless visible watermarking in both the generic and translucent categories. The authors used a two-fold monotonic technique to overcome the pixel overflow problem during the watermarking process. The watermarking was done in the spatial domain, embedding the generic and translucent watermarks directly into the source image, and the proposed algorithm was tested on different standard benchmark source and watermark images. Li et al. (2010) proposed a saliency based watermarking methodology using multi resolution transformation. The authors used the wavelet transform for embedding the watermark image into the source image; the transform was applied on the source and watermark images and an orientation map was constructed on each sub band of the transformed images.
The following points are observed from the literature survey:
- Most of the watermarking techniques were based on DWT and SVD.
- No noise reduction technique was used during the watermarking process.
- Steganalysis was not performed in most of the conventional methods.
III. MATERIALS AND METHODS
A. Materials
In this paper, brain images are obtained from the open access BRATS (Multimodal Brain Tumor Segmentation) challenge dataset. This dataset contains brain images of various MRI modalities in various categories. It has 230 brain MRI images, categorized into normal images, which do not contain any abnormal tissues, and abnormal images, which contain abnormal lesions. In this paper, 100 brain images (70 from the normal category and 30 from the abnormal category) are used as source or cover images.
B. Methods
The proposed watermarking and extraction procedure using multi resolution transforms is detailed in Fig. 2. The DT-CWT is applied over the cover (source) image in order to obtain the low and high frequency sub band coefficient matrices. Next, SVD is applied on the high frequency sub band coefficients obtained from the DT-CWT transformation. SVD is also applied on the payload, which is in the format of either an authentication image or a text image. The embedded image is obtained by a coefficient matrix multiplication method (on the SVD coefficients of both the cover and payload images), which also generates the key pattern using the coefficients of the SVD transformed sub bands. This key is used at the receiver side in order to extract the payload.
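The key-based SVD embedding and extraction idea can be sketched as follows. This is a common additive singular-value variant shown for illustration only, not an exact transcription of the paper's coefficient-multiplication step; the function names, the 0.09 scaling factor reuse, and the choice of singular values as the key are assumptions.

```python
import numpy as np

ALPHA = 0.09  # scaling factor, echoing the 0.09 factor used in Algorithm 1

def svd_embed(cover_band, payload_band, alpha=ALPHA):
    """Blend the payload's singular values into a cover sub band.

    A classic SVD-watermarking sketch: the modified singular values are
    recombined with the cover's U and V matrices, and the cover's original
    singular values act as the extraction key."""
    Uc, Sc, Vct = np.linalg.svd(cover_band, full_matrices=False)
    _, Sp, _ = np.linalg.svd(payload_band, full_matrices=False)
    S_mix = Sc + alpha * Sp
    embedded = Uc @ np.diag(S_mix) @ Vct
    return embedded, Sc                    # (watermarked band, key)

def svd_extract(embedded_band, key, alpha=ALPHA):
    """Recover the payload's singular values at the receiver using the key."""
    _, Se, _ = np.linalg.svd(embedded_band, full_matrices=False)
    return (Se - key) / alpha

rng = np.random.default_rng(0)
cover = rng.random((8, 8))
payload = rng.random((8, 8))
emb, key = svd_embed(cover, payload)
Sp_rec = svd_extract(emb, key)
_, Sp_true, _ = np.linalg.svd(payload, full_matrices=False)
print(np.allclose(Sp_rec, Sp_true, atol=1e-6))  # exact recovery with the key
```

Without the key (the cover's singular values), the receiver cannot separate the payload contribution, which is what makes the scheme key-dependent.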
Fig. (2). Proposed watermarking and its extraction methodology using multi resolution transforms.
Fig. (3). (a) Cover image (b) Payload image.
Fig. 3(a) shows the cover or source image, obtained from the open access dataset, and Fig. 3(b) shows the payload image, which is either a text image or an authentication image.
Dual Tree Complex Wavelet Transform
In the DT-CWT, the cover image is passed through a low pass filter (LPF) and a high pass filter (HPF) as shown in Fig. 4. The filtered sub bands are down-sampled by a factor of 2. This process converts the cover image into low pass (approximate) and high pass (detail) coefficients and forms the first level of decomposition. For the next level of decomposition, the approximate coefficients are again passed through the HPF and LPF, and the same process is repeated to obtain the next level of filter coefficients. After each level of decomposition, the bandwidth obtained is half of the bandwidth of the previous level.
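The recursive halving described above can be demonstrated with the simplest possible two-channel filter bank. The Haar average/difference pair below is a stand-in for the DT-CWT's h0/h1 filters, used only to show how each level halves the length (bandwidth) of the approximate signal.

```python
import numpy as np

def haar_level(x):
    """One analysis level of a two-channel filter bank:
    low-pass (average) and high-pass (difference), each downsampled by 2.
    A Haar pair stands in for the DT-CWT analysis filters."""
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)  # approximate coefficients
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)  # detail coefficients
    return lo, hi

x = np.arange(16, dtype=float)
approx, detail = x, None
for level in range(4):                     # 4-level decomposition, as in the paper
    approx, detail = haar_level(approx)
    print(level + 1, len(approx))          # length halves at every level
```

Running this prints lengths 8, 4, 2, 1 for levels 1 through 4, mirroring the statement that each level retains half the bandwidth of the previous one.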
Fig. (4). Decomposition of cover image using 4-level DT-CWT.
In this paper, the cover image is subjected to the DT-CWT for up to four levels of decomposition. The dual tree approach uses two real DWTs: one acquiring the real part of the transform and the other the imaginary part. The real wavelet is associated with the upper tree and the imaginary wavelet with the lower tree. Each tree uses a different set of filters that satisfies the perfect reconstruction conditions. Here h0(n) is the low-pass and h1(n) the high-pass filter of the upper filter bank, and g0(n) and g1(n) are the low-pass and high-pass filters of the lower filter bank. The relation between the upper and lower filter banks is a half-sample delay, as described in Eqn. (1):

g0(n) ≈ h0(n − 1/2)    (1)
The DT-CWT decomposes the cover image into a complex wavelet function and a complex scaling function, represented as

ψ(t) = ψh(t) + j ψg(t)    (2)

φ(t) = φh(t) + j φg(t)    (3)

where ψh(t) and ψg(t) denote the real and imaginary wavelet functions, φh(t) and φg(t) denote the real and imaginary scaling functions, and t represents time. The real and imaginary parts of the complex wavelet function are expressed as

ψh(t) = √2 Σn h1(n) φh(2t − n)    (4)

ψg(t) = √2 Σn g1(n) φg(2t − n)    (5)

The real and imaginary parts of the complex scaling function are expressed as

φh(t) = √2 Σn h0(n) φh(2t − n)    (6)

φg(t) = √2 Σn g0(n) φg(2t − n)    (7)

Here h0(n) and g0(n) denote the low pass filters of the complex scaling function and h1(n) and g1(n) denote the high pass filters of the complex wavelet function.
The proposed watermarking methodology using DT-CWT is implemented with the MATLAB DT-CWT toolbox. The extracted fourth-level dual tree complex wavelet coefficients provide a compact representation of the cover image at different frequency bands, describing the distribution of its frequency components in both the time and frequency domains. Fig. 5 shows the sub band
images of cover image using 4-Level DT-CWT decomposition.
Fig. (5). Sub band images of cover image using 4-Level DT-CWT decomposition.
Fig. 6 shows the SVD transformed sub band images of the high frequency coefficients of the DT-CWT. The singular value component (S) carries mutual and optimum information compared with the other components. Hence, the 'S' components of both the cover and payload images are used to produce the embedded image.
Fig. (6). SVD components of Cover image (S, U and V).
Algorithm 1: Embedding procedure of cover image with payload
Inputs: High frequency coefficient sub bands of DT-CWT (w1 and w2)
Output: Embedded image
Start
  Determine the low pass scaling factor as LS = 0.09 * w1
  Determine the high pass scaling factor as HS = 0.09 * w2
  Apply SVD on the LS and HS coefficients:
    [U1 S1 V1] = SVD(LS)
    [U2 S2 V2] = SVD(HS)
  Find the Alpha1 factor as α1 = U1 * LS * V1
  Find the Alpha2 factor as α2 = U2 * HS * V2
  Apply the same procedure on the payload image in order to obtain its alpha factors.
  Perform matrix multiplication between the alpha factors of the source and payload images.
  Apply the inverse DT-CWT on the low frequency coefficients of the DT-CWT (low1) and the alpha factors:
    Embedded image = icplxdual2D(low1, α1, α2)
End
Fig. (7). (a) Cover brain image (b) Embedded cover brain image.
Fig. 7(a) shows the cover brain image and Fig. 7(b) shows the embedded cover brain image, which carries the payload information. It is clear from Fig. 7(a) and Fig. 7(b) that both images are perceptually similar to the reader. The coefficients U1, S1, V1 and U2, S2, V2 form the key pattern used at the receiver side. The reverse procedure is applied on the embedded image in order to obtain the payload image at the receiver side. Fig. 8 shows the payload image extracted from the embedded image.
Fig. (8). Reconstructed payload image.
Steganalysis
The performance of the proposed watermarking methodology is improved by detecting and removing impulse noise (salt and pepper noise) from the watermarked image through a steganalysis process. For analysis purposes, impulse noise with intensities ranging from 0.1 to 0.9 is added to the embedded image. This process is explained in the following steps.
Step 1:
Choose an m × m sub window of the watermarked image, where m is the number of row and column pixels in the sub window. The value of m should be odd so that an equal number of pixels lies on each side of the current pixel to be denoised.
Fig. (9). Illustration of selection of directional pixels.
Step 2:
Select the four directional patterns D1, D2, D3 and D4, shown in Fig. 9, that pass through the center pixel to be denoised.
Step 3:
Sort the pixels in each directional pattern after removing the center pixel to be denoised.
Step 4:
Remove the first and last pixels in each sorted directional pattern.
Step 5:
Determine the standard deviation of the pixels in each directional pattern and select the pattern with the lowest standard deviation.
Step 6:
Find the scalar value (S) between each pixel in the selected directional pattern and the center pixel to be denoised as

S = |Dop − Xcp|    (8)

where Dop denotes the pixels in the directional pattern and Xcp is the center pixel to be denoised.
Step 7:
Classify the center pixel as either noisy or noise free based on the following criterion:

Xcp = xor if S ≤ T·K;  Xcp = xno if S > T·K    (9)

where xor denotes a noise-free pixel and xno a noisy pixel. The threshold value is represented by T and K is the window size.
Step 8:
Apply the adaptive median filter (Jiang et al. 2010) on the center pixel if it is classified as a noisy pixel.
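The detection part of Steps 1 to 7 can be sketched as below. This is an illustrative approximation, not the paper's exact decision rule: the threshold T is an assumed value and the mean absolute deviation stands in for the scalar value of Eq. (8).

```python
import numpy as np

def is_noisy(window, T=20.0):
    """Classify the centre pixel of an odd-sized m x m window as noisy.

    Follows the directional-pattern steps above approximately;
    T is an assumed threshold, not a value from the paper."""
    m = window.shape[0]
    c = m // 2
    center = float(window[c, c])
    # Step 2: the four directional patterns through the centre pixel.
    directions = [
        window[c, :],                    # D1: horizontal
        window[:, c],                    # D2: vertical
        np.diag(window),                 # D3: main diagonal
        np.diag(np.fliplr(window)),      # D4: anti-diagonal
    ]
    best = None
    for d in directions:
        d = np.delete(d, c)              # Step 3: drop the centre pixel
        d = np.sort(d)[1:-1]             # Step 4: trim the extreme pixels
        if best is None or d.std() < best.std():
            best = d                     # Step 5: keep the flattest pattern
    # Step 6: deviation between the pattern pixels and the centre pixel.
    s = np.abs(best.astype(float) - center).mean()
    return s > T                         # Step 7: large deviation -> noisy

clean = np.full((5, 5), 120.0)
noisy = clean.copy()
noisy[2, 2] = 255.0                      # simulated salt impulse
print(is_noisy(clean), is_noisy(noisy))  # -> False True
```

A pixel flagged as noisy would then be replaced by an adaptive median filter (Step 8), while noise-free pixels are left untouched, so the payload in uncorrupted regions survives.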
IV. RESULTS AND DISCUSSION
In this paper, MATLAB R2014 is used as the simulation software for the proposed watermarking and extraction methodology using multi resolution transforms. The brain images obtained from the open access dataset are used as source or cover images: 100 brain images are used as cover images, and brain or authentication images are used as payload images. The performance of the proposed invisible watermarking system using multi resolution transforms is analyzed in terms of PSNR, MAE, MSE, IE, Bhattacharya Coefficient and Normalized Histogram Coefficient.
A. PSNR and MSE
These metrics evaluate the quality of the extracted payload image with respect to the original payload image using Eqs. (10) and (11):

PSNR = 10 log10( 255² / MSE )    (10)

MSE = (1 / (M N)) Σ_{i=1}^{M} Σ_{j=1}^{N} [ I(i, j) − Î(i, j) ]²    (11)

where M and N represent the width and height of the original payload image, respectively, I(i, j) is the original payload image and Î(i, j) is the extracted payload image.
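The two definitions above translate directly into code; the sketch below assumes 8-bit images (peak value 255).

```python
import numpy as np

def mse(original, extracted):
    """Mean squared error between original and extracted payload images."""
    o = original.astype(float)
    e = extracted.astype(float)
    return np.mean((o - e) ** 2)

def psnr(original, extracted, peak=255.0):
    """Peak signal to noise ratio in dB for 8-bit images."""
    m = mse(original, extracted)
    return float('inf') if m == 0 else 10 * np.log10(peak ** 2 / m)

a = np.full((4, 4), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110                      # one pixel differs by 10
print(round(mse(a, b), 2))         # -> 6.25
print(round(psnr(a, b), 2))        # -> 40.17
```

Note the inverse relationship: as MSE falls, PSNR rises, which is why a high PSNR is reported as evidence of imperceptible embedding.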
B. Mean Absolute Error (MAE)
It defines the average absolute error in the extracted payload image with respect to the original payload image, given as

MAE = (1 / (M N)) Σ_{i=1}^{M} Σ_{j=1}^{N} | I(i, j) − Î(i, j) |    (12)
C. Normalized Histogram Intersection Coefficient
(NHIC)
This performance metric counts the matching pixel values between two histograms. If the intensity probability distributions of the two images are P and Q respectively, the normalized histogram intersection coefficient is given by

NHIC(A, B) = Σ_i min( P(i), Q(i) )    (13)

where A is the original payload image and B is the extracted payload image. The coefficient ranges from 0 to 1, where 0 represents a complete mismatch and 1 an exact match.
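The histogram intersection of Eq. (13) can be computed as below; 256 bins are assumed for 8-bit images.

```python
import numpy as np

def nhic(a, b, bins=256):
    """Normalized histogram intersection between two 8-bit images:
    sum of bin-wise minima of the two normalized histograms.
    1.0 means identical intensity distributions, 0.0 means no overlap."""
    p, _ = np.histogram(a, bins=bins, range=(0, 256))
    q, _ = np.histogram(b, bins=bins, range=(0, 256))
    p = p / p.sum()                    # normalize counts to probabilities
    q = q / q.sum()
    return np.minimum(p, q).sum()

x = np.zeros((8, 8), dtype=np.uint8)
print(nhic(x, x))            # identical images -> 1.0
print(nhic(x, x + 255))      # disjoint intensities -> 0.0
```

Because the metric compares distributions rather than pixel positions, it is insensitive to spatial shifts; it measures distributional agreement only.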
D. Information Entropy (IE)
The information entropy (IE) is computed for the decoded original source image. Its maximum value is 8 for an 8-bit grey scale image, and the security level of the proposed system is high when the information entropy is close to 8:

IE = − Σ_{i=1}^{t} p(mi) log2 p(mi)    (14)

where p(mi) represents the probability of the symbol mi and t is the total number of symbols.
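Eq. (14) applied to an 8-bit image's intensity histogram can be sketched as follows; the two extreme test images show the 0-to-8-bit range the text refers to.

```python
import numpy as np

def information_entropy(img):
    """Shannon entropy (bits) of an 8-bit image's intensity histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # skip empty bins (0 log 0 := 0)
    return -float(np.sum(p * np.log2(p))) + 0.0  # +0.0 normalizes -0.0

flat = np.zeros((16, 16), dtype=np.uint8)              # single grey level
full = np.arange(256, dtype=np.uint8).reshape(16, 16)  # every level once
print(information_entropy(flat))   # -> 0.0
print(information_entropy(full))   # -> 8.0
```

An entropy near 8 means the intensity distribution is close to uniform, so the embedded image leaks little statistical structure about the payload.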
Table 1 shows the performance analysis of the proposed watermarking methodology over the set of 100 brain MRI source (cover) images with different payload images.
Table 1. Performance analysis of the proposed watermarking methodology.

  Performance evaluation metric   Experimental result
  PSNR                            53.32 dB
  MSE                             138.01
  MAE                             11.73
  NHIC                            0.001282
  IE                              7.3

The proposed watermarking algorithm achieves a PSNR of 53.32 dB, an MSE of 138.01, an MAE of 11.73, an NHIC of 0.001282 and an IE of 7.3, as depicted in Table 1.
Table 2. Performance comparison of the proposed method with conventional methods.

  Methodology                                    Year   PSNR (dB)
  Proposed method (DT-CWT+SVD)                   2017   53.32
  Hector Santoyo-Garcia et al. (Bayer method)    2017   15.96
  J. Singh et al. (DWT+DCT)                      2016   33.45
  Kaur et al. (spatial+frequency domain)         2016   37.675
Table 2 shows the performance comparison of the proposed watermarking methodology with the conventional watermarking methodologies of Singh et al. (2016) and Kaur et al. (2016). The proposed method achieves a PSNR of 53.32 dB, while Singh et al. (2016) achieved 33.45 dB and Kaur et al. (2016) achieved 37.67 dB. Hector Santoyo-Garcia et al. (2017) used the Bayer method for embedding watermark logos into the source image; the image quality in the retrieval process was affected by the instability and non-robustness of the method, so the authors achieved only 15.96 dB of PSNR. Singh et al. (2016) used a combination of the DWT and DCT transformations for embedding the watermark pixels into the source image pixels; the robustness of the watermarked image suffers when frequency domain transformation techniques are used alone, and this method obtained 33.45 dB of PSNR. Kaur et al. (2016) integrated spatial and frequency domain techniques for the embedding process.
V. CONCLUSION
In this paper, watermarking is performed on source medical images using the Dual Tree Complex Wavelet Transform (DT-CWT) and the Singular Value Decomposition (SVD) transform. The performance of the proposed watermarking algorithm is
analyzed in terms of PSNR, MSE and Information Entropy. The proposed watermarking algorithm achieves a PSNR of 53.32 dB and an IE of 7.3.
REFERENCES
[1] Dimitrios Simitopoulos et al. Robust Image Watermarking Based on Generalized Radon Transformations. IEEE Transactions on Circuits and Systems for Video Technology 2003; 13(8): 732-745.
[2] Hector Santoyo-Garcia, Eduardo Fragoso-Navarro, Rogelio Reyes-Reyes, Clara Cruz-Ramos, Mariko Nakano-Miyatake. Visible Watermarking Technique Based on Human Visual System for Single Sensor Digital Cameras. Security and Communication Networks 2017; 2017(7903198): 1-18.
[3] Hu Y, Kwong S, Huang J. An algorithm for removable visible watermarking. IEEE Transactions on Circuits and Systems for Video Technology 2006; 16(1): 129-133.
[4] Jiang J, Shen J. An Effective Adaptive Median Filter Algorithm for Removing Salt & Pepper Noise in Images. Symposium on Photonics and Optoelectronics, Chengdu, 2010; 1-4.
[5] Li ZQ, Fang T, Huo H. A saliency model based on wavelet transform and visual attention. Sci. China Inf. Sci. 2010; 53(4): 738-751.
[6] Lin Y, Chen YH, Chang CC, Lee JS. Contrast-adaptive removable visible watermarking (CARVW) mechanism. Image and Vision Computing 2013; 31(4): 311-321.
[7] Maninder Kaur, Nirvair Neeru. Digital Image Watermarking using New Combined Technique. International Journal of Computer Applications 2016; 145(2).
[8] Menze B, et al. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Transactions on Medical Imaging 2015; 34(10): 1993-2024.
[9] Minh Do, Martin Vetterli. The contourlet transform: an efficient directional multiresolution image representation. IEEE Transactions on Image Processing 2005; 14(12).
[10] Salama AS, Mokhtar MA. Combined technique for improving digital image watermarking. 2nd IEEE International Conference on Computer and Communications (ICCC), Chengdu, China, 2016; 557-562.
[11] Singh J, Patel AK. An effective telemedicine security using wavelet based watermarking. IEEE International Conference on Computational Intelligence and Computing Research (ICCIC), Chennai, India, 2016; 1-6.
[12] Srilakshmi P, Himabindu C. Image watermarking with path based selection using DWT & SVD. IEEE International Conference on Computational Intelligence and Computing Research (ICCIC), Chennai, India, 2016; 1-5.
[13] Jeyakokila SP, Sumathi P. On SoShear energy of a tree of diameter 4, part III. International Journal of Pure and Applied Mathematics 2017; 112(5): 9-21.
[14] Tsung-Yuan Liu, Wen-Hsiang Tsai. Generic lossless visible watermarking: a new approach. IEEE Transactions on Image Processing 2010; 19(5).