
CHAPTER 2

RELATED WORK AND ANALYSIS ON SECURED COMPRESSION

2.1 INTRODUCTION

This chapter presents a detailed literature review, for which about 90 references were studied and analyzed. The limitations of each paper were examined so as to focus better on secure compression, and the individual papers are described below.

2.2 SURVEY ON MEDICAL IMAGE COMPRESSION

Hospitals and medical centers generate a large amount of medical image data every day, typically in the form of sequences of images, and storing this data requires considerable space. Hence, medical images of all kinds undergo compression. Accordingly, work on compression across different application areas and input types is discussed in this Section.

Sanchez et al. (2010) have proposed a 3-D scalable compression method for medical images based on optimized Volume Of Interest (VOI) coding, in which different remote clients can access compressed 3-D medical imaging data stored on a central server. The method performs a 3-D integer wavelet transform and applies a modified Embedded Block Coder with Optimized Truncation (EBCOT) with 3-D contexts to create a scalable bit-stream. Optimized VOI coding was obtained by an optimization technique that reorders the output bit-stream after encoding, so that the bits belonging to the VOI are decoded at higher quality. The bit-stream reordering procedure was based on a weighting model that accounts for the position of the VOI and the mean energy of the wavelet coefficients.
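Several of the schemes surveyed here (Sanchez et al. 2009, 2010; Wang et al. 2010; Srinivasan et al. 2010) rely on integer wavelet transforms because they are exactly invertible and therefore suitable for lossless coding. As a hedged illustration only, and not the transform of any particular paper, the sketch below shows one level of the widely used Le Gall 5/3 integer lifting wavelet in 1-D (even-length input and periodic boundary extension assumed):

```python
import numpy as np

def lgall53_forward(x):
    """One level of the Le Gall 5/3 integer lifting wavelet (1-D).

    Exactly invertible, which is what makes lossless wavelet coding
    possible. Assumes an even-length signal; boundaries are handled
    by periodic extension via np.roll.
    """
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict: detail = odd sample minus the floor-mean of its even neighbours.
    d = odd - ((even + np.roll(even, -1)) >> 1)
    # Update: approximation = even sample plus a rounded quarter of the details.
    s = even + ((np.roll(d, 1) + d + 2) >> 2)
    return s, d

def lgall53_inverse(s, d):
    """Undo the lifting steps in reverse order."""
    even = s - ((np.roll(d, 1) + d + 2) >> 2)
    odd = d + ((even + np.roll(even, -1)) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```

Applying lgall53_inverse(*lgall53_forward(x)) returns x exactly; this perfect-reconstruction property is what lossless wavelet coders depend on.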

Sanchez et al. (2009) have proposed a symmetry-based technique for scalable lossless compression of 3-D medical image data, which employs a 2-D integer wavelet transform to decorrelate the data and an intraband prediction method to reduce the energy of the sub-bands. A modified EBCOT, tailored to the characteristics of the data, encodes the residuals generated after prediction to provide resolution and quality scalability.

Zuo-Dian et al. (1999) have put forth a method for lossless medical image coding called Adaptive Predictive Multiplicative Autoregressive (APMAR) coding, which improves the accuracy of prediction for encoded image blocks. Each block was adaptively predicted by one of the seven predictors of the JPEG lossless mode or by a local-mean predictor, and the residual values were processed by a multiplicative autoregressive model with Huffman coding.
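For reference, the seven predictors of the JPEG lossless mode that APMAR chooses among are fixed functions of three causal neighbours. A minimal sketch follows; the per-block selection logic of APMAR itself is not reproduced here:

```python
def jpeg_lossless_predictors(a, b, c):
    """The seven fixed predictors of the JPEG lossless mode.

    a = left neighbour, b = neighbour above, c = neighbour above-left.
    An APMAR-style coder picks, per block, the predictor that minimizes
    residual energy and then encodes only the prediction residuals.
    """
    return {
        1: a,                  # previous pixel on the same row
        2: b,                  # pixel above
        3: c,                  # pixel above-left
        4: a + b - c,          # planar prediction
        5: a + (b - c) // 2,
        6: b + (a - c) // 2,
        7: (a + b) // 2,       # average of left and above
    }
```

The residual between a pixel and its predictor is small and sharply peaked around zero for natural images, which is what makes the subsequent entropy coding effective.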

Chen et al. (1994) have developed a model with multiple contexts and arithmetic coding to enhance compression performance. In the implementation, two quantizers with a large number of quantization levels were used, and the evaluation involved many Magnetic Resonance (MR) and Ultrasound (US) images. The use of multiple contexts improved compression performance by 25% to 30% for MR images and 30% to 35% for US images.

Nijim et al. (1996) have proposed an approach for the lossless compression of MR and US images and evaluated it against the lossless linear predictor and the lossless Joint Photographic Experts Group (JPEG) standard. Its advantages were a greatly reduced computational complexity and differentiator coefficients known to both encoder and decoder.

Wang et al. (2010) have proposed a 3-D medical image compression scheme based on a low-complexity Reversible Integer Karhunen-Loève Transform (RKLT), combined with an integer wavelet transform in the spatial domain to exploit the correlation in the data. As a result, the low-complexity RKLT provides comparable lossless compression performance.

Gruter et al. (2000) presented a decomposition method that develops and generalizes Morphological Subband Decomposition (MSD). It was shown that the Rank Order Polynomial Decomposition (ROPD) achieves a better lossless rate than the MSD, and the possibility of hybrid lossless compression was demonstrated using ultrasound images.

Das et al. (1993 and 1994) have put forth a method for lossless predictive coding of medical images using a 2-D space-varying least-squares model, and compared its performance with the existing Hierarchical Interpolation (HINT) technique. The authors also proposed a multiplicative autoregressive model for 2-D medical images; both schemes achieved higher compression rates.

Ramasamy et al. (1996) have developed a technique to compress medical data using two or more mutually non-orthogonal transforms. The signal is first resolved into sub-signals, and each sub-signal is compactly represented in a particular transform domain. The reconstructed signal samples were rounded to the nearest integer and the modified residual error was computed.

Wang and Huang (1996) jointly proposed a 3-D medical image compression method for Computed Tomography (CT) and MR images that uses a separable non-uniform 3-D transform, applying one filter bank within the 2-D slices and a second filter bank along the slice direction; this gave optimum performance on most image sets.

Dilmaghani et al. (2004) have developed an infrastructure for progressive transmission and compression of medical images, where an initial image is refined by adding detail not only in scale space but also in coefficient precision. The approach was based on the Embedded Zero Tree Wavelet (EZW) algorithm, which offers a tremendous amount of flexibility in bandwidth; in a radiology imaging environment, its performance was better than the standard JPEG algorithm.

An adaptive image coding algorithm was proposed by Kaur et al. (2006) for medical US images. The image coder, JTQVS-WS, was designed to unify two approaches of image-adaptive coding: Rate-Distortion (R-D) optimized quantizer selection and R-D optimal thresholding. The varying-slope quantization strategy yielded a substantial improvement in compression performance.

Roos et al. (1988 and 1991) have developed reversible data compression for angiographic and MR medical images, involving decorrelation followed by coding. The quality of decorrelation was measured in terms of entropy, which Huffman coding generally approximates to within a few percent; the compression ratio was around 3 for angiographic images of 8-9 bits/pixel. The authors also compared reversible interframe compression with decorrelation methods, evaluating them on sequences of coronary X-ray angiograms, ventricle angiograms, liver scintigrams and a video conferencing image sequence.
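Since Roos et al. judge decorrelation by the entropy it leaves behind, and Huffman coding approaches that entropy within a few percent, a quick way to gauge a decorrelator is to measure the first-order entropy of its residuals. A minimal sketch, assuming a memoryless source model:

```python
import numpy as np

def first_order_entropy(residuals):
    """Empirical first-order entropy in bits per sample.

    Under a memoryless model this is a lower bound on the average
    code length any Huffman (or arithmetic) coder can achieve on
    the decorrelated residuals.
    """
    _, counts = np.unique(np.asarray(residuals).ravel(), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

A decorrelator that drops the residual entropy from 8 bits/pixel to roughly 2.7 bits/pixel corresponds to the compression ratio of about 3 reported above.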

Lee et al. (1993) have developed Displacement Estimated Interframe (DEI) coding for X-ray, CT and MR images. To exploit the correlation between contiguous slices, a displacement-compensated difference image relative to the previously encoded slice is coded. The method gave a 5% improvement in compression ratio, and when the slice thickness decreased to 3 mm the gain increased to 10%.

A neural network architecture for medical images was proposed by Pangiotidis et al. (1996), introducing a Region Of Interest (ROI-JPEG) technique. The selected ROIs were coded at high quality, so high compression was achieved while retaining the essential image content.

Kassim et al. (2005) have proposed an advanced method for compressing 4-D medical images that combines a 3-D Integer Wavelet Transform (IWT) with 3-D motion compensation. The Set Partitioning In Hierarchical Trees (SPIHT) algorithm was used to code the wavelet coefficients, and it provided better 4-D medical image compression performance than JPEG 2000. Also for 4-D medical image compression, Sanchez et al. (2008) published a scheme based on Advanced Video Coding (AVC/H.264); by applying multi-frame motion compensation recursively, the redundancies in the data were reduced effectively.

Yang-Gi Wu et al. (2001 and 2002) published papers approaching medical image compression in different ways. The first paper (Yang-Gi Wu et al. 2001) stressed that medical images must be compressed before transmission and storage, and that more compression gain is achieved by transforming the image data from the spatial domain into the spectral domain. In the second paper (Yang-Gi Wu et al. 2002), the discrete cosine transform was used as a band-pass filter to decompose a sub-block into equal-sized bands; the high similarity between bands allowed the compression bit rate to be reduced effectively.

Rao et al. (1993) put forth a pulse compression technique to improve image quality in medical US, incorporated into a prototype imaging and digital image processing system.

Ramabadran and Chen (1992) jointly developed a model with multiple contexts for coding decorrelated pixels. Three reversible compression methods were used: Differential Pulse Code Modulation (DPCM), the Walsh-Hadamard Transform (WHT) and HINT for predictive, transform and multiresolution decorrelation respectively. Performance was significantly enhanced, by up to 40%, on MR, US and X-ray images.

Gupta et al. (2005) designed a model for despeckling medical images. The images were transformed into the multiscale wavelet domain, and it was shown that the subband coefficients of the log-transformed ultrasound image can be modeled with a Laplacian distribution. The speckled image was filtered and then compressed using the state-of-the-art JPEG 2000 encoder.

Zhe Chen et al. (2005) have developed a technique to compress Positron Emission Tomography (PET) image data in the spatial and temporal domains using Optimal Sampling Schedule (OSS) designs and cluster analysis methods, resulting in a data compression ratio greater than 80:1. Liang Shen and Rangayyan (1997) presented a method called Segmentation-based Lossless Image Coding (SLIC), which achieved average lossless rates of 1.6 up to 2.9 bits per pixel on a database of high-resolution medical images.

Riskin et al. (1990) have developed three techniques for variable-rate vector quantization of medical images. The first two were extensions of an algorithm that performs optimal pruning in tree-structured classification; the algorithm finds the subtree of a given Tree-Structured Vector Quantizer (TSVQ) with the lowest distortion at the same or a lower average rate. The third technique was the joint optimization of the vector quantizer and a noiseless variable-rate code. As a result, the subtree has variable depth, giving a natural variable-rate code.

Srinivasan et al. (2010) have developed a technique for Electroencephalogram (EEG) compression in which the EEG signal is arranged in matrix form. The algorithm employs the integer lifting wavelet transform as the decorrelator, combined with SPIHT as the source coder. The technique achieved better rate-distortion performance and lower encoding delay than one-dimensional (1-D) compression methods.

Li Tan et al. (2011) proposed bit-error-aware lossless compression algorithms for compressing and transmitting waveform data over noisy channels. The algorithms consist of two stages: the first applies linear prediction, and the second applies a purpose-built residue coder, bi-level block coding or interval entropy coding, together with a procedure for choosing the optimal coding parameters for the residue sequence produced by the first stage. This achieves a high compression ratio, and the recovered waveform has good signal quality when the bit error rate is 0.001 or less.

According to Wei Liu et al. (2010), lossless compression of encrypted sources can be achieved through Slepian-Wolf coding. For real-world sources such as images, the key to improving compression efficiency is how well the source dependency is exploited; a Slepian-Wolf decoder relying on Markov properties does not work well for grayscale images. They therefore suggested a progressive method that compresses the encrypted image progressively in resolution, so that the decoder can first recover a low-resolution version of the image and then decode the rest by learning local statistics from it. Good performance was ensured both theoretically and experimentally.

Ruedin et al. (2010) suggested a nonlinear lossless compressor designed specifically for multispectral images consisting of a few bands with greater spatial than spectral correlation. A 2-D integer wavelet transform was used to reduce spatial correlation, and linear inter-/intraband predictions were performed by analyzing different models of the statistical dependences of the wavelet coefficients. The resulting compressor, CLWP, performed considerably better than state-of-the-art lossless compressors.

Tsung-Han et al. (2010) put forward a VLSI-oriented Fast, Efficient, Lossless Image Compression System (FELICS) algorithm, which combines a simplified adjusted binary code with the Golomb-Rice code using storage-less k-parameter selection. The main objective was a lossless compression method for high-throughput applications: the binary code significantly reduces arithmetic operations and improves processing speed. Experimental results showed that the proposed architecture achieves superior parallelism and power efficiency compared with other works, characteristics needed for high-speed lossless compression.
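The Golomb-Rice code at the heart of FELICS is attractive in hardware because encoding needs only shifts and masks. A minimal sketch of the encoder for a non-negative integer with Rice parameter k follows; the adjusted binary code and k-selection of FELICS itself are not reproduced here:

```python
def golomb_rice_encode(n: int, k: int) -> str:
    """Golomb-Rice codeword for a non-negative integer n with parameter k.

    The quotient n >> k is sent in unary (q ones then a zero) and the
    remainder in k plain binary bits; only shifts and masks are needed,
    which is why FELICS-style hardware coders favour this code.
    """
    q, r = n >> k, n & ((1 << k) - 1)
    remainder = format(r, "b").zfill(k) if k > 0 else ""
    return "1" * q + "0" + remainder
```

For example, golomb_rice_encode(9, 2) yields "11001": quotient 2 in unary, remainder 1 in two bits. Choosing k near the base-2 logarithm of the mean residual magnitude keeps codewords short.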

Suzuki et al. (2010) introduced a hardware-friendly integer Discrete Cosine Transform (IntDCT) that can be used for both lossy and lossless coding. The IntDCT is built by direct lifting of the DCT and its inverse, so every lifting block can reuse an existing DCT device; only a small Side Information Block (SIB) is additionally required.

Mehboob et al. (2010) presented a novel lossless data compression device that extends the enterprise network to branch offices by integrating multiple communication technologies: Gigabit Ethernet, STM1/STM4/STM16 interfaces for WAN connectivity, a fiber channel interface for the storage area network, and 10G Ethernet for enterprise network connectivity. The device implements a new architecture that realizes the LZ77 lossless data compression algorithm in hardware.

Auli-Llinas et al. (2009) introduced a new estimator to approximate the distortion produced by the successive coding of transform coefficients in bit-plane image coders. Based on the distribution of coefficients within the quantization intervals, it is able to approximate the distortion with very high accuracy.

2.3 SURVEY ON ENCRYPTION AND ENCRYPTED IMAGE COMPRESSION

In this Section, various image encryption algorithms, encrypted image compression algorithms and Visual Cryptography (VC) are discussed.

Lukac et al. (2004) introduced a secret sharing scheme suitable for encrypting colour images, in which the required colour shares were obtained during encryption by operating at the bit level. Perfect reconstruction was achieved by the decryption module using only logical operations.

Chung-Ping et al. (2005) have presented two approaches for integrating encryption with multimedia compression systems: selective encryption, and modified entropy coders with multiple statistical models. They examined the limitations of selective encryption using cryptanalysis, provided examples where selective encryption succeeds, and gave rules for determining whether selective encryption is suitable for a given compression system. They then proposed a further approach that turns entropy coders into encryption ciphers using multiple statistical models, with specific schemes built on the Huffman coder and the QM coder. Security was shown to be achieved without sacrificing compression performance or computational speed, and the modified entropy coding methodology can be applied to most modern compressed audio/video formats such as MPEG audio, MPEG video and JPEG/JPEG2000 images.

Martin et al. (2009) have presented a biometric encryption system that addresses the privacy concern raised by deploying face recognition technology in real-world systems. In particular, they focused on a self-exclusion scenario (a special application of a watch-list) and proposed a novel design of a biometric encryption system deployed with a face recognition system under constrained conditions. From a system perspective, they investigated issues ranging from image preprocessing and feature extraction to cryptography, error-correcting coding/decoding, key binding and bit allocation. In simulation studies on the CMU PIE face database, an important observation was that the biometric encryption module tended to significantly reduce the false acceptance rate with only a marginal increase in the false rejection rate.

Shujun Li et al. (2008) examined a new image scrambling (i.e., encryption) scheme without bandwidth expansion, based on two-dimensional discrete prolate spheroidal sequences. A comprehensive cryptanalysis of the scheme showed that it is not sufficiently secure against various cryptographic attacks, including ciphertext-only, known/chosen-plaintext and chosen-ciphertext attacks. The detailed cryptanalytic results suggested that the scheme could be used for perceptual encryption but not to provide content protection for digital images.

InKoo Kang et al. (2011) have designed an approach based on Visual Information Pixel (VIP) synchronization and error diffusion for a colour visual cryptography encryption method that produces meaningful colour shares with high visual quality. VIP synchronization retains the positions of pixels carrying visual information of the original images throughout the colour channels, while error diffusion generates shares that are pleasant to the human eye. Comparisons with previous approaches showed the superior performance of the new method.

Rajendra Acharya et al. (2001) used digital watermarking methods to interleave patient information with medical images, reducing storage and transmission overheads. The text data were encrypted before interleaving with the images to ensure greater security, and graphical signals were compressed and subsequently interleaved with the image. Differential pulse code modulation and adaptive delta modulation techniques were employed for data compression and encryption, and results were tabulated for a specific example.

Bourbakis et al. (2003) have devised a SCAN-based method for image and video compression-encryption-hiding, with application to digital video on demand. The software SCAN implementation running on a Pentium IV took about 1 second for 25 video frames; as an alternative, they developed an FPGA-based architecture that operates in real time.

Bowley et al. (2011) have discussed a property of sparse representations relating to their capacity for information storage, and showed that this property can be used for an application they termed Encrypted Image Folding. The proposed procedure is realizable through any suitable transformation; in particular, the paper illustrated the approach using the Discrete Cosine Transform and a combination of redundant Cosine and Dirac dictionaries.

Schonberg et al. (2008) presented a framework for compressing encrypted media such as images and videos. The problem is plain: encryption masks the source, rendering traditional compression algorithms ineffective. By framing the problem as one of distributed source coding, they had shown in prior work that encrypted data are as compressible as unencrypted data, but two major challenges stood between those theoretical results and practice. The first was the development of models that capture the underlying statistical structure and are compatible with the framework; the second is that, since the source is masked by encryption, the compressor does not know what rate to target. They tackled these issues by first developing statistical models for images and then extending them to videos, where their techniques really gained traction. Next, they developed an adaptive protocol for universal compression, showed that it converges to the entropy rate, and demonstrated a complete implementation for encrypted video.

Shortt et al. (2006) presented a novel method for compression and encryption of three-dimensional (3-D) objects, combining massive parallelism with the flexibility offered by digital electronics. Encrypted real-world 3-D objects were captured using phase-shift interferometry, combining a phase mask with Fresnel propagation. Compression was achieved by non-uniformly quantizing the complex-valued encrypted digital holograms using an artificial neural network, and decryption was performed by displaying the encrypted hologram and phase mask in an identical configuration.

Cheng et al. (2000) studied the prevailing systems for secure transmission and storage of multimedia and images, and proposed partial encryption, which combines compression and encryption. Partial encryption was applied to several image and video compression algorithms; for 512×512 images compressed by the SPIHT algorithm, less than 2% of the data needed to be encrypted. The results were similar for video compression, giving a significant reduction in encryption and decryption time. The proposed partial encryption schemes were fast, secure, and did not reduce the compression performance of the underlying compression algorithm.

Sudharsanan (2005) observed that, although several methods had been proposed for encrypting images with shared-key mechanisms, they applied primarily to non-compressed images in either monochrome or colour domains. A shared-key algorithm was therefore proposed that works directly in the JPEG domain, enabling shared-key image encryption for a variety of applications. The scheme operates directly on the quantized DCT coefficients, and the resulting noise-like shares are also stored in the JPEG format; decryption is lossless, preserving the original JPEG data. Experiments indicated that each share image is approximately the same size as the original JPEG image, retaining the storage advantage of the JPEG compression standard. Three extensions were also described: one to improve the random appearance of the generated shares, another to obtain shares with asymmetric file sizes, and a third to generalize the scheme to n > 2 shares.

Servetti et al. (2002) have introduced techniques for providing cryptographic security for mobile phone calls, developing two partial encryption techniques: a low-protection scheme and a high-protection scheme. The high-protection scheme, based on encrypting about 45% of the bit stream, achieved content protection comparable to full encryption; for the low-protection scheme, encrypting as little as 30% of the bit stream virtually eliminated intelligibility as well as most of the remaining perceptual information.


Qiu-Hua and Fu-Liang (2002) explored the concept of Blind Source Separation (BSS) to add another encryption level on top of existing encryption methods for image cryptosystems. The transmitted images were covered with a noise image by specific mixing before encryption and then recovered through BSS after decryption.

Al Jabri and Al-Asmari (1996) have proposed methods that allow high encryption and decryption rates, simple key management, and the use of widely available encryption algorithms such as the Data Encryption Standard (DES). The effects of channel noise on the encrypted data were also considered, and a modification of conventional methods to combat channel errors was proposed and evaluated.

Zhou et al. (2001) presented a method for the authenticity and integrity of digital mammography (AIDM), capable of meeting the authenticity and integrity requirements for mammography image transmission. AIDM consists of four modules:

1. Image preprocessing: segment the breast pixels from the background and extract patient information from the Digital Imaging and Communications in Medicine (DICOM) image header.

2. Image hashing: compute an image hash value of the mammogram using the MD5 hash algorithm.

3. Data encryption: produce a digital envelope containing the encrypted image hash value (the digital signature) and the corresponding patient information.

4. Data embedding: embed the digital envelope into the image by replacing the least significant bit of a random pixel of the mammogram with one bit of the digital envelope bit stream, repeating for all bits in the stream.

They demonstrated that AIDM is an effective method for image authenticity and integrity in tele-mammography applications.
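Module 4 above is plain LSB substitution. The sketch below is a generic illustration of embedding and extracting a digital-envelope bit stream via LSBs; the pseudo-random pixel order driven by a shared seed is an assumption made for the sketch, not the pixel-selection rule of AIDM itself:

```python
import numpy as np

def embed_lsb(image, payload_bits, seed):
    """Hide a bit stream in the LSBs of pseudo-randomly chosen pixels.

    image: 2-D uint8 array; payload_bits: sequence of 0/1 values.
    A shared seed fixes the pixel order, so the receiver can walk
    the same pixels to recover the envelope bits.
    """
    stego = image.copy().ravel()
    order = np.random.default_rng(seed).permutation(stego.size)
    idx = order[: len(payload_bits)]
    bits = np.asarray(payload_bits, dtype=np.uint8)
    stego[idx] = (stego[idx] & 0xFE) | bits   # clear the LSB, then set it
    return stego.reshape(image.shape)

def extract_lsb(stego, n_bits, seed):
    """Recover n_bits envelope bits using the same seed."""
    order = np.random.default_rng(seed).permutation(stego.size)
    return (stego.ravel()[order[:n_bits]] & 1).astype(np.uint8)
```

Because only LSBs change, the stego mammogram differs from the original by at most one gray level per touched pixel, which keeps the embedding visually imperceptible.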

Monga et al. (2007) have advocated the use of Non-negative Matrix Factorization (NMF) for image hashing. Their work was motivated by the fact that standard rank-reduction techniques, such as QR and singular value decomposition, produce low-rank bases that do not respect the structure (i.e., the non-negativity) of the original image data. Receiver operating characteristic analysis over a large image database revealed that the proposed algorithms significantly outperform existing approaches to image hashing.

O'Gorman et al. (1998) presented an approach to authenticating photo-ID documents that relies on pattern recognition and public-key cryptography and has security advantages over the physical mechanisms that currently safeguard cards. The pattern-recognition component is based on a photo signature: a concise representation of the photo image on the document, stored in a database for remote authentication or in encrypted form on the card for stand-alone authentication. They described the method and presented results of testing a large database of images for photo-signature matching in the presence of noise.

Qamra et al. (2005) discussed two important aspects of image replica detection systems: distance functions for similarity measurement, and scalability. Experimental evaluations showed superior performance compared with DPF and other distance functions. The authors then addressed the use of these perceptual distance functions to detect replicas efficiently in large image data sets; indexing is made challenging by the high dimensionality and the non-metric nature of the distance functions. To solve this, they proposed Locality Sensitive Hashing (LSH) for indexing images, and empirical studies demonstrated good performance even on a very large database of diverse images.

Dang and Chau (2000) presented a novel scheme combining the Discrete Wavelet Transform (DWT) for image compression with the Data Encryption Standard (DES) block cipher for image encryption. Simulation results indicated that the proposed method enhances the security of online image transmission and improves the transmission rate.

Bartolini et al. (2001) published a novel algorithm suitable for VS authentication, applied it, and discussed the results obtained. Ran-Zan et al. (2000) published a data hiding technique for the storage and transmission of important data: the data are embedded in the moderately significant bits of an image, and a global substitution step followed by a local pixel adjustment process reduces the resulting image degradation. Experimental results showed that the visual quality of the resulting image is acceptable.

Jiwu Huang et al. (2000) discussed a new embedding strategy for watermarking based on a quantitative analysis of the magnitudes of the DCT components of host images. They argued that more robustness can be achieved by embedding watermarks in the DC components, since DC components have a larger perceptual capacity than AC components, and on this basis presented an adaptive watermarking algorithm. Experimental results confirmed that invisible watermarks embedded with the proposed algorithm are very robust.

Gupta et al. (2007) presented the attack graph, a visual aid used to document the known security risks of a particular architecture; in short, it captures the paths attackers could use to reach their goals. The graph's purpose is to document the risks known at the time the system was designed, helping architects and analysts understand the system and find good trade-offs that mitigate those risks. Once the risks are identified and understood in this way, the design can be refined iteratively until the risk becomes acceptable.

Engel et al. (2008) provided an assessment of two lightweight encryption schemes for fingerprint images based on a bit-plane representation of the data. They demonstrated a low-complexity attack against a scheme recently proposed in the literature, exploiting one of several weaknesses found; a second scheme was evaluated with respect to two fingerprint recognition systems, and recommendations for its safe use were given.

Lukac et al. (2005) introduced a Color Filter Array (CFA) image indexing approach for cost-effective consumer electronics with image capturing capability. Using a secret sharing technique, the proposed method indexes captured images directly in single-sensor digital cameras, mobile phones and pocket devices by embedding metadata information in the CFA domain. The metadata are used to determine ownership and capturing-device identification numbers, and to provide time and location information. After the metadata are embedded into the CFA image, the subsequent demosaicking step reconstructs a full-colour RGB image with excellent visual quality. The metadata can be extracted from the CFA images or, alternatively, recovered from the demosaicked images in personal image databases using PC software commonly provided by camera manufacturers or conventional public image database tools. The uniqueness and efficiency of the approach were demonstrated using a common Bayer CFA based imaging pipeline; however, the approach is suitable for other, non-Bayer CFA patterns as well.

Kundur et al. (2008) discussed the issues in designing secure and private systems for distributed multimedia sensor networks. They introduced a heterogeneous, lightweight framework for trusted visual computing, specially adapted for sensor networks that carry distributed multimedia content. Protection issues within this architecture were analyzed, leading to open research problems including secure routing in emerging free-space optical sensor networks and distributed privacy for vision-rich sensor networking. Proposed solutions to these problems were presented, demonstrating the necessary interaction among signal processing, networking and cryptography.

Yu Chen Hu (2003) provided a novel image hiding scheme capable of hiding multiple grey-level images inside another grey-level cover image. To reduce the volume of the secret images to be embedded, a vector quantization scheme was employed to encode them; the compressed messages were then encrypted by the DES cryptosystem to ensure security, and the encrypted message was hidden in the cover image using a greedy least-significant-bit substitution technique.

Sobhy et al. (2000) described an application of chaotic algorithms to sending computer messages. Communication was achieved through an e-mail channel, although other transmission channels could also be used, and text, image or recorded voice messages could be transmitted. The algorithm had a degree of security orders of magnitude higher than systems based on physical electronic circuits.

Siu-Kei et al. (2009) presented a novel video encryption technique that achieves partial encryption, in which a degraded ("annoying") video can still be reconstructed even without the security key. The scheme embeds the encryption at the transformation stage of the encoding process: the authors developed a number of new unitary transforms demonstrated to be as efficient as the well-known DCT, and partial encryption is achieved by alternately applying those transforms to individual blocks according to a pre-designed secret key. The security of the scheme was analyzed against various common attacks, and experimental results based on H.264/AVC were presented.

Ean-Wen et al. (2007) proposed and implemented a medical record exchange model. In their study, Exchange Interface Servers (EISs) were designed for hospitals to manage information communication through intra- and inter-hospital networks linked to a medical records database, with an index service centre responsible for managing the EISs and publishing addresses and public keys. The capacity of the model was estimated at about 4000 patients/h over a 1-MB network backbone, which corresponds to about 4% of the total outpatients in Taiwan.

Ran Tao et al. (2010) proposed a novel method to encrypt an image with multiple orders of the Fractional Fourier Transform (FRFT). The encrypted image is obtained as the summation of different-order inverse discrete FRFTs of the interpolated sub-images, and the original image can be perfectly recovered using the linear system constructed from the fractional Fourier domain analysis of the interpolation. The method can be applied to the encryption of two or more images. With the transform orders of the FRFT serving as secret keys, the method offers a larger key space than existing FRFT-based security systems, and experimental results verified that the decryption is highly sensitive to deviations in the transform orders.

Jong-Yun et al. (2000) have presented a new and simple image encryption scheme combined with an optical decoding technique based on the principle of interference. An original image is encoded into two phase-valued images; the interference between the two produces a binary image with a two-level intensity value. The performance of the proposed technique was evaluated using computer simulations and optical experiments.

Yeo and Guo (2000) have proposed an efficient hierarchical chaotic image encryption algorithm and its VLSI architecture. Based on a chaotic system and a permutation scheme, all the partitions of the original image are rearranged and the pixels in each partition are scrambled. The properties of high security, parallel and pipeline processing, and freedom from distortion were analyzed. To implement the algorithm, a VLSI architecture with pipeline processing, real-time processing capability and low hardware cost was designed, and an FPGA realization of its key modules was given. Finally, the encrypted image was simulated and its fractal dimension computed to demonstrate the effectiveness of the proposed scheme.

Xinpeng (2011) proposed a novel reversible data hiding scheme for encrypted images, in which additional data are embedded by modifying a small proportion of the encrypted data. Using the data-hiding key, and with the aid of the spatial correlation in natural images, the embedded data and the original image were successfully recovered.

Holtz et al. (1990) proposed concepts from artificial intelligence and learning theory for a knowledge-based True Information TV (TITV) system, in which stored images are learned prior to transmission. In learn mode, an image from a television camera is stored into an encoding list, which is then copied to a retrieval list at the receiver. The resulting transmission bandwidth depends not on screen size, resolution or scanning rate, but only on novelty and movement: the moving portion of the input image is compared with previously learned image sections to generate super-pixel codes for transmission, where a super pixel may represent an image section of any size, from a single pixel to an entire image.

Daoshun et al. (2011) put forth a (2, n)-VSS method that allows a relative shift between the shares in the horizontal and vertical directions. When the shares are perfectly aligned, the contrast of the reconstructed image equals that of the traditional VSS scheme; when there is a shift, the average contrast of the reconstructed image is higher than in the traditional VSS scheme, and the scheme still works when very little shape redundancy is present in the image. The trade-off is a larger pixel expansion. The basic building block of the scheme is the duplication and concatenation of certain rows or columns of the basis matrices, a seemingly simple but very powerful construction principle that can easily be used to create more general (k, n) schemes.

Chih-Ming et al. (2007) studied the cheating problem in VC and extended VC, considering attacks by malicious adversaries who may deviate from the scheme in any way. They presented three cheating methods, tested them by attacking existing VC or extended VC schemes, and improved one cheat-preventing scheme. They also proposed a generic method that converts a VCS into another VCS with the property of cheating prevention, where the overhead of the conversion is near optimal in both contrast degradation and pixel expansion.

Zhongmin et al. (2009) proposed a Halftone Visual Cryptography (HVC) construction method based on error diffusion. The secret image is concurrently embedded into binary-valued shares while those shares are halftoned by error diffusion, the workhorse of halftoning algorithms; error diffusion has low complexity and provides halftone shares with good image quality. A reconstructed secret image, obtained by stacking qualified shares together, does not suffer from cross interference of the share images. Factors affecting share image quality and the contrast of the reconstructed image were discussed, and simulation results provided several illustrative examples.

Feng et al. (2011) proposed a construction of an Extended Visual Cryptography Scheme (EVCS) realized by embedding random shares into meaningful covering shares, called the embedded EVCS. Experimental results systematically compared some of the well-known EVCSs proposed in recent years, showing that the embedded EVCS has competitive visual quality compared with many well-known EVCSs in the literature, together with a number of specific advantages over them.

Ran-Zan (2009) presented a novel visual cryptography scheme, called Region Incrementing Visual Cryptography (RIVC), for sharing visual secrets with multiple secrecy levels in a single image. In the proposed n-level RIVC scheme, the content of an image S is designated to multiple regions associated with n secret levels and encoded into n+1 shares with the following features: (a) no single share reveals any of the secrets in S; (b) any t shares can be used to reveal t-1 levels of secrets; (c) the number and locations of the not-yet-revealed secrets are unknown to users; (d) all secrets in S can be disclosed when all n+1 shares are available; and (e) the secrets are recognized by visually inspecting correctly stacked shares, without computation. Constructions of the proposed n-level RIVC for small values n = 2, 3, 4 using basis matrices were introduced, and results from two experiments were presented.

Feng et al. (2010) proposed a step construction that builds OR- and XOR-based visual cryptography schemes (VCS-OR and VCS-XOR) for general access structures by applying a (2,2)-VCS recursively, where a participant may receive multiple share images. The step construction generates VCS-OR and VCS-XOR schemes with optimal pixel expansion and contrast for each qualified set in the general access structure in most cases. The scheme also applies a technique that simplifies the access structure, thereby reducing the Average Pixel Expansion (APE) in most cases; experimental results and comparisons demonstrated the stability of the proposed scheme.

Stinson (1999) presented some background on traditional secret-sharing schemes and then explained visual schemes, describing some of the basic construction techniques used; topics covered included the two-out-of-two scheme, two-out-of-n schemes, and graph access structures. Liu et al. (2011) studied and put forth a new CIVCS that can be based on any VCS, including those with a general access structure, and showed that it avoids the drawbacks of earlier schemes; moreover, their CIVCS does not care whether the underlying operation is OR or XOR.
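The two-out-of-two scheme mentioned by Stinson is the simplest entry point into VC and underlies the tailored scheme proposed later in this thesis. As a hedged illustration, the sketch below implements the classic Naor-Shamir style (2,2) construction with 1×2 pixel expansion; this is the textbook scheme, not the proposed TVCE:

```python
import numpy as np

def vss_2of2(secret, seed=None):
    """Naor-Shamir style (2,2) visual secret sharing, 1x2 pixel expansion.

    secret: 2-D binary array with 1 = black, 0 = white. Each secret pixel
    becomes two subpixels per share. Stacking the transparencies
    (pixelwise OR) renders a black secret pixel fully black and a white
    one half black, so the eye recovers the secret without computation.
    Each share alone is a uniformly random pattern and leaks nothing.
    """
    secret = np.asarray(secret, dtype=np.uint8)
    rng = np.random.default_rng(seed)
    flip = rng.integers(0, 2, size=secret.shape, dtype=np.uint8)
    share1 = np.stack([flip, 1 - flip], axis=-1)     # random [1,0] or [0,1]
    # White pixel: identical patterns; black pixel: complementary patterns.
    share2 = np.where(secret[..., None] == 0, share1, 1 - share1)
    h = secret.shape[0]
    return share1.reshape(h, -1), share2.reshape(h, -1)

# Stacking transparencies corresponds to pixelwise OR:
# s1, s2 = vss_2of2(secret); stacked = s1 | s2
```

The contrast cost is visible in the output: a white region is reconstructed as 50% black, which is exactly the contrast loss the RIVC and step-construction papers above try to optimize.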

2.4 LIMITATIONS BASED ON REVIEW

From the above literature review, it is found that each compression algorithm has its own merits; nevertheless, the following limitations stand in the way of highly secure lossless compression:

i. High execution time.

ii. Low compression ratio.

iii. Large compressed size.

iv. Unsatisfactory image quality.

v. Loss during the reconstruction process.

vi. Error rate not reduced to the minimum.

vii. Low security.

To overcome these limitations, combined image encryption and compression schemes have been proposed for medical applications.

2.5 OVERVIEW OF THE PROPOSED SYSTEM

Taking into account the limitations identified in the literature survey, the proposed system has been framed. The system is divided into three parts, as follows:

1. Secure medical image compression

2. Reclaiming the original medical image

3. Checking process

2.5.1 Secure Medical Image Compression

In this process, the original grayscale medical image is taken as the input image and encrypted by the Tailored Visual Cryptography Encryption (TVCE) scheme, which is the proposed cryptosystem. The output of this encryption is then compressed by the proposed compression algorithms listed below (a generic run-length sketch follows the list):

1. Pixel Block Short Algorithm

2. Modified 4-bit Run Length Encoding

3. Modified 8-bit Run Length Encoding
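The Modified 4-bit and 8-bit schemes above are this thesis's own designs and are detailed in later chapters. For orientation only, the sketch below shows plain byte-oriented run-length encoding, the baseline idea these variants refine; the specific modifications are not reproduced here:

```python
def rle_encode(data):
    """Plain run-length encoding: a list of (count, value) pairs.

    Runs are capped at 255 so each count fits in a single byte.
    This is the generic baseline only, not the Modified 4-bit or
    8-bit RLE proposed in this thesis.
    """
    pairs, i = [], 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        pairs.append((run, data[i]))
        i += run
    return pairs

def rle_decode(pairs):
    """Invert rle_encode exactly (the scheme is lossless)."""
    return bytes(b for run, v in pairs for b in [v] * run)
```

Run-length coding pays off on images with long constant runs, such as the dark backgrounds typical of medical images; on data with few repeats, the (count, value) pairs can exceed the input size.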

The outputs of these compression algorithms are given as the input to the process of reclaiming the original medical image (Figure 2.1). In addition, two standard lossless compression schemes, JPEG 2000 LS and RLE, were used for comparison with the proposed algorithms; these existing algorithms were applied in the same manner as the proposed ones.

2.5.2 Reclaim the Original Medical Image Process

Here, the compressed encrypted medical images received from the various proposed compression algorithms are decompressed by the respective algorithms. The decompressed images are then taken as the input to the decryption process, which is carried out separately. Finally, through the proposed decryption process, five grayscale medical images are reconstructed.

2.5.3 Checking Process (Performance Evaluation)

After the decryption process, every algorithm combination is compared and evaluated on the basis of six parameters (a sketch of the metric computations follows the list):

1. Size

2. Execution Time

3. Peak Signal to Noise Ratio (PSNR)

4. Compression Ratio (CR)

5. Correlation Coefficient (CC)

6. Mean Squared Error (MSE)
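As a hedged sketch of how these parameters can be computed for 8-bit grayscale images (the exact formulas used in later chapters may differ in detail, e.g. in how CR counts header bytes):

```python
import numpy as np

def evaluate(original, reconstructed, compressed_nbytes):
    """Checking-process metrics for 8-bit grayscale images.

    MSE and CC compare the reconstructed image with the original,
    PSNR follows from MSE, and CR is original size over compressed
    size. A truly lossless pipeline gives MSE = 0, CC = 1 and an
    infinite PSNR.
    """
    x = np.asarray(original, dtype=np.float64)
    y = np.asarray(reconstructed, dtype=np.float64)
    mse = float(np.mean((x - y) ** 2))
    psnr = float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
    cc = float(np.corrcoef(x.ravel(), y.ravel())[0, 1])
    cr = x.size / compressed_nbytes          # 1 byte per pixel at 8 bits
    return {"MSE": mse, "PSNR (dB)": psnr, "CC": cc, "CR": cr}
```

Size and execution time, the remaining two parameters, are read directly from the compressed file length and from wall-clock timing of each stage.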

Figure 2.1 Overview of the Proposed System

[Block diagram: the grayscale medical image is encrypted by Tailored Visual Cryptography Encryption; compressed by Pixel Block Short compression, Modified 4-bit Run Length Encoding, or Modified 8-bit Run Length Encoding; decompressed by the corresponding decoder; decrypted by Tailored Visual Cryptography Decryption to yield the reconstructed medical images (RM); and finally evaluated by the CIA checking process.]


To check CIA (Confidentiality, Integrity and Availability) properties,

CC and MSE are measured.

2.6 SUMMARY

The survey on encryption and compression for medical images has been carried out with citations of nearly 90 journal papers in the field, and the limitations of secure compression systems were derived from it. Section 2.5 gave an overview of the proposed system; the forthcoming chapters discuss the proposed system in detail along various dimensions.