IMAGE COMPRESSION

1. INTRODUCTION

1.1 Project Overview

Image compression is the process of minimizing the size, in bytes, of a graphics file without degrading the quality of the image to an unacceptable level. The reduction in file size allows more images to be stored in a given amount of disk or memory space. It also reduces the time required for images to be sent over the Internet or downloaded from Web pages.

There are several different ways in which image files can be compressed. For

Internet use, the two most common compressed graphic image formats are the JPEG

format and the GIF format. The JPEG method is more often used for photographs, while

other techniques for image compression include the use of fractals and wavelets. These

methods have not gained widespread acceptance for use on the Internet as of this writing.

However, both methods offer promise because they offer higher compression ratios than

the JPEG or GIF methods for some types of images. Another new method that may in

time replace the GIF format is the PNG format.

Compression is of two types: lossy compression and lossless compression. When compression produces only an approximation of the image, so that it is not possible to decompress the image and retrieve the original exactly, it is called lossy compression. Lossless and lossy compression are terms that describe whether or not, in the compression of a file, all original data can be recovered when the file is uncompressed.

With lossless compression, every single bit of data that was originally in the file remains after the file is uncompressed. All of the information is completely restored. This is generally the technique of choice for text or spreadsheet files, where losing words or financial data could pose a problem; this method is also commonly used for line art and other images in which the geometric shapes are relatively simple.

Introduction to Image Compression

In this project we use two methods of image compression:

Huffman encoding.

Haar Discrete Wavelet Transform algorithm.

Both methods are lossless compression techniques. In this project, we will use transform coding. Transform coding is an image compression technique that first switches to the frequency domain and then does its compressing. Here we use the Haar Discrete Wavelet Transform. The Haar transform operates as a square matrix of length N, where N is some integral power of 2.

In computer science and information theory, Huffman coding is an entropy encoding algorithm used for lossless data compression. The term refers to the use of a variable-length code table for encoding a source symbol (such as a character in a file) where the variable-length code table has been derived in a particular way based on the estimated probability of occurrence for each possible value of the source symbol.


1.2. Definitions

Image

Images may be two-dimensional, such as a photograph or a screen display, or three-dimensional, such as a statue. They may be captured by optical devices such as cameras, mirrors, lenses, telescopes, microscopes, etc., or by natural objects and phenomena such as the human eye or water surfaces.

Compression

Compression is the technique of reducing the storage required to save an image or the bandwidth required to transmit it.

Lossy Compression

Lossy Compression is a data compression method which discards (loses) some of

the data, in order to achieve its goal, with the result that decompressing the data yields

content that is different from the original, though similar enough to be useful in some

way. Lossy compression is most commonly used to compress multimedia data (audio,

video, still images), especially in applications such as streaming media and internet

telephony.

Lossless Compression

Lossless data compression is a class of data compression algorithms that allows

the exact original data to be reconstructed from the compressed data. The term lossless is

in contrast to lossy data compression, which only allows an approximation of the original

data to be reconstructed in exchange for better compression rates.


Huffman Code

In computer science and information theory, Huffman coding is an entropy

encoding algorithm used for lossless data compression.

Entropy Coding

Entropy coding is a lossless data compression scheme that is independent of the specific characteristics of the medium. One of the main types of entropy coding creates and assigns a unique prefix code to each unique symbol that occurs in the input.

Encoding

Encoding here is the algorithmic step of lossless data compression: the term refers to the use of a variable-length code table for encoding a source symbol (such as a character in a file), where the variable-length code table has been derived in a particular way based on the estimated probability of occurrence for each possible value of the source symbol.

Decoding

Decoding is the process of converting encoded data back into its original form. In this sense, the term can apply to any form of data, including text,

images, audio, video, multimedia, computer programs, or signals in sensors, telemetry,

and control systems.

Wavelet Compression

Wavelet compression is a form of data compression well suited for image

compression. The goal is to store image data in as little space as possible in a file.

Wavelet compression can be either lossless or lossy.


Hamming

Hamming (the name of a class in this project) takes as input the histogram of the bytes in the original file. Using this information, we can build the tree which allows us to calculate the variable-length code of the compressed data. This class is simply an implementation of the Huffman coding algorithm.

Predictive Coding

In predictive coding, information already sent or available is used to predict future

values, and the difference is coded. Since this is done in the image or spatial domain, it is

relatively simple to implement and is readily adapted to local image characteristics.

Transform Coding

Transform coding, on the other hand, first transforms the image from its spatial

domain representation to a different type of representation using some well-known

transform and then codes the transformed values (coefficients). This method provides

greater data compression compared to predictive methods, although at the expense of

greater computation.

Quantization

The Quantization function, quant, is also called in squishier. We used a number

of different bit allocation masks in order to determine which scheme is better.

Mean Square Error

Mean Square Error (MSE) of an estimator is one of many ways to quantify the

difference between an estimator and the true value of the quantity being estimated.


Pixel

In digital imaging, a Pixel (picture element) is a single point in a raster image.

The pixel is the smallest addressable screen element; it is the smallest unit of picture

which can be controlled. Each pixel has its own address.

Joint Photographic Experts Group

Joint Photographic Experts Group (JPEG) is a commonly used method of lossy compression for photographic images. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality.

Peak Signal-to-Noise Ratio

The phrase peak signal-to-noise ratio (PSNR) is most commonly used as a measure of the quality of reconstruction in lossy compression.

Mean Opinion Score

The Mean Opinion Score (MOS) provides a numerical indication of the

perceived quality of received media after compression and/or transmission.

Differential Pulse Code Modulation

Differential Pulse-code modulation (PCM) is a digital representation of an

analog signal where the magnitude of the signal is sampled regularly at uniform intervals,

then quantized to a series of symbols in a numeric (usually binary) code


2. SYSTEM ANALYSIS

System analysis is the process of gathering and interpreting facts, diagnosing

problems and using the information to recommend improvements to the systems.

Analysis specifies what the system should do.

The first stage of software development is the study of the system under consideration and the analysis of its requirements. During these stages, constant interaction with the users of the system is necessary. System analysis is the key phase of the life cycle. To ensure the success of the system, careful and extensive analysis is required. Analysis is a study of the various operations performed by the system; it involves gathering information and using structured tools for analysis.

2.1 Existing System

Several image compression software packages are available. Existing image compression algorithms (e.g. ITU Group 4 and JBIG) offer efficient solutions to the storage problem but do not sufficiently support other objectives such as spatial access and fast decoding.

2.2 Drawbacks of Existing System

Quality loss

Some problems arise while storing and transferring the image

Decompression introduces slight differences from the original image

Low compression ratio


2.3 Proposed System

We propose a novel method based on JBIG, in which the other objectives are also

met. The compression performance of the proposed method is only 10% worse than that

of JBIG, and at the same time, spatial access to a compressed file is achieved. The

method is also 2.5 times faster in decompression than JBIG. This speed up is comparable

to the Group 4 standard, but with better compression performance.

Merits

Simplicity

This compression technique produces smoother, more satisfactory compressed

images.

Wavelet coding schemes at higher compression avoid blocking artifacts.

They are better matched to the HVS (Human Visual System) characteristics.

Compression with wavelets is scalable as the transform process can be applied to

an image as many times as wanted and hence very high compression ratios can be

achieved.

Wavelet-based compression allows parametric gain control for image softening and

sharpening.

Wavelet-based coding is more robust under transmission and decoding errors, and

also facilitates progressive transmission of images.

Wavelet compression is very efficient at low bit rates.

Wavelets provide an efficient decomposition of signals prior to compression.


The first rule guarantees that no more than ceil(log2(alphabet size)) rightmost bits of the code can differ from zero.

The first rule also allows efficient decoding.

Both rules together allow a complete reconstruction of the code knowing only the code lengths for each symbol.

3. SYSTEM DESIGN

The specification of a software product spells out what the product is to do. The aim of the design phase is to determine how to build a product that satisfies all the requirements given in the requirement specification document. During the design phase, the internal structure of the product, including the algorithms, data structures, inputs and outputs, and interaction with the external environment, is finalized. In other words, the entire software architecture of the product is created.

The design has to specify which function each module has to perform and how it is to do it. Major design decisions are made in this phase. These decisions have to be documented in order to keep track of the entire design process. Unlike other engineering design approaches, software design changes continually as new techniques and new approaches evolve. In view of this, software design requires a different treatment compared to conventional engineering design methodologies. Furthermore, software design is an iterative process.

3.1 Input Design

Input design is the most important part of the overall system design and requires very careful attention. Often the collection of input data is the most expensive part of the system, and many errors occur during this phase of the design. Therefore, the inputs given by the user are strictly validated before any manipulation is performed on them. Through validation it is possible to:

Provide an effective method of input

Achieve the highest possible level of accuracy

Ensure that input is acceptable to and understood by the user staff.

Input design mainly concentrates on estimating what the inputs are, how they have to be arranged on the input screen, and how frequently the data are to be collected. The input screens are designed in such a manner that avoids confusion and guides the user in the correct way.

A study has also been made on the type of input and on how the input forms are to be designed. Inputs from the user that may cause severe errors are strictly validated. A very good look and feel is provided through the organized arrangement of controls such as label boxes, text boxes, buttons, link buttons, object lists, user controls, etc. The input screens for the Image Compression system are very simple and user-friendly. Users are allowed to access the software only after the user authentication process; if irrelevant data is entered, message screens are displayed.

a. Login Form

This form is used to give the username and password. After that we enter into the

next level.

b. Method Form


Using this form the user can choose which method is used to compress the image.

c. Huffman Encoding Form

This form is used to load the image and compress the image using Huffman

algorithm and store the compressed image in desired location.

d. Huffman Decoding Form

This form is used to decode the compressed image.

e. Haar Discrete Wavelet Transform Form

This form is used to compress the image using Haar Discrete Wavelet Transform

algorithm.

3.2 Output Design

Output design generally refers to the results generated by the system. For many

end-users, output is the main reason for developing the system and the basis on which

they evaluate the usefulness of the application. The objective of a system finds its shape in terms of the output, and the analysis of the objective of a system leads to the determination of the outputs. The Huffman Encoding form is used to view the original size of the image, the size of the compressed image, the byte difference, etc. The Haar Discrete Wavelet Algorithm form is used to view the size of the original image and of the compressed image.

3.3 Data Flow Diagram

Context Level DFD (Huffman Method): the user logs in with a username and password and then loads either the Huffman method or the Haar Discrete Wavelet method.

DFD for Huffman Decode: the compressed image is given as input and the decoded image is produced.

DFD for Encode: the input file is passed through the Hamming/encode stage to produce the encoded (compressed) image.

DFD for Method 2 (Haar Discrete Wavelet): the input file is passed through the Haar Discrete Wavelet compression stage to produce the compressed image.

3.4 System Chart

(Diagram: input file → login → Huffman encode/decode or Haar Discrete Wavelet compression → encoded/compressed image.)


4. PROJECT DESCRIPTION

4.1 Detail Description

Process

Image compression is minimizing the size in bytes of a graphics file without

degrading the quality of the image to an unacceptable level. The reduction in file size

allows more images to be stored in a given amount of disk or memory space. It also

reduces the time required for images to be sent over the Internet or downloaded from

Web pages.

In this project we use two methods of image compression:

Huffman encoding.

Haar Discrete Wavelet Transform algorithm.

Both methods are lossless compression techniques.

4.2. Huffman Encoding

In computer science and information theory, Huffman coding is an entropy

encoding algorithm used for lossless data compression. The term refers to the use of a

variable-length code table for encoding a source symbol (such as a character in a file)


where the variable-length code table has been derived in a particular way based on the

estimated probability of occurrence for each possible value of the source symbol. It was

developed by David A. Huffman while he was a Ph.D. student at MIT, and published in

the 1952 paper "A Method for the Construction of Minimum-Redundancy Codes".

Huffman coding uses a specific method for choosing the representation for each

symbol, resulting in a prefix code (sometimes called "prefix-free codes") (that is, the bit

string representing some particular symbol is never a prefix of the bit string representing

any other symbol) that expresses the most common characters using shorter strings of bits

than are used for less common source symbols.

Huffman was able to design the most efficient compression method of this type:

no other mapping of individual source symbols to unique strings of bits will produce a

smaller average output size when the actual symbol frequencies agree with those used to

create the code. A method was later found to do this in linear time if the input probabilities are already sorted.

Lossless compression refers to data compression techniques in which no data is lost. The PKZIP compression

technology is an example of lossless compression. For most types of data, lossless

compression techniques can reduce the space needed by only about 50%. For greater

compression, one must use a lossy compression technique. Note, however, that only

certain types of data (graphics, audio, and video) can tolerate lossy compression. You must

use a lossless compression technique when compressing data and programs.

Unlike ASCII code, which is a fixed-length code using seven bits per character,

Huffman compression is a variable-length coding system that assigns smaller codes for


more frequently used characters and larger codes for less frequently used characters in

order to reduce the size of files being compressed and transferred.

For example, in a file with the following data:

XXXXXXYYYYZZ

The frequency of "X" is 6, the frequency of "Y" is 4, and the frequency of "Z" is 2

shown in table 4.1.1. If each character is represented using a fixed-length code of two

bits, then the number of bits required to store this file would be 24, i.e., (2 x 6) + (2x 4) +

(2x 2) = 24. If the above data were compressed using Huffman compression, the more

frequently occurring numbers would be represented by smaller bits, such as:

Table 4.1.1.Character representation using fixed length

X by the code 0 (1 bit)

Y by the code 10 (2 bits)

Z by the code 11 (2 bits)

Therefore the size of the file becomes 18 bits, i.e., (1 x 6) + (2 x 4) + (2 x 2) = 18.

In the above example, more frequently occurring characters are assigned smaller

codes, resulting in a smaller number of bits in the final compressed file.
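As a quick check of this arithmetic, the following small C sketch (illustrative only; it is not part of the project's code and simply hard-codes the code lengths from Table 4.1.1) counts the character frequencies in the example string and compares the fixed-length and Huffman-coded sizes:

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *data = "XXXXXXYYYYZZ";
    int freq[256] = {0};

    /* count how often each character occurs */
    for (size_t i = 0; i < strlen(data); i++)
        freq[(unsigned char)data[i]]++;

    /* fixed-length code: 2 bits for every character */
    int fixed_bits = 2 * (freq['X'] + freq['Y'] + freq['Z']);

    /* Huffman code lengths from Table 4.1.1: X -> 1 bit, Y -> 2 bits, Z -> 2 bits */
    int huffman_bits = 1 * freq['X'] + 2 * freq['Y'] + 2 * freq['Z'];

    /* prints: fixed-length: 24 bits, Huffman: 18 bits */
    printf("fixed-length: %d bits, Huffman: %d bits\n", fixed_bits, huffman_bits);
    return 0;
}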

The algorithm


Huffman coding is a form of entropy coding which implements the coding idea discussed above. If a random variable has N possible outcomes (symbols), these outcomes can obviously be coded by log2(N) bits each. For example, as the pixels in a digital image can take 256 possible gray values, we need 8 bits to represent each pixel, and 8MN bits to represent an image of M x N pixels.

By Huffman coding, however, it is possible to use on average fewer than log2(N) bits to represent each pixel. In general, Huffman coding encodes a set of symbols with binary codes of variable length, following this procedure:

Estimate the probability p(i) for each of the symbols;

Sort these probabilities in descending order (top down);


Forward pass (left to right): combine the two smallest probabilities at the bottom and re-sort their sum with all the other probabilities; repeat this step until there are only two probabilities left.

Backward pass (right to left): add a bit (0 or 1) to the binary codes of the two probabilities newly emerging at each step; repeat this step until all the initial symbols are encoded. As the result of Huffman coding, all symbols are encoded optimally in the

sense that more probable symbols are encoded by shorter binary codes, so that the

average length (number of bits) of the codes for these symbols is minimized.
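The procedure above can be sketched in code. The following C fragment is a hypothetical illustration, not the project's implementation; the example probabilities are arbitrary. It computes Huffman code lengths by repeatedly merging the two least probable groups, which is one simple way of expressing the forward and backward passes:

#include <stdio.h>

#define NSYM 4

/* Compute Huffman code lengths by repeatedly merging the two least
   probable groups; each merge adds one bit to every symbol in both groups. */
void huffman_lengths(const double *p, int *len, int n)
{
    double gp[NSYM];      /* probability of each remaining group          */
    int group[NSYM];      /* group index each symbol currently belongs to */
    int alive[NSYM];      /* 1 if the group has not yet been merged away  */

    for (int i = 0; i < n; i++) {
        gp[i] = p[i];
        group[i] = i;
        alive[i] = 1;
        len[i] = 0;
    }

    for (int step = 0; step < n - 1; step++) {
        int a = -1, b = -1;
        /* find the two alive groups with the smallest probabilities */
        for (int i = 0; i < n; i++) {
            if (!alive[i]) continue;
            if (a < 0 || gp[i] < gp[a]) { b = a; a = i; }
            else if (b < 0 || gp[i] < gp[b]) { b = i; }
        }
        /* every symbol in group a or b gets one more bit */
        for (int i = 0; i < n; i++) {
            if (group[i] == a || group[i] == b) {
                len[i]++;
                group[i] = a;   /* the merged group keeps index a */
            }
        }
        gp[a] += gp[b];
        alive[b] = 0;
    }
}

int main(void)
{
    double p[NSYM] = {0.5, 0.25, 0.125, 0.125};   /* illustrative probabilities */
    int len[NSYM];

    huffman_lengths(p, len, NSYM);
    /* prints code lengths 1, 2, 3, 3: the most probable symbol gets the shortest code */
    for (int i = 0; i < NSYM; i++)
        printf("symbol %d: p = %.3f, code length = %d bits\n", i, p[i], len[i]);
    return 0;
}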

To illustrate how much compression the Huffman coding can achieve and how the

compression rate is related to the content of the image, consider the following examples

of compressing an image of N gray levels, each with probability p(i) for a pixel to be at the i-th gray level (the histogram of the image).

Example 4.1.1:

All pixels take the same gray value, i.e. one probability is 1 and all the others are 0. The image contains 0 bits of uncertainty (no surprise) or information and requires 0 bits to transmit.

Example 4.1.2:

p(i)   Code
0.5    0
0.5    1
0.0    -
0.0    -

The uncertainty is H = -(0.5 log2 0.5 + 0.5 log2 0.5) = 1 bit, which requires 1 bit to encode the information.

Although the uncertainty is less than half of that in the previous case, it still

requires one bit to encode the information.

An image whose gray levels are all equally likely has the maximum information (uncertainty) and therefore requires log2(N) bits to encode each of its pixels. From these examples it can be seen that with variable-length coding the average number of bits needed to encode a pixel may be reduced below log2(N). The amount of reduction, or the compression rate, depends on the amount of uncertainty or information contained in the image.


If the image has only a single gray level, it contains 0 bits of information and requires 0 bits to encode; but if the image has N equally likely gray levels, it contains the maximum amount of information, requiring log2(N) bits to encode each pixel.
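The amount of information can be computed directly from the histogram. The following C sketch (illustrative only; the gray-level counts and probabilities are placeholders) evaluates the entropy H = -Σ p(i) log2 p(i), which gives the minimum average number of bits per pixel for the situations discussed above:

#include <stdio.h>
#include <math.h>

/* entropy in bits/pixel of a histogram of n gray levels,
   where p[i] is the probability of the i-th gray level */
double entropy(const double *p, int n)
{
    double h = 0.0;
    for (int i = 0; i < n; i++)
        if (p[i] > 0.0)
            h -= p[i] * log2(p[i]);
    return h;
}

int main(void)
{
    double uniform[4] = {0.25, 0.25, 0.25, 0.25};   /* 4 equally likely levels */
    double skewed[4]  = {0.5, 0.5, 0.0, 0.0};       /* Example 4.1.2           */
    double single[4]  = {1.0, 0.0, 0.0, 0.0};       /* Example 4.1.1           */

    printf("uniform: %.2f bits\n", entropy(uniform, 4));  /* 2.00 */
    printf("skewed : %.2f bits\n", entropy(skewed, 4));   /* 1.00 */
    printf("single : %.2f bits\n", entropy(single, 4));   /* 0.00 */
    return 0;
}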

Process

Decoding: the input is an array of bytes of the original file. The operations are:

Calculating the entropy

Calculating the entropy given another array of bytes (called the twin; it should be a file from the same family)

Decoding the data using the algorithm

Encoding: the input is Decoded Info (the data after compression). The operations are:

Encoding the data by reconstructing the original file

Saving the result

Loading the compressed file

Decoded Info: - This is the structure of the compressed file. It contains the following

fields:

decodedData: array of bytes containing the result of compression

codeMap: the variable-length code map, calculated using the Huffman tree

totalBits: the size of the result

twinFileName: the name of the file which is used to calculate the XOR result (the file that gives the lowest entropy)

originalFileName: the name of the original file; it is used when saving the result
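The project itself is implemented on the .NET platform (see Section 5), but purely as an illustration the compressed-file structure described by these fields could be sketched as a C struct; the types and sizes below are assumptions, and only the field names come from the list above:

#include <stddef.h>

/* Illustrative sketch of the compressed-file structure described above;
   types and buffer sizes are assumptions, not the project's definitions. */
typedef struct {
    unsigned char *decodedData;          /* result of compression                */
    size_t         totalBits;            /* size of the result, in bits          */
    char          *codeMap[256];         /* variable-length code for each byte
                                            value, calculated from the tree      */
    char           twinFileName[260];    /* file used for the XOR result
                                            (the one giving the lowest entropy)  */
    char           originalFileName[260];/* name of the original file, used
                                            when saving the result               */
} DecodedInfo;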

Hamming: the input is a histogram of the bytes in the original file. Using this information, we can build the tree which allows us to calculate the variable-length code of the compressed data. This class is simply an implementation of the Huffman algorithm described above.

4.3. Haar Discrete Wavelet Transform algorithms

Here we use the Haar Discrete Wavelet Transform. The Haar transform operates

as a square matrix of length N = some integral power of 2.

In order to implement the image compression algorithm we chose, we divided the

process into various steps:

calculate the sums and differences of every row of the image

calculate the sums and differences of every column of the resulting matrix

repeat this process until we get down to squares of 16x16

quantize the final matrix using different bit allocation schemes

write the quantized matrix out to a binary file

The next step is quantization. This is performed during the writing to a binary file (see the preceding discussion of writing to a file); however, we wrote a distinct quantization function to analyze this step statistically (MSE and PSNR -- see the Results section), as well as for our own educational benefit. The quantization function, quant, is also called in squishier. We used a number of different bit allocation masks in order to determine which scheme is better.
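The quant function itself is not reproduced in this report. As a rough, hypothetical sketch of what a uniform quantizer driven by a bit-allocation value might look like (the name, range convention, and mid-level reconstruction are assumptions, not taken from the project):

#include <math.h>

/* Uniformly quantize one wavelet coefficient x, assumed to lie in
   [-range, range], to 'bits' bits; returns the reconstructed value.
   With bits == 0 the coefficient is simply discarded (set to zero). */
double quantize(double x, double range, int bits)
{
    if (bits <= 0)
        return 0.0;

    int    levels = 1 << bits;              /* number of quantization levels */
    double step   = 2.0 * range / levels;   /* width of one level            */

    /* index of the level x falls into, clamped to the valid range */
    int q = (int)floor((x + range) / step);
    if (q < 0)          q = 0;
    if (q > levels - 1) q = levels - 1;

    /* reconstruct at the middle of the level */
    return -range + (q + 0.5) * step;
}

Different bit allocation masks simply assign a different 'bits' value to each subband or region of the transformed matrix before calling such a function.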

Wavelet-based compression

Wavelet coding schemes at higher compression avoid blocking artifacts.

They are better matched to the HVS (Human Visual System) characteristics.

Compression with wavelets is scalable as the transform process can be applied to

an image as many times as wanted and hence very high compression ratios can be

achieved.

Wavelet-based compression allows parametric gain control for image softening and

sharpening.

Wavelet-based coding is more robust under transmission and decoding errors, and

also facilitates progressive transmission of images.

Wavelet compression is very efficient at low bit rates.

Wavelets provide an efficient decomposition of signals prior to compression.

Background

Before we go into details of the method, we present some background topics of

image compression which include the principles of image compression, the classification

of compression methods and the framework of a general image coder and wavelets for

image compression.

Principles of Image Compression

An ordinary characteristic of most images is that the neighboring pixels are

correlated and therefore hold redundant information. The foremost task then is to find a less correlated representation of the image. Two elementary components of compression

are redundancy and irrelevancy reduction.

Irrelevancy reduction omits parts of the signal that will not be noticed by the signal receiver, namely the Human Visual System (HVS). Three types of redundancy can be identified:

Spatial Redundancy or correlation between neighboring pixel values.

Spectral Redundancy or correlation between different color planes or spectral

bands.

Temporal Redundancy or correlation between adjacent frames in a sequence of

images especially in video applications.

Image compression research aims at reducing the number of bits needed to

represent an image by removing the spatial and spectral redundancies as much as

possible.

Classification of Compression Technique

There are two ways in which we can classify compression techniques:

lossless vs. lossy compression

Predictive vs. transform coding.

Lossless vs. Lossy compression

In lossless compression schemes, the reconstructed image, after compression, is

numerically identical to the original image. However lossless compression can only

achieve a modest amount of compression. An image reconstructed following lossy


compression contains degradation relative to the original. Often this is because the

compression scheme completely discards redundant information.

However, lossy schemes are capable of achieving much higher compression.

Under normal viewing conditions, no visible loss is perceived (visually lossless).

Predictive vs. Transform coding

In predictive coding, information already sent or available is used to predict future

values, and the difference is coded. Since this is done in the image or spatial domain, it is

relatively simple to implement and is readily adapted to local image characteristics.

Differential Pulse Code Modulation (DPCM) is one particular example of predictive

coding. Transform coding, on the other hand, first transforms the image from its spatial

domain representation to a different type of representation using some well-known

transform and then codes the transformed values (coefficients). This method provides

greater data compression compared to predictive methods, although at the expense of

greater computation.

Framework of General Image Compression Method

A typical lossy image compression system consists of three closely connected components, as shown in Figure 4.3.1:

Source Encoder

Quantizer

Entropy Encoder

Figure 4.3.1. Framework of General Image Compression Method (input image → source encoder → quantizer → entropy encoder → compressed image)

Quantization can also be applied on a group of coefficients together known as

Vector Quantization (VQ). Both uniform and non-uniform quantizers can be used depending on the problem.

Wavelets for image compression

Wavelet transform exploits both the spatial and frequency correlation of data by

dilations (or contractions) and translations of the mother wavelet over the input data. It

supports the multiresolution analysis of data i.e. it can be applied to different scales

according to the details required, which allows progressive transmission and zooming of

the image without the need of extra storage.

Another encouraging feature of the wavelet transform is its symmetric nature: both the forward and the inverse transforms have the same complexity, allowing fast compression and decompression routines. Its characteristics well suited for image compression include the ability to take into account the Human Visual System's (HVS)

characteristics, very good energy compaction capabilities, robustness under transmission,

high compression ratio etc. The implementation of wavelet compression scheme is very

similar to that of sub band coding scheme: the signal is decomposed using filter banks.

The output of the filter banks is down-sampled, quantized, and encoded. The decoder

decodes the coded representation, up-samples, and recomposes the signal.

Wavelet transform divides the information of an image into approximation and

detail sub signals. The approximation sub signal shows the general trend of pixel values

and other three detail sub signals show the vertical, horizontal and diagonal details or


changes in the images. If these details are very small (threshold) then they can be set to

zero without significantly changing the image.

The greater the number of zeros the greater the compression ratio. If the energy

retained (amount of information retained by an image after compression and

decompression) is 100% then the compression is lossless as the image can be

reconstructed exactly.

This occurs when the threshold value is set to zero, meaning that the details have

not been changed. If any value is changed then energy will be lost and thus lossy

compression occurs. As more zeros are obtained, more energy is lost. Therefore, a

balance between the two needs to be found.

Haar Wavelet Transform

To understand how wavelets work, let us start with a simple example. Assume we

have a 1D image with a resolution of four pixels, having values [9 7 3 5]. Haar wavelet

basis can be used to represent this image by computing a wavelet transform. To do this,

first the average of the pixels, taken pairwise, is calculated to get the new lower

resolution image with pixel values [8 4]. Clearly, some information is lost in this

averaging process. We need to store some detail coefficients to recover the original four

pixel values from the two averaged values. In our example, 1 is chosen for the first detail

coefficient, since the average computed is 1 less than 9 and 1 more than 7. This single

number is used to recover the first two pixels of our original four-pixel image. Similarly,

the second detail coefficient is -1, since 4 + (-1) = 3 and 4 - (-1) =5. Thus, the original

image is decomposed into a lower-resolution (two-pixel) version and a pair of detail coefficients. Repeating this process recursively on the averages gives the full

decomposition shown in Table 4.3.1.

Table 4.3.1.Averages and detail coefficients

Resolution    Averages       Detail Coefficients
4             [9 7 3 5]      -
2             [8 4]          [1 -1]
1             [6]            [2]

Thus, for the one-dimensional Haar basis, the wavelet transform of the original four-pixel image is [6 2 1 -1]: the overall average followed by the detail coefficients. The procedure used to compute the wavelet transform, by recursively averaging and differencing coefficients, is called a filter bank.
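A minimal C sketch of this recursive averaging and differencing on the four-pixel example is given below (illustrative only; the array length is assumed to be a power of 2 and at most 8 here):

#include <stdio.h>

/* One-dimensional Haar decomposition by repeated averaging and differencing.
   After the call, a[0] is the overall average and the rest are detail
   coefficients: [9 7 3 5] becomes [6 2 1 -1]. */
void haar1d(double *a, int n)
{
    double tmp[8];   /* scratch buffer; n is assumed to be <= 8 here */

    for (int w = n; w > 1; w /= 2) {
        for (int i = 0; i < w / 2; i++) {
            tmp[i]       = (a[2*i] + a[2*i + 1]) / 2.0;  /* average */
            tmp[w/2 + i] = a[2*i] - tmp[i];              /* detail  */
        }
        for (int i = 0; i < w; i++)
            a[i] = tmp[i];
    }
}

int main(void)
{
    double img[4] = {9, 7, 3, 5};
    haar1d(img, 4);
    printf("%g %g %g %g\n", img[0], img[1], img[2], img[3]);  /* 6 2 1 -1 */
    return 0;
}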

Compression of 2D image with Haar Wavelet Technique

It has been shown in previous section how one dimensional image can be treated

as sequences of coefficients. Alternatively, we can think of images as piecewise constant

functions on the half-open interval [0, 1). Let V0 be the vector space of all these functions. A

two pixel image has two constant pieces over the interval. We call the space containing

all these functions V1.

If we continue in this manner, the space Vj will include all piecewise-constant

functions defined on the interval with constant pieces over each of 2^j equal subintervals. We can now think of every one-dimensional image with 2^j pixels as an element, or

vector, in Vj. Note that because these vectors are all functions defined on the unit

interval, every vector in Vj is also contained in Vj+1. For example, we can always

describe a piecewise constant function with two intervals as a piecewise-constant


function with four intervals, with each interval in the first function corresponding to a

pair of intervals in the second.

It guarantees that every member of V0 can be represented exactly as a member of

higher resolution space V1. The converse, however, is not true: not every function G(x)

in V1 can be represented exactly in lower resolution space V0; in general there is some

lost detail. Now we define a basis for each vector space Vj. The basis functions for the spaces Vj are called scaling functions, and are usually denoted by the symbol φ. A simple

basis for Vj is given by the set of scaled and translated box functions.

φ_i^j(x) := φ(2^j x − i),   i = 0, 1, 2, …, 2^j − 1,   where

φ(x) := 1 for 0 ≤ x < 1, and 0 otherwise.

The wavelets corresponding to the box basis are known as the Haar wavelets, given by

ψ_i^j(x) := ψ(2^j x − i),   i = 0, 1, 2, …, 2^j − 1,   where

ψ(x) := 1 for 0 ≤ x < 1/2, −1 for 1/2 ≤ x < 1, and 0 otherwise.

Thus, the DWT for an image as a 2D signal will be obtained from 1D DWT.

We get the scaling function and wavelet function for 2D by multiplying two 1D

functions. The scaling function is obtained by multiplying two 1D scaling functions:


φ(x,y)=φ(x)φ(y). The wavelet functions are obtained by multiplying two wavelet

functions or wavelet and scaling function for 1D.

Table 4.3.2. Structure of a three-level wavelet decomposition: the LL (approximation) subband occupies the top-left corner; HL3, LH3 and HH3 are the detail subbands at the coarsest level, HL2, LH2 and HH2 at the next level, and HL1, LH1 and HH1 at the finest level.

We first apply the 1D wavelet transform to each row of pixel values, as shown in Figure 4.3.3. This operation provides us with an average value along with detail coefficients for each row. Next, these transformed rows are treated as if they were themselves an image, and the 1D transform is applied to each column. Figure 4.3.4 shows how the 2D Haar wavelet transformation is performed. The image is comprised of pixels

represented by numbers. Consider the 8×8 image taken from a specific portion of a

typical image as shown in Figure 4.3.5. Let us look how the operation is done.


Figure 4.3.3 1-Level Of Decomposition

Figure 4.3.4 2-Level Of Decomposition

Averaging: (64+2)/2 = 33, (3+61)/2 = 32, (60+6)/2 = 33, (7+57)/2 = 32

Differencing: 64 − 33 = 31, 3 − 32 = −29, 60 − 33 = 27, and 7 − 32 = −25

Figure 4.3.5. An 8×8 image

2D Array Representation

64 2 3 61 60 6 7 57

9 55 54 12 13 51 50 16

17 47 46 20 21 43 42 24

40 26 27 37 36 30 31 33

32 34 35 29 28 38 39 25


41 23 22 44 45 19 18 48

49 15 14 52 53 11 10 56

8 58 59 5 4 62 63 1

So, the transformed row becomes (33 32 33 32 31 −29 27 −25). Now the same operation is performed on the average values, i.e. (33 32 33 32), and then again on the first two elements of the resulting averages, while the detail coefficients (31 −29 27 −25) are carried along unchanged. After applying this operation to each row of the entire matrix, and then performing the same operation on each column of the resulting matrix, we get the final transformed matrix. This operation on rows followed by columns is performed recursively depending on the level of transformation; more iterations provide more levels of transformation. Note that the top-left element of the final transformed matrix, i.e. 32.5, is the only averaging element: it is the overall average of all elements of the original matrix, and all the other elements are detail coefficients.

The main part of the C program used to transform the matrix is shown below under Code for Transformation. The 2D

array mat holds the values which represent the image. The point of the wavelet transform

is that regions of little variation in the original image manifest themselves as small or

zero elements in the wavelet transformed version.

A matrix with a high proportion of zero entries is said to be sparse. For most of

the image matrices, their corresponding wavelet transformed versions are much sparser

than the originals. Very sparse matrices are easier to store and transmit than ordinary

matrices of the same size. This is because the sparse matrices can be specified in the data

file solely in terms of locations and values of their non-zero entries. It can be seen that in

the final transformed matrix, we find a lot of entries zero. From this transformed matrix,

Page 32: IMAGE-DOC

IMAGE COMPRESSION 32

the original matrix can be easily calculated just by the reverse operation of averaging and

differencing i.e. the original image can be reconstructed from the transformed image

without the loss of information.

Thus, it yields a lossless compression of the image. However, to achieve a higher degree of compression, we have to turn to lossy compression. In this case, a nonnegative threshold value, say e, is set.

The thresholding methods are defined as follows:

Hard thresholding:      T(e, x) = 0 if |x| < e;  T(e, x) = x otherwise.

Soft thresholding:      T(e, x) = 0 if |x| < e;  T(e, x) = sign(x)(|x| − e) otherwise.

Universal thresholding: T(x) = 0 if |x| < σ(2 log2 N)^(1/2);  T(x) = x otherwise,

where σ is the standard deviation of the wavelet coefficients and N is the number of wavelet coefficients.
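These rules translate directly into code. A small illustrative C sketch of the hard and soft thresholding functions follows (the universal rule is simply hard thresholding with e = σ(2 log2 N)^(1/2)):

#include <math.h>

/* hard thresholding: coefficients smaller in magnitude than e are zeroed */
double hard_threshold(double e, double x)
{
    return (fabs(x) < e) ? 0.0 : x;
}

/* soft thresholding: small coefficients are zeroed, the rest shrink toward 0 */
double soft_threshold(double e, double x)
{
    if (fabs(x) < e)
        return 0.0;
    return (x > 0 ? 1.0 : -1.0) * (fabs(x) - e);
}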


In summary, the main steps of the 2D image compression using Haar Wavelet as

the basis functions are

Start with the matrix P representing the original image,

Compute the transformed matrix T by the operation averaging and differencing

(First for each row, then for each column)

Choose a threshold method and apply that to find the new matrix say D

Use D to compute the compression ratio and others values and to reconstruct the

original image as well.

A database of twenty gray scale images each with size 256×256 is used in the

experiment. We define the compression ratio (CR) as the ratio of the number of nonzero

elements in original matrix to the number of nonzero elements in updated transformed

matrix. The resulting CR values for different thresholding methods and different values of e are shown in Table 4.3.3.

Table 4.3.3 Thresholding methods

e     CR (Hard Threshold)    CR (Soft Threshold)
15    15.74                  14.1
20    17.11                  15.87
25    18.47                  16.95

The universal thresholding method generates the CR as 14.375. It is noted

here that the hard thresholding provides the best CR. The soft thresholding gives better

CR in comparison to universal thresholding method but it depends on choosing the value

of ε.


Code For Transformation

/* row transformation
   (assumes declarations such as: int i, j, k, w, row, col;
    double mat[N][N], a[N]; row and col are integral powers of 2) */
for (i = 0; i < row; i++) {
    w = col;
    do {
        k = 0;
        /* averaging: pairwise means of the first w values of the row */
        for (j = 0; j < w/2; j++)
            a[j] = (mat[i][j+j] + mat[i][j+j+1]) / 2;
        /* differencing: detail = first element of each pair minus its average */
        for (j = w/2; j < w; j++, k++)
            a[j] = mat[i][j - w/2 + k] - a[k];
        /* write the w transformed values back into the row */
        for (j = 0; j < w; j++)
            mat[i][j] = a[j];
        w = w/2;
    } while (w != 1);
}

/* column transformation */
for (i = 0; i < col; i++) {
    w = row;
    do {
        k = 0;
        /* averaging */
        for (j = 0; j < w/2; j++)
            a[j] = (mat[j+j][i] + mat[j+j+1][i]) / 2;
        /* differencing */
        for (j = w/2; j < w; j++, k++)
            a[j] = mat[j - w/2 + k][i] - a[k];
        /* write the w transformed values back into the column */
        for (j = 0; j < w; j++)
            mat[j][i] = a[j];
        w = w/2;
    } while (w != 1);
}


The PSNR for a gray-scale image (8 bits/pixel) is defined by

PSNR = 10 log10( 255^2 / MSE ),

where MSE is the Mean Squared Error, defined by

MSE = (1 / (m n)) Σ Σ ( I(i, j) − I1(i, j) )^2,   the sums taken over i = 1..m and j = 1..n,

where I is the original image, I1 is the approximation of the decompressed image, and m, n are the dimensions of the image.
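A minimal C sketch of these two measures for 8-bit gray-scale images is given below (illustrative only; the row-by-row storage layout is an assumption):

#include <math.h>

/* Mean squared error between an original image I and its decompressed
   approximation I1, both of size m x n and stored row by row. */
double mse(const unsigned char *I, const unsigned char *I1, int m, int n)
{
    double sum = 0.0;
    for (int k = 0; k < m * n; k++) {
        double d = (double)I[k] - (double)I1[k];
        sum += d * d;
    }
    return sum / (m * n);
}

/* PSNR in dB for 8-bit images (peak value 255) */
double psnr(const unsigned char *I, const unsigned char *I1, int m, int n)
{
    double e = mse(I, I1, m, n);
    if (e == 0.0)
        return INFINITY;   /* identical images: PSNR is unbounded */
    return 10.0 * log10((255.0 * 255.0) / e);
}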

The PSNR values for different threshold values and techniques were computed. The soft thresholding method performs better than hard thresholding, as shown in Figure 4.3.6. The universal method reports a PSNR of 24.875. These results are very acceptable in most cases, except in medical applications where no loss of information is to be

guaranteed.

However, the PSNR is not adequate as a perceptually meaningful measure of

picture quality, because the reconstruction errors generally do not have the characteristic

of signal independent additive noise and the seriousness of the impairments cannot be

measured by a simple power measurement. A small impairment of an image can lead to a very small PSNR in spite of the fact that the perceived image quality can be acceptable.

So, the perceptual quality measurement method quantified by MOS and PQS has been


applied. The reference and test conditions are arranged in pairs such that the first is the

unimpaired reference and the second is the same sequence impaired.

Figure 4.3.6. Soft and Hard Thresholding

The original image without compression was used as the reference condition. The viewers are asked to vote on the second, keeping in mind the

first. The method uses the five grade impairment scale: 5 (Excellent), 4 (Good), 3

(Slightly annoying), 2 (Annoying) and 1 (Very annoying).

At the end, the MOS is calculated as

MOS = Σ i · p(i),

where i is the grade and p(i) is the grade probability. The PQS, defined by

PQS = b0 + Σ b_i · Z_i,

uses some properties of the HVS relevant to global image impairments such as

random errors and emphasizes the perceptual importance of structured and localized

errors. Here, a linear combination of uncorrelated principal distortion measures Zi, combined by partial regression coefficients bi, is used. PQS is constructed by regression against MOS. The MOS and PQS values obtained are tabulated below and are very encouraging.

The number of decompositions determines the quality of compressed image. The

number of decompositions also determines the resolution of the lowest level in wavelet

domain. If a larger number of decompositions are used, it will provide more success in

resolving important DWT coefficients from less important coefficients. The HVS is less

sensitive to removal of smaller details.

Table 4.3.4. Quality measurement methods (e = 15)

Method                    MOS      PQS
Hard Thresholding         4.675    4.765
Soft Thresholding         4.80     4.875
Universal Thresholding    4.865    4.957


After decomposing the image and representing it with wavelet coefficients,

compression can be performed by ignoring all coefficients below some threshold.

The compression algorithm provides two modes of operation:

The compression ratio is fixed to the required level and the threshold value is changed to achieve the required compression ratio; after that, the PSNR is computed.

The PSNR is fixed to the required level and the threshold values are changed to achieve the required PSNR; after that, the CR is computed.

It is noted that image quality is better for a larger number of decompositions. On

the contrary, a larger number of decompositions causes the loss of the coding algorithm

efficiency. Therefore, adaptive decomposition is required to achieve balance between

image quality and computational complexity. PSNR tends to saturate for a larger number

of decompositions. For each compression ratio, the PSNR characteristic has a “threshold” which represents the optimal number of decompositions.

At present, the most widely used objective distortion measures are the MSE and

the related PSNR. They can easily be computed to represent the deviation of the distorted

image from the original image in the pixel wise sense. However, in practical viewing

situations, human beings are usually not concentrated on pixel differences alone, except

for particular applications such as medical imaging, where pixel wise precision can be

very important. The subjective perceptual quality includes surface smoothness, edge

sharpness and continuity, proper background noise level, and so on. Image compression

techniques induce various types of visual artifacts that affect the human viewing


experience in many distinct ways, even if the MSE or PSNR level is adjusted to be about

equal.

It is generally agreed that MSE or PSNR does not correlate well with the visual

quality perceived by human beings, since MSE is computed by adding the squared

differences of individual pixels without considering the spatial interaction among

adjacent pixels. Some work tries to modify existing quantitative measures to

accommodate the factor of human visual perception.

One approach is to improve MSE by putting different weights to neighboring

regions with different distances to the focal pixel. Most approaches can be viewed as

curve-fitting methods to comply with the rating scale method. In order to obtain an

objective measure for perceived image fidelity, models of the human visual system

(HVS) should be taken into account.

It is well known that the HVS has different sensitivities to signals of different

frequencies. Since the detection mechanisms of the HVS have localized responses in both the space and frequency domains, neither the space-based MSE nor the global Fourier analysis provides a good tool for the modeling. So, here the perceptual quality

measurement method quantified by MOS and PQS has been applied and the results are

encouraging, as shown in Table 4.3.4. The fundamental difficulty in testing an image

compression system is how to decide which test images to use for evaluation. The image

content being viewed influences the perception of quality irrespective of technical

parameters of the compression system.


A series of pictures which are average in terms of how difficult they are for the system being evaluated has been selected. In this project, only gray-scale images are

considered. However, wavelet transforms and compression techniques are equally

applicable to color images with three color components.

We have to perform the wavelet transform independently on each of the three color components of the image and treat the results as an array of vector-valued wavelet coefficients. In this case, instead of using the absolute value of a scalar coefficient, a vector-valued coefficient is used. Furthermore, there are a number of ways in which the color information can be used to obtain a wavelet transform that is even sparser.

For example, by first converting the pixel values in an image from RGB colors to

YIQ colors, we can separate the luminance information (Y) from chromatic information

(I and Q). Once the wavelet transform is computed, the compression method can be

applied to each of the components of the image separately. Since the human perception is

most sensitive to variation in Y and least sensitive in Q, the compression scheme may be

permitted to tolerate a larger error in the Q component of the compressed image, thereby

increasing the scale of compression.
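For reference, a hedged C sketch of the RGB-to-YIQ conversion mentioned above is given below, using the approximate standard NTSC coefficients (the exact constants and the pixel value range are implementation choices, not taken from this project):

/* Convert one RGB pixel (components in [0,1]) to YIQ so that the luminance
   channel Y can be compressed more carefully than the chromatic I and Q.
   Coefficients are the approximate standard NTSC values. */
void rgb_to_yiq(double r, double g, double b, double *y, double *i, double *q)
{
    *y = 0.299 * r + 0.587 * g + 0.114 * b;
    *i = 0.596 * r - 0.274 * g - 0.322 * b;
    *q = 0.211 * r - 0.523 * g + 0.312 * b;
}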


5. IMPLEMENTATION

5.1 About the Software

The .NET Framework is a new computing platform that simplifies application

development in the highly distributed environment of the internet. The .NET Framework

is designed to fulfill the following objectives:

To provide a consistent object-oriented programming environment whether the

object code is stored and executed locally, executed locally but Internet-

distributed, or executed remotely.

To provide a code-execution environment that minimizes the software

development and versioning conflicts.

To make the developer experience consistent across widely varying types of

applications, such as Windows-based applications and Web-based applications.

The .NET Framework has two main components:

The common language runtime

The .NET Framework class library

.NET Framework is a component of the Microsoft Windows Operating System

used to build and run windows based applications. One feature of the .NET Framework

that saves us from a huge amount of tedious coding is the base class library. The base framework classes cover a multitude of different functions. These functions are all

provided in a number of base classes that are grouped together under a namespace.

The .NET Framework is the infrastructure for the Microsoft .NET platform.

The .NET Framework is an environment for building, deploying, and running Web

applications and Web Services.

The Microsoft .NET Framework was developed to meet these needs.

.NET Frameworks keywords:

Easier and quicker programming

Reduced amount of code

Declarative programming model

Richer server control hierarchy with events

Larger class library

Better support for development tools

The .NET Framework consists of 3 main parts:

Programming languages:

C# (pronounced C sharp)

Visual Basic (VB.NET)

J# (pronounced j sharp)

Server technologies and client technologies:

ASP.NET (Active Server Pages)

Windows Forms (Windows Desktop Solutions)

Compact Framework (PDA/Mobile solutions)

Development environments:

Visual Studio.NET (VS.NET)

Visual Web Developer

Intermediate Language (MSIL)

The programmer can use any .NET language to write the code, including Visual Basic (VB), C#, JScript, etc. The result is then compiled to MSIL, the common language of .NET.

Common Language Runtime (CLR)

The CLR is described as the “execution engine” of .Net. It’s this CLR that

manages the execution of programs. It provides the environment within which the

programs run. The software version of .Net is actually the CLR version.

Working of the CLR

When the .Net program is compiled, the output of the compiler is not an

executable file but a file that contains a special type of code called the Microsoft

Intermediate Language (MSIL). This MSIL defines a set of portable instructions that are

independent of any specific CPU. It is the job of the CLR to translate this intermediate code into executable code when the program is executed, allowing the program to run in any environment for which the CLR is implemented; that is how the .NET Framework achieves portability. This MSIL is turned into executable code using a JIT (Just-In-Time)

compiler.

Class Libraries

Class library is the second major entity of the .Net Framework. This library gives

the program access to runtime environment. The class library consists of lots of

prewritten code that all the applications created in VB.net and Visual Studio.Net will use.

Common Language specification (CLS)

If we want the code which we write in a language to be used by programs in

other languages then it should adhere to the Common Language Specification (CLS). The

CLS describes a set of features that different languages have in common.

Some reasons why developers are building applications using the .NET

Framework:

Improved Reliability

Increased Performance

Developer Productivity

Powerful Security

Integration with existing Systems

Ease of Deployment

Mobility Support

XML Web service Support


Support for over 20 Programming Languages

Flexible Data Access

Languages supported by .NET Framework

The list below gives some of the languages supported by the .NET Framework and describes them:

APL

APL is one of the most powerful, consistent and concise programming languages

ever devised. It is a language for describing procedures in the processing of information.

It can be used to describe mathematical procedures having nothing to do with computers

or to describe the way a computer works.

C++

C++ is a true object-oriented programming language and one of the early such languages. C++ derives from the C language.

VC++

Visual C++ is the name of a C++ compiler with an integrated environment from

Microsoft. This includes special tools that simplify the development of great applications,

as well as specific libraries. Its use is known as Visual programming.

C#


C#, pronounced C Sharp, is a fully fledged object-oriented programming language from Microsoft built into the .NET Framework. First created in the late 1990s, it was part of Microsoft's whole .NET strategy.

COBOL

COBOL (Common Business Oriented Language) was the first widely-used high-

level programming language for business applications. It is considered to have more lines of code in existence than any other programming language.

6. TESTING

6.1 Testing Fundamentals

There are two types of testing. The first is unit testing, where the important classes are tested by writing a test application; here all the methods of the classes are tested for their defined functionality. The next level of testing is called black box testing or integration testing, where the software is tested as a whole for its functionalities. This kind of testing does not need any inside knowledge of the system, but the tester should know how to work with the software and the functionalities provided. Unit testing focuses verification effort on the smallest unit of software design, the module; using the detailed design description as a guide, internal control paths are tested to uncover errors within the boundary of the module. Unit testing is always white box oriented, and the step can be conducted in parallel for multiple modules.

6.2. Type of Testing Done

Testing forms a core part of any project, and there are various types of testing. In this system the following types of testing have been done:

White Box testing

Black box testing

Unit Testing

Integration Testing

Validation Testing

Output Testing

User Interface Testing

Data Testing

Execution Testing

6.2.1. White Box Testing

White-box testing, sometimes called glass-box testing, is a test case design method that

uses the control structure of the procedural design to derive test cases.

6.2.2. Black Box Testing

Black box testing focuses on the functional requirements of the software. That is, it enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for the program. Black box testing is not an alternative to white box testing. Rather, it is a complementary approach that is likely to uncover a different class of errors than white box methods. Black box testing attempts to find errors in the following categories:

Incorrect or missing functions

Interface errors

6.2.3. Unit Testing

This is the first level of testing. In it, the different modules are tested against the specifications produced during the design of the modules. Unit testing is done for the verification of the code produced during the coding of single program modules in an isolated environment. Unit testing first focuses on the modules independently of one another to locate errors. After coding, each dialog is tested and run individually. All unnecessary coding was removed, and it was ensured that all the modules worked as the programmer would expect. Logical errors found were corrected.

6.2.4. Integration Testing


Data can be lost across an interface; one module can have an adverse effect on another; and sub-functions, when combined, may not produce the desired major functions. Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with the interfaces. The objective is to take the unit-tested modules and test them as a whole. Here the correction is difficult because the vast expanse of the entire program complicates the isolation of causes. Thus, in the integration testing step, all the errors uncovered are corrected before the next testing steps.

6.2.5. Validation Testing

This provides the final assurance that the software meets all functional, behavioural and performance requirements. The software is completely assembled as a package. Validation succeeds when the software functions in the manner the user expects. Validation refers to the process of using the software in a live environment in order to find errors. During the course of validating the system, failures may occur, and sometimes the code has to be changed according to the requirements. Thus the feedback from the validation phase generally produces changes in the software.

Once the application was made free of all logical and interface errors, inputting dummy data ensured that the software developed satisfied all the requirements of the user.

6.2.6. Output Testing


After performing validation testing, the next step is output testing of the proposed system, since no system can be useful if it does not produce the required output. The generated output is considered in two ways: one is on screen and the other is in printed format. The output format on the screen was found to be correct, as the format was designed in the system design phase according to the user's needs. For the hard copy also, the output meets the requirements specified by the user. Hence output testing did not result in any correction to the system.

6.2.7. User Interface Testing

The user interface is a key factor in the success of any system. The system under consideration was tested for user acceptance by constantly keeping in touch with the prospective system users during development and making changes whenever required. Preparation of test data plays a vital role in system testing. After preparing the test data, the system under study was tested using it; errors uncovered in this way were corrected, and the corrections were noted for future use.

6.2.8. Data Testing

After preparing the test data, the system under study was tested using it. While testing the system with the test data, errors were again uncovered and corrected by applying the testing steps described above. Preparation of test data plays a vital role in system testing; all of the above tests were carried out with several different kinds of test data.

6.2.9. Execution Testing


Test data was prepared, consisting of the acknowledgement details and the information regarding the various departments in the case. An already existing file was taken from the database and its data was fed into the new system, and the various tests mentioned above were carried out. Initially there were bugs and drawbacks that prevented the user from completing the process; these were noted down and corrected later, and the same process was repeated three to four times. All the outputs generated were compared with the existing file documents, and the newly developed package runs properly and efficiently.

7. USER MANUAL


7.1 Hardware Specification

The hardware for the system was selected considering factors such as CPU processing speed, memory access speed, peripheral channel speed, seek time and rotational delay of the hard disk, communication speed, etc.

Processor - Pentium IV

RAM - 512 MB

HDD - 80 GB

Display - Color Monitor

Keyboard

Mouse

7.2 Software Specification

Operating System - Windows XP

Front-end - ASP.Net

Code-behind - VB.Net, C#

8. FUTURE ENHANCEMENTS


Our project does not use a database, so compression cannot yet be performed over a network. The planned enhancements include embedding security features into the system. At this stage of the project the security issues are addressed by implementing the session features of ASP.NET, but security really has to be addressed in a better and more robust manner, for example by embedding hashing algorithms into the solution (a brief sketch is given below). In the Huffman compression technique we are not able to show a preview of the selected image, and the Haar Discrete method only performs compression; it does not reconstruct the compressed image into its original form.

According to the needs arising in the long run, many additional options can be added to the A STUDY ON LOSSLESS IMAGE COMPRESSION solution. This solution can also be enhanced by adding new features alongside the existing ones.
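To make the hashing suggestion concrete, the sketch below shows one common way to store and verify a password using SHA-256 from System.Security.Cryptography instead of the plain-text comparison used in the Login form of Appendix B. The PasswordHasher class and the salt value are hypothetical illustrations, not part of the existing solution.

using System;
using System.Security.Cryptography;
using System.Text;

// Hypothetical helper: only a salted SHA-256 digest of the password is ever stored or compared.
static class PasswordHasher
{
    public static string Hash(string password, string salt)
    {
        using (SHA256 sha = SHA256.Create())
        {
            byte[] digest = sha.ComputeHash(Encoding.UTF8.GetBytes(salt + password));
            return Convert.ToBase64String(digest);
        }
    }

    public static bool Verify(string password, string salt, string storedHash)
    {
        // Recompute the digest for the supplied password and compare it with the stored value.
        return Hash(password, salt) == storedHash;
    }
}

// Usage idea (replacing the hard-coded check in Login.cmdlogin_Click):
//   string storedHash = PasswordHasher.Hash("welcome", "someSalt"); // computed once and kept in configuration
//   bool ok = PasswordHasher.Verify(txtpass.Text, "someSalt", storedHash);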

9. CONCLUSION


This system has been developed for the given conditions and has been found to work effectively. The developed system is flexible, and changes, whenever needed, can be made easily. The software has a clean user interface and was built in a simple manner using VB.Net, so that even a first-time user can use it without any problem.

The system is highly scalable and efficient, making interaction with the client easy. The pages in the solution are small in size, so the end user does not have to worry about download time, and responses are returned quickly.

The user-friendly interface successfully passed strict and severe validation checks using the test data. The results attained were fully satisfactory from the user's point of view. An attempt was made to document the software in a simple, precise and self-explanatory manner.

The system was verified with both valid and invalid data. It was built with an insight into the modifications that may be required in the future, so it can be maintained successfully without much rework.

APPENDIX A


Screen Layouts

Login Form

Enter User Name And Password

Methods Form


Select Huffman

Huffman Compression Home Page


Adding Files


Decoding

Saving Outputs


After Hamming

After Huffman

Encode


Adding File

After Adding File


Saving Encoded File

Haar Discrete Method


Selecting Haar Discrete Method

Haar Discrete Home Page

Adding File


After Adding File

Compressing And Saving


After Compressing

APPENDIX B


Source Code

Login

using System;
using System.Drawing;

using System.Text;

using System.Windows.Forms;

namespace FamilyCompression

{

public partial class Login : Form

{

public Login()

{

InitializeComponent();

}

private void cmdlogin_Click(object sender, EventArgs e)

{
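// Minimal hard-coded credential check; on success the method-selection form (frmmethod) is shown.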

if (txtuname.Text == "admin" && txtpass.Text == "welcome")

{

frmmethod frm = new frmmethod();

frm.Show();

// this.Close();

}

}

private string ucase(string p)

{

throw new Exception("The method or operation is not implemented.");

}}}

Select Method 1 Huffman


using System;

using System.Collections.Generic;

using System.ComponentModel;

using System.Data;

using System.Drawing;

using System.Text;

using System.Windows.Forms;

namespace FamilyCompression

{

public partial class frmmethod : Form

{

public frmmethod()

{

InitializeComponent();

}

private void cmdlogin_Click(object sender, EventArgs e)

{
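// Method selection: option 1 opens the Huffman (FamilyCompression) form; otherwise the Haar Discrete tool is launched as a separate executable.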

if (optmethod1.Checked == true)

{

FamilyCompression family = new FamilyCompression();

family.Show();

this.Close();

}

else

{

System.Diagnostics.Process.Start(@"C:\Users\VIJIL\Desktop\Image\ImageCompression\ImageCompression\obj\Debug\ImageCompression.exe");

}

}

private void optmethod1_CheckedChanged(object sender, EventArgs e)

{

}

}

}

Family Compression


using System;

using System.Collections.Generic;

using System.ComponentModel;

using System.Data;

using System.Drawing;

using System.Text;

using System.Windows.Forms;

using System.IO;

namespace FamilyCompression

{

public partial class FamilyCompression : Form

{

#region Member Variables

private DecodedInfo hammingRes = null;

private DecodedInfo familyRes = null;

private EncodedInfo encodedInfo = null;

#endregion // Member Variables

#region Construction

public FamilyCompression()

{

InitializeComponent();

this.SetControlsState(false);

} // FamilyCompression

#endregion // Construction

#region Private Methods

private bool ReadInputFile(out byte[] inputData, string fileName)

{
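// Read the whole input file into a byte array; shows an error dialog and returns false for an empty or unreadable file.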

inputData = null;

FileStream inputFile = null;

try

{

inputFile = new FileStream(fileName, FileMode.Open);

if (inputFile.Length == 0)

{


MessageBox.Show("Please don't load an empty file!", "Invalid File", MessageBoxButtons.OK,

MessageBoxIcon.Error);

inputFile.Close();

return false;

}

inputData = new byte[inputFile.Length];

inputFile.Read(inputData, 0, (int)inputFile.Length);

inputFile.Close();

}

catch (Exception e)

{

MessageBox.Show("Failed to open the input file.\n\n\n" + e, "Opening file",

MessageBoxButtons.OK, MessageBoxIcon.Error);

this.SetControlsState(false);

try

{

inputFile.Close();

}

catch { }

return false;

}

return true;

} // ReadInputFile

private void SaveOutputFile(byte[] data, string defultFileName)

{

if (data == null)

{

return;

}

SaveFileDialog dlg = new SaveFileDialog();

dlg.AddExtension = true;

dlg.Filter = "All files (*.*)|*.*";

dlg.Title = "Save Data After Hamming";

dlg.FileName = defultFileName;

if (dlg.ShowDialog() == DialogResult.OK)

{

try


{

FileStream fs = new FileStream(dlg.FileName, FileMode.CreateNew);

fs.Write(data, 0, data.Length);

fs.Close();

}

catch (Exception e)

{

MessageBox.Show("Failed to save the file.\n\n\n" + e, "Saving file", MessageBoxButtons.OK,

MessageBoxIcon.Error);

}

}

} // SaveOutputFile

private void SetControlsState(bool state)

{

if (!state)

{

this.ResetControls();

}

this.buttonDecode.Enabled = state;

this.buttonEncode.Enabled = state;

this.groupBoxDetails.Enabled = state;

this.labelProcedure.Enabled = state;

this.groupBoxSaveResults.Enabled = state;

this.progressBarProcedure.Enabled = state;

this.buttonSaveOriginalData.Enabled = false;

this.labelInputFileEntropy.Visible = false;

this.labelInputFileEntropy.ForeColor = Color.Black;

this.Refresh();

} // SetControlsState

private void ResetControls()

{

this.progressBarOriginalFileSize.Value = 0;

this.progressBarAfterHuffmanSize.Value = 0;

this.progressBarTheNewMethodSize.Value = 0;

this.labelOriginalFileSize.Text = "0";

this.labelAfterHuffmanSize.Text = "0";


this.labelTheNewMethodSize.Text = "0";

this.textBoxInputFile.Text = string.Empty;

this.buttonBrowse.Select();

} // ResetControls

private string[] OpenFile(bool multiselect)

{

OpenFileDialog dlgOpenFile = new OpenFileDialog();

dlgOpenFile.AddExtension = true;

dlgOpenFile.CheckFileExists = true;

dlgOpenFile.DefaultExt = "";

string filter = "All files (*.*)|*.*";

dlgOpenFile.Filter = filter;

dlgOpenFile.Title = "Open File";

dlgOpenFile.Multiselect = multiselect;

if (dlgOpenFile.ShowDialog() == DialogResult.OK)

{

return dlgOpenFile.FileNames;

}

return null;

} // OpenFile

#endregion // Private Methods

#region Public Methods

#endregion // Public Methods

#region Events

Adding Files

Browse button

private void buttonBrowse_Click(object sender, EventArgs e)

{

string[] fn = this.OpenFile(false);

if (fn != null)

{

this.SetControlsState(true);

this.textBoxInputFile.Text = fn[0];

}


} // buttonBrowse_Click

Add File

private void buttonAddFile_Click(object sender, EventArgs e)

{

string[] fn = this.OpenFile(false);

string im="";

if (fn != null)

{

this.SetControlsState(true);

for (int i = 0; i < fn.Length; i++)

{

ListViewItem itm = this.listViewFilesFamily.Items.Add(fn[i]);

textBoxInputFile.Text = fn[i];

im = textBoxInputFile.Text;

}

}

Decode

private void buttonDecode_Click(object sender, EventArgs e)

{
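// Decode flow: read the selected input file and display its size; compute its entropy and the
// entropy obtained for each "twin" file added to the list; if the best (lowest) twin entropy is
// below the input's own entropy, XOR that twin with the input before decoding; finally decode
// both the plain input and (when available) the XOR-combined data and report the resulting sizes.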

this.progressBarProcedure.Maximum = this.listViewFilesFamily.Items.Count;

byte[] data = null;

if (!this.ReadInputFile(out data, this.textBoxInputFile.Text))

{

return;}

this.progressBarOriginalFileSize.Maximum = data.Length;

this.progressBarOriginalFileSize.Value = data.Length;

this.labelOriginalFileSize.Text = this.progressBarOriginalFileSize.Value.ToString();

this.labelInputFileEntropy.ForeColor = Color.Black;

this.labelProcedure.Text = "Calculating Entropy..";

this.Refresh();

for (int i = 0; i < this.listViewFilesFamily.Items.Count; i++)


{

try

{

this.listViewFilesFamily.Items[i].SubItems[1].Text = string.Empty;

}

catch { }

}

Decoding decodingHamming = new Decoding(data);

double H = decodingHamming.CalcEntropy();

double[] Hs = new double[this.listViewFilesFamily.Items.Count];

this.labelInputFileEntropy.Text = H.ToString();

this.labelInputFileEntropy.Visible = true;

this.Refresh();

string twinFileName = string.Empty;

for (int i = 0; i < this.listViewFilesFamily.Items.Count; i++)

{

twinFileName = this.listViewFilesFamily.Columns[0].ListView.Items[i].Text;

if (!this.ReadInputFile(out data, twinFileName))

{

break;

}

Hs[i] = decodingHamming.CalcEntropyByTwin(data);

try

{

this.listViewFilesFamily.Items[i].SubItems[1].Text = Hs[i].ToString();

}

catch

{

this.listViewFilesFamily.Items[i].SubItems.Add(Hs[i].ToString());

}

this.progressBarProcedure.Value = i + 1;

this.Refresh();

}

int minIdx = 0;

for (int i = 1; i < Hs.Length; i++)

{


if (Hs[minIdx] > Hs[i])

{

minIdx = i;

}

}

try

{

twinFileName = this.listViewFilesFamily.Columns[0].ListView.Items[minIdx].Text;

}

catch { }

if (Hs.Length > 0 && Hs[minIdx] < H)

{

this.listViewFilesFamily.Items[minIdx].BackColor = Color.Pink;

if (!this.ReadInputFile(out data, twinFileName))

{

this.ResetControls();

return;

}

}

else

{

this.labelInputFileEntropy.ForeColor = Color.Red;

data = null;

}

this.labelProcedure.Text = "Decoding..";

this.progressBarProcedure.Value = 0;

this.Refresh();

this.hammingRes = new DecodedInfo();

decodingHamming.Decode(decodingHamming.InputData, out this.hammingRes,

this.progressBarProcedure);

string[] tempSplit = this.textBoxInputFile.Text.Split('\\');

this.hammingRes.originalFileName = tempSplit[tempSplit.Length - 1];

if (data != null)

{

this.familyRes = new DecodedInfo();

byte[] twin = (byte[])decodingHamming.InputData.Clone();

Compression.Xor(twin, data, twin);


Decoding decodingTwin = new Decoding(twin);

decodingTwin.Decode(twin, out this.familyRes, this.progressBarProcedure);

this.familyRes.twinFileName = twinFileName;

this.familyRes.originalFileName = tempSplit[tempSplit.Length - 1];

}

else

{

this.familyRes = null;

}

this.labelAfterHuffmanSize.Text = this.hammingRes.decodedData.Length.ToString();

this.progressBarAfterHuffmanSize.Maximum = this.progressBarOriginalFileSize.Maximum;

this.progressBarAfterHuffmanSize.Value = Math.Min(this.hammingRes.decodedData.Length,

this.progressBarAfterHuffmanSize.Maximum - 1);

this.buttonSaveDataAfterHamming.Enabled = true;

if (this.familyRes != null)

{

this.labelTheNewMethodSize.Text = this.familyRes.decodedData.Length.ToString();

this.progressBarTheNewMethodSize.Maximum = this.progressBarOriginalFileSize.Maximum;

this.progressBarTheNewMethodSize.Value = Math.Min(this.familyRes.decodedData.Length,

this.progressBarTheNewMethodSize.Maximum - 1);

this.buttonSaveDataNewMethod.Enabled = true;

}

else

{

this.labelTheNewMethodSize.Text = "0";

this.progressBarTheNewMethodSize.Value = 0;

this.buttonSaveDataNewMethod.Enabled = false;

}

this.progressBarProcedure.Value = 0;

this.labelProcedure.Text = "Ready";

double x1 = (double)Convert.ToDouble(this.labelOriginalFileSize.Text);

double x2 = (double)Convert.ToDouble(this.labelAfterHuffmanSize.Text);

double x3 = (double)Convert.ToDouble(this.labelTheNewMethodSize.Text);

double y1 = x1 - x2;

double y2 = x1 - x3;

this.textBox1.Text = y1.ToString();


this.textBox2.Text = y2.ToString();

} // buttonDecode_Click

Save After Hamming

private void buttonSaveDataAfterHamming_Click(object sender, EventArgs e)

{

SaveFileDialog dlg = new SaveFileDialog();

dlg.AddExtension = true;

dlg.Filter = "Compressed files (*.ham)|*.ham|All files (*.*)|*.*";

dlg.Title = "Save Data After Hamming";

if (dlg.ShowDialog() == DialogResult.OK)

{

try

{

Encoding.Save(this.hammingRes, dlg.FileName);

}

catch (Exception ex)

{

MessageBox.Show("Failed to save the file.\n\n\n" + ex, "Saving file",

MessageBoxButtons.OK, MessageBoxIcon.Error);

}

}} // buttonSaveDataAfterHamming_Click

Save After Huffman

private void buttonSaveDataNewMethod_Click(object sender, EventArgs e)

{

SaveFileDialog dlg = new SaveFileDialog();

dlg.AddExtension = true;

dlg.Filter = "Compressed files (*.ham)|*.ham|All files (*.*)|*.*";

dlg.Title = "Save Data After the new method";

if (dlg.ShowDialog() == DialogResult.OK)

{

try

{


Encoding.Save(this.familyRes, dlg.FileName);

}

catch (Exception ex)

{

MessageBox.Show("Failed to save the file.\n\n\n" + ex, "Saving file",

MessageBoxButtons.OK, MessageBoxIcon.Error);

}

}

} // buttonSaveDataNewMethod_Click

Clear

private void buttonClear_Click(object sender, EventArgs e)

{

this.listViewFilesFamily.Items.Clear();

this.textBoxInputFile.Clear();

this.progressBarOriginalFileSize.Value = 0;

this.progressBarAfterHuffmanSize.Value = 0;

this.progressBarTheNewMethodSize.Value = 0;

this.labelOriginalFileSize.Text = "";

this.labelAfterHuffmanSize.Text = "";

this.labelTheNewMethodSize.Text = "";

this.textBox1.Clear();

this.textBox2.Clear();

labelInputFileEntropy.Text="";

} // buttonClear_Click

Encode

private void buttonEncode_Click(object sender, EventArgs e)

{

try

{

DecodedInfo inputDecodedInfo = null;

Encoding.Load(out inputDecodedInfo, this.textBoxInputFile.Text);

this.labelProcedure.Text = "Encoding..";

EncodedInfo enc = new EncodedInfo();


enc.ecodedData = Encoding.Encode(inputDecodedInfo, this.progressBarProcedure);

enc.originalFileName = inputDecodedInfo.originalFileName;

this.buttonSaveOriginalData.Enabled = inputDecodedInfo != null;

this.encodedInfo = enc;

}

catch (Exception ex)

{

MessageBox.Show("Failed to encode the input file.\n\n\n" + ex, "Encoding",

MessageBoxButtons.OK, MessageBoxIcon.Error);

}this.labelProcedure.Text = "Ready..";

this.progressBarProcedure.Value = 0;

} // buttonEncode_Click

Saving After Encode

private void buttonSaveOriginalData_Click(object sender, EventArgs e)

{

try

{

if (this.encodedInfo == null)

{

throw new Exception("Bad file format.");

}

this.SaveOutputFile(this.encodedInfo.ecodedData, this.encodedInfo.originalFileName);

}

catch (Exception ex)

{

MessageBox.Show("Failed to save the file.\n\n\n" + ex, "Saving file", MessageBoxButtons.OK,

MessageBoxIcon.Error);

}

} // buttonSaveOriginalData_Click

About

private void buttonAbout_Click(object sender, EventArgs e)

{

(new About()).ShowDialog();


} // buttonAbout_Click

Method 2: Haar Discrete Method

Title Form

Public Class Form1

Dim i As Integer

Private Sub Timer1_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer1.Tick
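' Splash timer: after 25 ticks, open the main ImageCompression form and hide this title form.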

i = i + 1

If i > 25 Then

Timer1.Enabled = False

Dim x As New ImageCompression

x.Show()

Me.Hide()

End If

End Sub

Open Image

Imports System.IO

Public Class ImageCompression

Private Sub OpenImageToolStripMenuItem_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles OpenImageToolStripMenuItem.Click
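' Let the user choose an image, show it, and pre-fill the desired output size with half of the original file size.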

If ofdOriginal.ShowDialog() = Windows.Forms.DialogResult.OK Then

Try

picOriginal.Image = LoadImage(ofdOriginal.FileName)

picNew.Image = Nothing

txtCompressionLevel.Clear()

Dim file_info As New FileInfo(ofdOriginal.FileName)

txtOriginalSize.Text = FormatBytes(file_info.Length)

txtDesiredSize.Text = FormatBytes(file_info.Length * 0.5)

txtCompressedSize.Clear()


txtCompressionLevel.Clear()

Catch ex As Exception

MessageBox.Show("Error loading picture file " & _

ofdOriginal.FileName & vbCrLf & ex.Message, _

"File Load Error", _

MessageBoxButtons.OK, _

MessageBoxIcon.Error)

End Try

End If

End Sub

Load Image

Private Function LoadImage(ByVal file_name As String) As Bitmap

Using bm As New Bitmap(file_name)

Dim bm2 As New Bitmap(bm)

Return bm2

End Using

End Function

Compression

Private Sub HeaderDiscreateMethodToolStripMenuItem_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles HeaderDiscreateMethodToolStripMenuItem.Click

Try

Dim desired_size As Double = FormattedBytesToBytes(txtDesiredSize.Text)

If desired_size < 10 Then

MessageBox.Show("Invalid desired size.", _

"Invalid Size", MessageBoxButtons.OK, _

MessageBoxIcon.Error)

txtDesiredSize.Focus()

Exit Sub

End If

If sfdJpeg.ShowDialog() = Windows.Forms.DialogResult.OK Then

picNew.Image = Nothing


txtCompressionLevel.Clear()

txtCompressedSize.Clear()

Dim file_name As String = sfdJpeg.FileName

Dim file_size As Long
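' Try successively lower JPEG quality levels until the in-memory result fits within the desired size, then write that version to disk and display it.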

For compression_level As Integer = 100 To 10 Step -1

Dim memory_stream As MemoryStream = SaveJpegIntoStream(picOriginal.Image, compression_level)

file_size = memory_stream.Length

If file_size <= desired_size Then

My.Computer.FileSystem.WriteAllBytes(file_name, memory_stream.ToArray(), False)

picNew.Image = LoadImage(file_name)

txtCompressionLevel.Text = compression_level

txtCompressedSize.Text = FormatBytes(file_size)

Exit For

End If

Next compression_level

End If

Catch ex As Exception

MessageBox.Show("Error saving file." & _

vbCrLf & ex.Message, "Save Error", _

MessageBoxButtons.OK, MessageBoxIcon.Error)

End Try

End Sub

JPEG Conversion

Imports System.Drawing.Imaging

Imports System.IO

Module JpegCompression

Private Function GetEncoderInfo(ByVal mimeType As String) As ImageCodecInfo
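' Search the installed image encoders for the one matching the requested MIME type (for example "image/jpeg").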

Dim HeaderDescrete As ImageCodecInfo()

HeaderDescrete = ImageCodecInfo.GetImageEncoders()

For i As Integer = 0 To HeaderDescrete.Length - 1

If HeaderDescrete(i).MimeType = mimeType Then

Return HeaderDescrete(i)

End If

Next i

Return Nothing


End Function

Byte Conversion

Module ByteSizes

Private Const ONE_KB As Double = 1024

Private Const ONE_MB As Double = ONE_KB * 1024

Private Const ONE_GB As Double = ONE_MB * 1024

Private Const ONE_TB As Double = ONE_GB * 1024

Private Const ONE_PB As Double = ONE_TB * 1024

Private Const ONE_EB As Double = ONE_PB * 1024

Private Const ONE_ZB As Double = ONE_EB * 1024

Private Const ONE_YB As Double = ONE_ZB * 1024

Public Function FormatBytes(ByVal num_bytes As Double) As String
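' Format a raw byte count as a human-readable string (bytes, KB, MB, ...) with at most three significant digits.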

If num_bytes <= 1023 Then

Return Format$(num_bytes, "0") & " bytes"

ElseIf num_bytes <= ONE_KB * 1023 Then

Return ThreeNonZeroDigits(num_bytes / ONE_KB) & " " & "KB"

ElseIf num_bytes <= ONE_MB * 1023 Then

Return ThreeNonZeroDigits(num_bytes / ONE_MB) & " " & "MB"

ElseIf num_bytes <= ONE_GB * 1023 Then

Return ThreeNonZeroDigits(num_bytes / ONE_GB) & " " & "GB"

ElseIf num_bytes <= ONE_TB * 1023 Then

Return ThreeNonZeroDigits(num_bytes / ONE_TB) & " " & "TB"

ElseIf num_bytes <= ONE_PB * 1023 Then

Return ThreeNonZeroDigits(num_bytes / ONE_PB) & " " & "PB"

ElseIf num_bytes <= ONE_EB * 1023 Then

Return ThreeNonZeroDigits(num_bytes / ONE_EB) & " " & "EB"

ElseIf num_bytes <= ONE_ZB * 1023 Then

Return ThreeNonZeroDigits(num_bytes / ONE_ZB) & " " & "ZB"

Else

Return ThreeNonZeroDigits(num_bytes / ONE_YB) & " " & "YB"

End If

End Function

Private Function ThreeNonZeroDigits(ByVal value As Double) As String

If value >= 100 Then

Return Format$(CInt(value))


ElseIf value >= 10 Then

Return Format$(value, "0.0")

Else

Return Format$(value, "0.00")

End If

End Function

Public Function FormattedBytesToBytes(ByVal txt As String) As Double
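' Parse a human-readable size such as "1.5 MB" back into a raw number of bytes.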

Dim base_value As Double = Val(txt)

Dim ending As String = txt.Trim(New Char() {"0"c, "1"c, "2"c, "3"c, "4"c, "5"c, "6"c, "7"c, "8"c, "9"c, "."c, " "c})

ending = ending.Trim().ToLower()

Select Case ending

Case "bytes", "byte", "b"

Return base_value

Case "kb"

Return base_value * ONE_KB

Case "mb"

Return base_value * ONE_MB

Case "gb"

Return base_value * ONE_GB

Case "tb"

Return base_value * ONE_TB

Case "pb"

Return base_value * ONE_PB

Case "eb"

Return base_value * ONE_EB

Case "zb"

Return base_value * ONE_ZB

Case "yb"

Return base_value * ONE_YB

Case Else

Throw New ArgumentException("Invalid size " & txt)

End Select

End Function

End Module

Saving


Public Sub SaveCompressedJpeg(ByVal image As Image, ByVal file_name As String, ByVal compression_level As Long)
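' Save the image as a JPEG at the given quality level (10-100) to the named file (an in-memory copy is also produced).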

If compression_level < 10 Then

Throw New ArgumentException("Compression level must be between 10 and 100")

End If

Dim encoder_params As EncoderParameters = New EncoderParameters(1)

encoder_params.Param(0) = New EncoderParameter(Encoder.Quality, compression_level)

Dim image_codec_info As ImageCodecInfo = GetEncoderInfo("image/jpeg")

Dim mem As New MemoryStream()

image.Save(mem, image_codec_info, encoder_params)

image.Save(file_name, image_codec_info, encoder_params)

End Sub

Public Function SaveJpegIntoStream(ByVal image As Image, ByVal compression_level As Long) As MemoryStream
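' Encode the image as a JPEG at the given quality level into a MemoryStream so its size can be checked without writing to disk.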

If compression_level < 10 Then

Throw New ArgumentException("Compression level must be between 10 and 100")

End If

Dim encoder_params As EncoderParameters = New EncoderParameters(1)

encoder_params.Param(0) = New EncoderParameter(Encoder.Quality, compression_level)

Dim image_codec_info As ImageCodecInfo = GetEncoderInfo("image/jpeg")

Dim memory_stream As New MemoryStream()

image.Save(memory_stream, image_codec_info, encoder_params)

Return memory_stream

End Function

End Module

Exit

Private Sub ExitToolStripMenuItem2_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles ExitToolStripMenuItem2.Click

Application.Exit()

End Sub


APPENDIX C

Abbreviations

PSNR : Peak Signal-to-Noise Ratio

MSE : Mean Squared Error

MOS : Mean Opinion Score

PQS : Picture Quality Scale

DPCM : Differential Pulse Code Modulation

ASCII : American Standard Code for Information Interchange

CLR : Common Language Runtime

VQ : Vector Quantization

MSIL : Microsoft Intermediate Language

VB : Visual Basic

JIT : Just In Time

CLS : Common Language Specification

COBOL : Common Business Oriented Language

CR : Compression Ratio

GIF : Graphics Interchange Format

JPEG : Joint Photographic Experts Group

PNG : Portable Network Graphics

JBIG : Joint Bi-level Image Experts Group

DFD : Data Flow Diagram


APPENDIX D

References

Books

1. ASP.NET: The Complete Reference by Matthew MacDonald, published by Tata McGraw-Hill

2. Digital Image Processing (Second Edition) by Rafael C. Gonzalez and Richard E. Woods, published by Pearson Education

3. Image Processing: Theory, Algorithms, and Architectures by Maher A. Sid-Ahmed, published by McGraw-Hill

4. System Analysis & Design (4th Edition) by Lee, published by O'Reilly Media

5. Programming ASP.NET by Jesse Liberty, published by O'Reilly Media

Websites

1. www.codingproject.com

2. www.aspfree.com

3. www.w3schools.com

4. www1.fatbrain.com

5. www.huffman.com

6. www.harrdiscrete.com
