Optical Watermarking Literature Survey
DESCRIPTION
A literature survey of optical watermarking technology, with descriptions of the DCT, WHT, and Haar DWT used to extract watermark information embedded in images.
1. INTRODUCTION
The protection of copyrights for digital-image content has become increasingly important because more and more digital images are distributed over the Internet, and because digital content can be copied with no loss relative to the original. Digital watermarking is an effective way of protecting such content from illegal copying, and various digital watermarking techniques for digital images have been developed. Digital watermarking has also recently been applied to printed images, where the watermark is embedded in the digital data before it is printed; this prevents images captured from prints by digital cameras or scanners from being used illegally.
However, whether the watermark is in an image shown on an electronic display or in a printed image, conventional digital watermarking rests on the premise that the people who want to protect the copyright of their content possess the original digital data, since the watermark is embedded by digital processing. There are cases where this premise does not apply. One such case arises when images are produced illegally by people photographing real objects that are valuable in themselves, e.g., works at museums painted by famous artists, or the faces of celebrities on stage. Images captured from such objects with digital cameras or other image-input devices have been vulnerable to illegal use because they contain no digital watermark. A new technique, optical watermarking, has therefore been proposed to protect famous paintings, sculptures in museums, and similar objects. This optical watermarking technique provides better protection for such images.
2. LITERATURE SURVEY
Before going into detail on the watermarking procedure, let us briefly review what an image is.
2.1 Converting an Image into a Digital Image
Before any image can be processed, it must be converted into a digital image; converting a natural scene or still image into digital content is done with a digital camera. Nowadays the digital camera is part of everyday life, capturing moments and storing them on a memory card such as a micro SD card. Depending on the camera's resolution and the compression used, an image may occupy only a few kilobytes, so a large number of pictures can be stored on a single card. To see how cameras are made reliable and sophisticated, consider the basic structure of a digital camera.
Fig 2.1. Converting Image into Digital Image
Representation of Digital Images:
An image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the intensity values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are called picture elements, image elements, or pixels; pixel is the term most widely used to denote the elements of a digital image.
Types of Digital Images:
For photographic purposes, there are two important types of digital images: color and black-and-white. Color images are made up of colored pixels, while black-and-white images are made of pixels in different shades of gray.
Black and White Images
A black-and-white image is made up of pixels, each of which holds a single number corresponding to the gray level of the image at a particular location. These gray levels span the full range from black to white in a series of very fine steps, normally 256 different grays, which is more than adequate since the eye can barely distinguish about 200 different gray levels. Assuming 256 gray levels, each black-and-white pixel can be stored in a single byte (8 bits) of memory.
Color Images
A color image is made up of pixels each of which holds three numbers corresponding to
the red, green, and blue levels of the image at a particular location. Red, green, and blue
(sometimes referred to as RGB) are the primary colors for mixing light—these so-called additive
primary colors are different from the subtractive primary colors used for mixing paints (cyan,
magenta, and yellow). Any color can be created by mixing the correct amounts of red, green, and
blue light. Assuming 256 levels for each primary, each color pixel can be stored in three bytes (24 bits) of memory. This corresponds to roughly 16.7 million different possible colors. Note that for images of the same size, a color version uses three times as much memory as a black-and-white version.
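The storage figures above can be checked with a short calculation; the image dimensions below are illustrative assumptions, not values from the text.

```python
# Storage required for a grayscale vs. a color image of the same size.
width, height = 1920, 1080           # example dimensions (an assumption)
gray_bytes = width * height * 1      # 8 bits = 1 byte per pixel
color_bytes = width * height * 3     # 24 bits = 3 bytes per pixel (RGB)

print(gray_bytes)    # 2073600
print(color_bytes)   # 6220800
print(2 ** 24)       # 16777216 distinct 24-bit colors, roughly 16.7 million
```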
2.2 Image Sampling and Quantization
From the discussion in the preceding section, we see that there are numerous ways to
acquire images, but our objective in all is the same: to generate digital images from sensed data.
The output of most sensors is a continuous voltage waveform whose amplitude and spatial
behavior are related to the physical phenomenon being sensed. To create a digital image, we
need to convert the continuous sensed data into digital form. This involves two processes:
sampling and quantization.
Basic Concepts in Sampling and Quantization
The basic idea behind sampling and quantization is illustrated in Fig. 2.2 below, which shows a continuous image f that we want to convert to digital form. An image may be continuous with
respect to the x- and y-coordinates, and also in amplitude. To convert it to digital form, we have
to sample the function in both coordinates and in amplitude. Digitizing the coordinate values is
called sampling. Digitizing the amplitude values is called quantization.
The one-dimensional function in Fig. 2.2.(b) is a plot of amplitude (intensity level) values
of the continuous image along the line segment AB in Fig. 2.2.(a). The random variations are due
to image noise. To sample this function, we take equally spaced samples along line AB, as shown
in Fig. 2.2.(c). The spatial location of each sample is indicated by a vertical tick mark in the
bottom part of the figure. The samples are shown as small white squares superimposed on the
function. The set of these discrete locations gives the sampled function. However, the values of
the samples still span (vertically) a continuous range of intensity values. In order to form a
digital function, the intensity values also must be converted (quantized) into discrete quantities.
The right side of Fig. 2.2.(c) shows the intensity scale divided into eight discrete intervals,
ranging from black to white. The vertical tick marks indicate the specific value assigned to each
of the eight intensity intervals. The continuous intensity levels are quantized by assigning one of
the eight values to each sample. The assignment is made depending on the vertical proximity of a
sample to a vertical tick mark. The digital samples resulting from both sampling and quantization
are shown in Fig. 2.2.(d). Starting at the top of the image and carrying out this procedure line by
line produces a two-dimensional digital image. It is implied in Fig. 2.2. that, in addition to the
number of discrete levels used, the accuracy achieved in quantization is highly dependent on the
noise content of the sampled signal. Sampling in the manner just described assumes that we have
a continuous image in both coordinate directions as well as in amplitude.
Fig 2.2. Generating a digital image. (a) Continuous image. (b) A scan line from A to B in the continuous image, used to illustrate the concepts of sampling and quantization. (c) Sampling and quantization. (d) Digital scan line.
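The sampling-and-quantization procedure just described can be sketched numerically; the continuous profile and the eight-level quantizer below are illustrative assumptions chosen to mirror Fig. 2.2.

```python
import numpy as np

# Sample a continuous 1-D intensity profile at equally spaced points
# (sampling), then map each sample to the nearest of 8 discrete levels
# (quantization), as in Fig. 2.2.
t = np.linspace(0.0, 1.0, 16)              # 16 equally spaced samples along AB
f = 0.5 + 0.4 * np.sin(2 * np.pi * t)      # continuous amplitude in [0.1, 0.9]

levels = np.linspace(0.0, 1.0, 8)          # 8 intensity levels, black to white
nearest = np.argmin(np.abs(f[:, None] - levels[None, :]), axis=1)
quantized = levels[nearest]                # each sample snapped to a level

# Every quantized value is now one of the 8 allowed levels.
assert set(np.round(quantized, 6)).issubset(set(np.round(levels, 6)))
```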
Fig 2.3. (a) Continuous image projected onto a sensor array. (b) Result of image
sampling and quantization.
2.3 Watermarking
What is Watermarking?
A watermark is a recognizable image or pattern that appears on fine paper or on documents to prevent counterfeiting. In the digital case it is an embedded overlay on a digital photo consisting of text or copyright information. It is prominently used for tracking copyright infringement and for authentication.
Classification of Watermark Algorithms
In this section we discuss different classifications of watermarking algorithms. First, according to the type of document, watermarking techniques can be divided into four groups:
a) Text watermarking
b) Image watermarking
c) Audio watermarking
d) Video watermarking
Second, based on human perception, watermarking algorithms are divided into two categories, as described below.
Visible Watermarking:
Visible watermarks are easily perceived by the human eye; that is, a visible watermark can be seen without any extraction process. For example, it can be the name or logo of a company.
Invisible Watermarking:
An invisible watermark cannot be seen by the human eye. It is embedded in the data without affecting the content and can be extracted only by the owner.
Robust Watermark:
A digital watermark is called robust if it resists a designated class of transformations. Robust watermarks may be used in copy-protection applications to carry copy-control and access-control information.
Fragile watermark:
A digital watermark is called fragile if it fails to be detectable after the slightest modification.
Fragile watermarks are commonly used for integrity proof.
2.4 Attributes of Digital Image Watermarking
The requirements for image watermarking can be treated as characteristics, properties or
attributes of image watermarking. Different applications demand different properties of
watermarking. Requirements of image watermarking vary and result in various design issues
depending on image watermarking applications and purpose [4]. These requirements need to be
taken into consideration while designing a watermarking system. There are five basic requirements, as follows.
Fidelity:
Fidelity can be considered a measure of the perceptual transparency or imperceptibility of the watermark. It refers to the similarity of the un-watermarked and watermarked images. This perspective of watermarking exploits the limitations of human vision: watermarking should not introduce visible distortions, as these reduce the commercial value of the watermarked image.
Robustness:
Watermarks should not be removable, intentionally or unintentionally, by simple image-processing operations; hence watermarks should be robust against a variety of such attacks. Robust watermarks are designed to resist normal processing. On the other hand, fragile watermarks are designed to reveal any attempt to change the digital content.
Data Payload:
Data payload is also known as the capacity of the watermarking scheme. It is the maximum amount of information that can be hidden without degrading image quality, and it can be evaluated by the amount of hidden data. This property describes how much data can be embedded as a watermark so that it can still be successfully detected during extraction.
Security:
A secret key has to be used in the embedding and detection processes when security is a major concern. There are three types of keys used in watermarking systems: private keys, detection keys, and public keys. Attackers should not be able to remove the watermark even through reverse engineering of the algorithm.
Computational Complexity:
Computational complexity indicates the amount of time a watermarking algorithm takes to encode and decode. Ensuring the security and validity of the watermark generally requires more computation; conversely, real-time applications demand both speed and efficiency.
2.5 WATERMARKING APPLICATIONS
Copyright Protection:
Watermarking can be used to protect copyrighted material against redistribution over untrusted networks such as the Internet or peer-to-peer (P2P) networks. Content-aware networks could incorporate watermarking technologies to report or filter out copyrighted material from such networks.
Content Archiving:
Watermarking can be used to insert a digital object identifier or serial number to help archive digital content such as images, audio, or video. It can also be used for classifying and organizing digital content. Normally digital content is identified by file name; however, this is a very fragile technique, as file names can easily be changed. Embedding the object identifier within the object itself reduces the possibility of tampering and can therefore be used effectively in archiving systems.
Meta-data Insertion:
Meta-data is data that describes other data. Images can be labeled with their content and then used in search engines; audio files can carry the lyrics or the name of the singer; journalists could use photographs of an incident to carry the cover story of the respective news item; medical X-rays could store patient records.
Broadcast Monitoring:
Broadcast monitoring refers to cross-verifying whether content that was supposed to be broadcast (on TV or radio) really was broadcast, and watermarking can be used for this purpose. A major application is commercial advertising, where the advertiser wants to verify that the advertisement was actually broadcast at the right time and for the right duration.
Tamper Detection:
Tampering with digital content can be detected by embedding fragile watermarks: if the fragile watermark is destroyed or degraded, this indicates tampering, and the digital content cannot be trusted. Tamper detection is very important for applications that involve highly sensitive data, such as satellite or medical imagery. It is also useful in a court of law, where digital images could serve as forensic evidence of whether an image has been tampered with.
Digital Fingerprinting:
Digital fingerprinting is a technique used to identify the owner or recipient of digital content. Fingerprints are unique to each user, so different copies of a single digital object carry different fingerprints.
2.6 Principle of Digital Watermarking
Fig 2.4. Principle of Digital Watermarking
A watermarking system is divided into two distinct steps: embedding and detection. In the embedding process, the algorithm accepts the host and the data to be embedded and produces a watermarked signal, which is then transmitted or stored. The watermarked image is then passed through a decoder, in which a reverse algorithm is applied to retrieve the watermark. Different techniques use different ways of embedding the watermark in the cover object, and during embedding and extraction a secret key is used to prevent illegal access to the watermark. A practical and useful watermarking scheme has to meet the following requirements.
Robustness: A digital watermarking scheme should be able to resist attacks on or modifications of the original file, such as resizing, file compression, and rotation. Several intentional or unintentional attacks may be applied to remove the embedded watermark; thus the watermarked image has to survive legitimate usage such as resampling, format conversion, lossy compression, and other operations. A robust watermarking scheme should still recognize the retrieved watermark, and the image quality should not be seriously harmed.
Imperceptibility: A visible or an invisible watermark can be embedded into an image. A visible watermark is perceptible and behaves much like noise, so a noise-removal process can remove it; to reduce this risk of cracking, most proposed watermarking techniques use invisible watermarks. The quality of the watermarked image is also very important: if the embedding process degrades the image, the watermarked image loses its value and may even draw the attention of attackers. Imperceptibility is therefore a key requirement, and the quality difference between the original and watermarked images should not be serious.
Ready embedding and retrieval: The watermark should be securely and easily embedded and retrieved by the owner of the original image.
Data load or capacity: The maximum amount of data that can be embedded into the image while still ensuring proper retrieval of the watermark during extraction.
Blindness: Some conventional watermarking schemes require the original image in order to retrieve the embedded watermark. Reversible watermarking schemes, however, can recover the original image from the watermarked image directly; since the retrieval process does not need the original image, we call reversible watermarking blind.
Transparency: This refers to the perceptual similarity between the watermarked image and the original image. The inserted watermark should be imperceptible. The watermark may degrade the quality of the digital content, but in some applications a small amount of degradation may be accepted in exchange for higher robustness.
Fig 2.5. A visible watermark pattern on an image.
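As a minimal sketch of the embed/detect cycle described above, the following adds a key-seeded pseudo-random invisible watermark to a host image and detects it by correlation. The additive spread-spectrum-style scheme, the strength value, and the thresholds are illustrative assumptions, not the method of any surveyed paper; detection here is non-blind (it uses the original image).

```python
import numpy as np

def embed(host, key, strength=2.0):
    # Embedding: add a secret, key-seeded +/-1 pattern to the host image.
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=host.shape)
    return host + strength * pattern

def detect(marked, host, key):
    # Non-blind detection: correlate the difference image with the
    # key-seeded pattern; only the correct key yields a high response.
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=host.shape)
    return float(np.mean((marked - host) * pattern))

host = np.random.default_rng(0).uniform(0, 255, (64, 64))
marked = embed(host, key=1234)

assert detect(marked, host, key=1234) > 1.9       # correct key: response ~ strength
assert abs(detect(marked, host, key=9999)) < 1.0  # wrong key: response near zero
```

With the correct key the response equals the embedding strength exactly, because the +/-1 pattern correlates perfectly with itself; with a wrong key the two patterns are nearly uncorrelated.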
3. Existing System
3.1 Optical Watermarking
Figure 3.1 outlines the basic concept underlying our technology of watermarking that uses light to embed information. An object is illuminated by light that contains invisible
information on watermarking. As the illumination itself contains the watermarking information,
the image of a photograph of an object that is illuminated by such illumination also contains
watermarking. By digitizing this photographic image of the real object, the watermarking
information in binary data can be extracted in the same way as that with the conventional
watermarking technique. To be more precise, information to be embedded is first transformed
into binary data, “1” or “0,” and it is then transformed into a pattern that differs depending on
whether it is “1” or “0.” This pattern is transformed into an optical pattern and projected onto a
real object. It is this difference in the pattern that is read out from the captured image. Some applications that use invisible patterns utilize infrared light; however, infrared light cannot be used for our purposes, because cameras usually have a filter that cuts off infrared light, so the invisible pattern is not contained in the captured image of the object even though it is contained in the optically projected image on the object. Therefore, the technique we propose uses visible light, and the pattern is made invisible by using fine patterns or low-contrast patterns, both of which are below the resolving power of the human visual system. Using this method, the pattern
can be made invisible in both an optically projected image on the object and the image of the
object captured with the camera.
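The bit-to-pattern mapping described above can be sketched as follows. The block size, the contrast offset, and the choice of horizontal vs. vertical stripes for "1" vs. "0" are illustrative assumptions, not the exact patterns used in the surveyed work.

```python
import numpy as np

def bits_to_mask(bits, block=8, delta=0.02):
    # Map each watermark bit to a low-contrast block pattern:
    # '1' -> horizontal stripes, '0' -> vertical stripes, then tile the
    # blocks into one projection mask around uniform brightness 1.0.
    stripe = np.where(np.arange(block) % 2 == 0, delta, -delta)
    one = 1.0 + np.tile(stripe[:, None], (1, block))   # horizontal stripes
    zero = 1.0 + np.tile(stripe[None, :], (block, 1))  # vertical stripes
    blocks = [one if b else zero for b in bits]
    return np.hstack(blocks)

mask = bits_to_mask([1, 0, 1, 1])
print(mask.shape)  # (8, 32)
# The mask stays within +/- delta of uniform illumination, so the
# projected contrast remains below the visibility threshold assumed here.
assert np.all(np.abs(mask - 1.0) <= 0.02 + 1e-12)
```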
The light source used in this technology projects the watermarking pattern similar to a
projector. Since the projected pattern has to be imperceptible to the human visual system, the
brightness distribution given by this light source then looks uniform to the observer over the
object, which is the same as that with the conventional illumination. The brightness of the
object’s surface is proportional to the product of the reflectance of the surface of the object and
illumination by an incident light. Therefore, when a photograph of this object is taken, the image
on the photograph contains watermarking information, even though this cannot be seen. The
main feature of the technology we propose is that the watermark can be added by light. Therefore, this technology can be applied to objects that cannot be electronically embedded with a watermark, such as pictures painted by artists.
Fig. 3.1 Basic concept underlying technology of watermarking that uses light to embed data.
In the base paper the authors used frequency-domain techniques to embed the watermark, i.e., to project an invisible watermark onto pictures displayed at museums and pictures of celebrities to protect them from illegal use. These frequency-domain techniques are the DFT, DCT, WHT, and DWT, in particular the Haar discrete wavelet transform. Let us go through these frequency-domain techniques.
3.2 Techniques Used in Existing System
Discrete Cosine Transform:
The DCT is the most popular transform used in signal processing. It transforms a signal from the spatial domain to the frequency domain and, due to its good performance, has been adopted in the JPEG standard for image compression. DCT-based techniques are more robust than spatial-domain techniques: such algorithms resist simple image-processing operations like brightness adjustment, blurring, contrast changes, and low-pass filtering [3]. However, they are more difficult to implement and computationally more expensive. The one-dimensional DCT is useful for processing one-dimensional signals such as speech waveforms; for analysis of two-dimensional (2D) signals such as images, we need a 2D version of the DCT. The 2D DCT and 2D inverse DCT are given by equations (1) and (2).
Formula of the 2-D DCT:

F_{i,j}(u,v) = \frac{C(u)\,C(v)}{N \cdot N} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f_{i,j}(x,y)\, \cos\!\left\{\frac{(2x+1)u\pi}{2N}\right\} \cos\!\left\{\frac{(2y+1)v\pi}{2N}\right\} \quad (1)

Formula of the 2-D inverse DCT:

f_{i,j}(x,y) = \sum_{u=0}^{N-1} \sum_{v=0}^{N-1} C(u)\,C(v)\, F_{i,j}(u,v)\, \cos\!\left\{\frac{(2x+1)u\pi}{2N}\right\} \cos\!\left\{\frac{(2y+1)v\pi}{2N}\right\} \quad (2)

where

C(u) = \begin{cases} 1, & u = 0 \\ \sqrt{2}, & u \neq 0 \end{cases} \qquad C(v) = \begin{cases} 1, & v = 0 \\ \sqrt{2}, & v \neq 0 \end{cases}
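Equations (1) and (2) can be checked with a direct, unoptimized numerical sketch; the 8x8 block size is an illustrative assumption.

```python
import numpy as np

def c(k):
    # Normalization factor from the text: C(0) = 1, C(k) = sqrt(2) otherwise.
    return 1.0 if k == 0 else np.sqrt(2.0)

def dct2(f):
    # 2-D DCT of one N x N block, a direct implementation of equation (1).
    N = f.shape[0]
    F = np.zeros((N, N))
    x = np.arange(N)
    for u in range(N):
        for v in range(N):
            basis = np.outer(np.cos((2 * x + 1) * u * np.pi / (2 * N)),
                             np.cos((2 * x + 1) * v * np.pi / (2 * N)))
            F[u, v] = c(u) * c(v) / (N * N) * np.sum(f * basis)
    return F

def idct2(F):
    # 2-D inverse DCT, a direct implementation of equation (2).
    N = F.shape[0]
    f = np.zeros((N, N))
    x = np.arange(N)
    for u in range(N):
        for v in range(N):
            basis = np.outer(np.cos((2 * x + 1) * u * np.pi / (2 * N)),
                             np.cos((2 * x + 1) * v * np.pi / (2 * N)))
            f += c(u) * c(v) * F[u, v] * basis
    return f

rng = np.random.default_rng(0)
block = rng.uniform(0, 255, (8, 8))
assert np.allclose(idct2(dct2(block)), block)  # perfect reconstruction
```

The round trip is exact (up to floating-point error) because the cosine basis with this normalization is orthogonal, confirming that the forward and inverse pairs above are consistent.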
Walsh Hadamard Transform:
The Hadamard transform is a non-sinusoidal, orthogonal transformation that decomposes a signal into a set of orthogonal, rectangular waveforms called Walsh functions. The transformation is real and requires no multipliers, because the amplitude of the Walsh (or Hadamard) functions takes only the two values +1 and -1.
The Hadamard matrix is a square array of plus and minus ones whose rows (and columns) are
orthogonal to one another.
Forward Walsh-Hadamard transform:

F_{i,j}(u,v) = \frac{1}{N} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f_{i,j}(x,y)\, wh(u,x)\, wh(y,v)

When a 2-D inverse WHT (i-WHT) is used, the equation is expressed by

f_{i,j}(x,y) = \frac{1}{N} \sum_{u=0}^{N-1} \sum_{v=0}^{N-1} F_{i,j}(u,v)\, wh(x,u)\, wh(v,y)

where wh(i, j) denotes a component of the Walsh-Hadamard matrix, f_{i,j}(x,y) is the watermarked image data for pixel (x,y) of block (i,j) in real space, F_{i,j}(u,v) is the data for component (u,v) of block (i,j) in frequency space, and N is the number of pixels in the block in the x and y directions.
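The WHT pair above can be sketched in matrix form, since wh(u,x) are the entries of the symmetric Hadamard matrix H and H·H = N·I; the 8x8 block size is an illustrative assumption.

```python
import numpy as np

def hadamard(n):
    # Sylvester construction: H_1 = [1]; H_{2m} = [[H, H], [H, -H]].
    # n must be a power of two (natural/Hadamard ordering, not sequency order).
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def wht2(f):
    # Forward 2-D WHT with the 1/N normalization used in the text.
    N = f.shape[0]
    H = hadamard(N)
    return H @ f @ H / N

def iwht2(F):
    # Inverse 2-D WHT; same form because H is symmetric and H @ H = N * I.
    N = F.shape[0]
    H = hadamard(N)
    return H @ F @ H / N

rng = np.random.default_rng(1)
block = rng.integers(0, 256, (8, 8)).astype(float)
assert np.allclose(iwht2(wht2(block)), block)  # perfect reconstruction
```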
Fig 3.2. Producing watermarks using DCT and WHT
Introduction to WAVELETS:
Wavelets are functions that satisfy certain mathematical requirements and are used in representing data or other functions. The idea is not new: approximation using superposition of functions has existed since the early 1800s, when Joseph Fourier discovered that he could superpose sines and cosines to represent other functions. In wavelet analysis, however, the scale at which we look at the data plays a special role. Wavelet algorithms process data at different scales and resolutions: if we look at a signal with a large "window", we notice gross features; if we look at it with a small "window", we notice small features. The result in wavelet analysis is to see both the forest and the trees.
Discrete Wavelet Transform:
The wavelet transform is a modern technique frequently used in digital image processing, compression, watermarking, etc. The transform is based on small waves, called wavelets, of varying frequency and limited duration. A wavelet series is a representation of a square-integrable function by a certain orthonormal series generated by a wavelet. Furthermore, the wavelet transform decomposes the original signal into coefficients that retain position information, and the original signal can be completely reconstructed by performing the inverse wavelet transform on these coefficients. The basic idea of the DWT is that a one-dimensional signal is divided into two parts, a high-frequency part and a low-frequency part; the low-frequency part is then split again, and the process continues until the desired level is reached. The edge components of the signal are contained in the high-frequency part. At each level of the DWT decomposition, an image separates into four parts: the approximation image (LL) and the horizontal (HL), vertical (LH), and diagonal (HH) detail components. In the DWT decomposition the input size must be a multiple of 2^n, where n represents the number of levels. The DWT provides sufficient information for analysis and synthesis of the original signal while requiring less computation time. Watermarks are embedded in these regions to help increase the robustness of the watermark.
Haar Wavelet Transform:
Recently, wavelet-based watermarking schemes have begun to attract greatly increased
attention. The main reasons for inserting watermarks in the wavelet domain are that it has good
space-frequency localization, superior HVS modeling, and low computational cost. In practice,
when a watermark is to be embedded in the wavelet domain, there are many wavelet bases to choose from. Since different bases have different characteristics, the choice of which basis to use to embed the watermark is important; the Haar wavelet has been found to be suitable for watermarking images.
Let I(x, y) denote a digital image of size 2M×2N; if the image is not of this size, boundary prolongation should be used to ensure that the size of the image is divisible by 2, which is necessary for the Haar wavelet transform.
The wavelet low-pass and high-pass filters are h(n) and g(n) respectively. Then the image can be
decomposed into its various resolutions based on the approximate weight (LL) and the detailed
weights of the horizontal direction (HL), vertical direction (LH), and diagonal direction (HH).
The decomposition formula is:
LL(i,j) = \sum_{x,y} h(x-2i)\, h(y-2j)\, I(x,y)

LH(i,j) = \sum_{x,y} h(x-2i)\, g(y-2j)\, I(x,y)

HL(i,j) = \sum_{x,y} g(x-2i)\, h(y-2j)\, I(x,y)

HH(i,j) = \sum_{x,y} g(x-2i)\, g(y-2j)\, I(x,y)
Fig.3.3. Two-level wavelet decomposed image.
where i, j, N ∈ Z+, x, y ∈ Z, −2L+1 ≤ x−2i ≤ 0, −2L+1 ≤ y−2j ≤ 0. On this basis, a similar decomposition procedure can be implemented on LL to get the two-level wavelet-transformed image, as shown in Fig. 3.3, and so on. Wavelet image reconstruction is the inverse transform of the wavelet decomposition. The formula is:
I(x,y) = \sum_{i,j} h(x-2i)\, h(y-2j)\, LL(i,j) + \sum_{i,j} h(x-2i)\, g(y-2j)\, LH(i,j) + \sum_{i,j} g(x-2i)\, h(y-2j)\, HL(i,j) + \sum_{i,j} g(x-2i)\, g(y-2j)\, HH(i,j)
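A minimal one-level Haar decomposition and its inverse, consistent with the formulas above but written in 2x2 block form rather than as explicit sums over the h and g filters (the two forms are equivalent for the Haar filters h = [1,1]/√2, g = [1,−1]/√2); the subband labels follow one common convention and are an assumption here.

```python
import numpy as np

def haar_dwt2(I):
    # One-level 2-D Haar decomposition via 2x2 block sums and differences.
    # Image side lengths must be even.
    a, b = I[0::2, 0::2], I[0::2, 1::2]
    c, d = I[1::2, 0::2], I[1::2, 1::2]
    LL = (a + b + c + d) / 2.0   # approximation
    HL = (a - b + c - d) / 2.0   # detail along x
    LH = (a + b - c - d) / 2.0   # detail along y
    HH = (a - b - c + d) / 2.0   # diagonal detail
    return LL, HL, LH, HH

def haar_idwt2(LL, HL, LH, HH):
    # Inverse transform: perfect reconstruction from the four subbands.
    h, w = LL.shape
    I = np.zeros((2 * h, 2 * w))
    I[0::2, 0::2] = (LL + HL + LH + HH) / 2.0
    I[0::2, 1::2] = (LL - HL + LH - HH) / 2.0
    I[1::2, 0::2] = (LL + HL - LH - HH) / 2.0
    I[1::2, 1::2] = (LL - HL - LH + HH) / 2.0
    return I

rng = np.random.default_rng(2)
img = rng.uniform(0, 255, (8, 8))
LL, HL, LH, HH = haar_dwt2(img)
assert np.allclose(haar_idwt2(LL, HL, LH, HH), img)  # perfect reconstruction
```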
3.3 Trouble in Present or Existing System
Excluding the Fourier transform, the techniques described above, and the DWT in particular, suffer from four fundamental, intertwined problems:
Problem 1: Shift Variance
Problem 2: Oscillations
Problem 3: Aliasing
Problem 4: Lack of Directionality
Problem 1: Shift Variance
A small shift of the signal greatly perturbs the pattern of wavelet-coefficient oscillation around singularities. Shift variance also complicates wavelet-domain processing: algorithms must be made capable of coping with the wide range of possible wavelet-coefficient patterns caused by shifted singularities. To better understand wavelet-coefficient oscillations and shift variance, consider a piecewise-smooth signal x(t − t0) like the step function

u(t) = \begin{cases} 0, & t < 0 \\ 1, & t \geq 0 \end{cases}

analyzed by a wavelet basis having a sufficient number of vanishing moments [6]. Its wavelet coefficients consist of samples of the step response of the wavelet,

d(j,n) \approx 2^{-3j/2}\, \Delta \int_{-\infty}^{2^{j} t_0 - n} \psi(t)\, dt

where Δ is the height of the jump. Since ψ(t) is a bandpass function that oscillates around zero, so does its step response d(j, n) as a function of n. Moreover, the factor 2^j in the upper limit (j ≥ 0) amplifies the sensitivity of d(j, n) to the time shift t0, leading to strong shift variance.
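Shift variance can be observed numerically even with the simplest (Haar) DWT; the step-edge signals below are illustrative assumptions.

```python
import numpy as np

def haar_dwt1(x):
    # One-level 1-D Haar DWT: approximation and detail coefficients.
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

# A step edge at two positions differing by a one-sample shift.
step0 = np.concatenate([np.zeros(8), np.ones(8)])
step1 = np.concatenate([np.zeros(9), np.ones(7)])  # shifted right by 1

_, d0 = haar_dwt1(step0)
_, d1 = haar_dwt1(step1)

# The detail coefficients are not a shifted copy of one another: the edge
# falls exactly on a block boundary in one case (all detail coefficients
# zero) and inside a block in the other (one nonzero coefficient).
assert np.allclose(d0, 0.0)
assert np.count_nonzero(np.abs(d1) > 1e-12) == 1
```

A one-sample shift of the input thus changes the coefficient pattern qualitatively, which is exactly the shift variance described above.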
Problem 2: Oscillations
Since wavelets are bandpass functions, the wavelet coefficients tend to oscillate between positive and negative around singularities. This considerably complicates wavelet-based processing, making singularity extraction and signal modeling in particular very challenging [22]. Moreover, since an oscillating function often passes through zero, the conventional wisdom that singularities yield large wavelet coefficients is overstated: it is quite possible for a wavelet overlapping a singularity to have a small or even zero wavelet coefficient.
Problem 3: Aliasing
The wide spacing of the wavelet-coefficient samples, or equivalently the fact that the wavelet coefficients are computed via iterated discrete-time downsampling operations interspersed with nonideal low-pass and high-pass filters, results in substantial aliasing. The inverse DWT cancels this aliasing, of course, but only if the wavelet and scaling coefficients are not changed [6]. Any wavelet-coefficient processing (thresholding, filtering, quantization) upsets the delicate balance between the forward and inverse transforms, leading to artifacts in the reconstructed signal.
Problem 4: Lack of Directionality
Finally, while Fourier sinusoids in higher dimensions correspond to highly directional plane waves, the standard tensor-product construction of M-D wavelets produces a checkerboard pattern that is simultaneously oriented along several directions. This lack of directional selectivity greatly complicates modeling and processing of geometric image features like ridges and edges.
4. PROPOSED SYSTEM
4.1 Introduction
The aim of the project is to achieve better accuracy in extracting the watermark information embedded in an image at the watermark-extraction module.
Today the whole world runs on computers connected via the Internet, and the latest technologies make the communication of data, whether audio, text, video, or image, very easy. At the same time, disturbances and attacks on data are quite common, but such attacks should not degrade the performance of the communication system or corrupt the data transmitted. Many generic schemes have therefore been introduced to protect data from attacks that would modify the original content. Day by day, data is transmitted more securely over the Internet, yet attacks are also becoming more severe, so shifting to the most capable technique is worthwhile.
The most promising technique for protecting data from illegal modification is watermarking. Watermarking arose from steganography, but the disadvantage of steganography is that the hidden information cannot be recovered after manipulation; digital watermarking, in contrast, plays a key role in embedding the watermark information in the data and recovering it even after manipulation. As described in the literature survey, digital watermarking can be performed with the frequency-domain techniques explained above, namely the DCT, WHT, Haar DWT, etc., and these techniques suffer from the four fundamental, intertwined shortcomings explained above.
Fortunately, there is a simple solution to these four DWT shortcomings. A new scheme based on complex wavelets is therefore proposed for embedding information into the image: the dual-tree complex wavelet transform (DT-CWT). This technique is applied to the same existing optical watermarking setup for a set of images, and the results are compared with those of the previous techniques.
4.2 BLOCK DIAGRAM
[Fig. 4.1 depicts the processing pipeline: project watermark pattern → painting/human face →
painting taken with camera (watermarked images, varied for different HC values) → extract
watermarking (inverse transform) → calculate accuracy ratio.]
Fig 4.1. Block diagram of the proposed system.
Project Watermark pattern:
This is the first stage of our experiment. We need to choose a watermark pattern (a logo
or any other information) to be projected onto a real object, such as a museum painting or an
archaeological monument. After the pattern is chosen, it is projected onto the selected object
using a projector; the light source used in this technology projects the watermark pattern much
like a projector does. Since the projected pattern has to be imperceptible to the human visual
system, the brightness distribution given by this light source looks uniform to the observer over
the object, the same as with conventional illumination. The brightness of the object's surface is
proportional to the product of the reflectance of the surface and the illumination by the incident
light. Therefore, when a photograph of this object is taken, the image in the photograph contains
the watermark information, even though it cannot be seen.
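The brightness relation above (observed brightness proportional to surface reflectance times incident illumination) can be sketched numerically. This is a minimal NumPy illustration, not the actual system; the reflectance values and the ±2% modulation depth are assumptions chosen only for demonstration:

```python
import numpy as np

# Illustrative model: the camera records, per pixel, the product of the
# surface reflectance and the incident illumination.
reflectance = np.array([[0.8, 0.5],
                        [0.3, 0.9]])          # assumed surface reflectance (0..1)

# Watermark pattern carried by the light: a uniform base level with a
# small, imperceptible modulation (+/- 2%, an assumed depth).
illumination = 100.0 * np.array([[1.02, 0.98],
                                 [0.98, 1.02]])

captured = reflectance * illumination          # what the camera records

# Dividing out an estimate of reflectance * base illumination recovers the
# hidden modulation, even though it is invisible to the eye.
recovered = captured / (reflectance * 100.0)
print(recovered)                               # the +/-2% watermark pattern
```

In practice the reflectance is not known exactly, which is why the extraction stage works in a transform domain rather than dividing images directly.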
Painting/Human Face:
This is the subject of the experiment. To apply a watermark we need an object; here we
consider a real painting or a human face, onto which the chosen pattern is projected with a
projector.
Painting taken with camera:
Here the object carrying the projected watermark pattern is photographed with a digital
camera for the subsequent extraction stage. The output of the camera is a digital image with a
watermark embedded by light.
Vary for Different HC Values:
The watermarked area is divided into units of pixel blocks, and each block has a DC
component that gives the average brightness of the entire watermarked area, i.e., the brightness
of the illumination. Every block also has the highest-frequency component (HC) in both the x
and y directions to express the 1-bit binary information for watermarking. We used the phase of
the HC to express the binary data, i.e., “0” or “1.”
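The idea of carrying one bit per block in the phase (sign) of the highest-frequency component can be sketched with a block DCT. This is a simplified stand-in for the paper's embedding, assuming a flat 8×8 block and an arbitrary coefficient strength of 4.0:

```python
import numpy as np

def dct_matrix(N):
    # Orthonormal DCT-II basis matrix (rows are basis vectors).
    n = np.arange(N)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0, :] *= np.sqrt(1.0 / N)
    C[1:, :] *= np.sqrt(2.0 / N)
    return C

N = 8
C = dct_matrix(N)
block = np.full((N, N), 128.0)            # a flat 8x8 block of illumination

def embed_bit(block, bit, strength=4.0):
    coeffs = C @ block @ C.T              # forward 2D DCT
    # The sign ("phase") of the highest-frequency component carries the bit.
    coeffs[N - 1, N - 1] = strength if bit == 1 else -strength
    return C.T @ coeffs @ C               # inverse 2D DCT

def extract_bit(block):
    coeffs = C @ block @ C.T
    return 1 if coeffs[N - 1, N - 1] > 0 else 0

marked = embed_bit(block, 1)
print(extract_bit(marked))                   # -> 1
print(extract_bit(embed_bit(block, 0)))      # -> 0
```

The spatial pattern produced by flipping this sign is the high-frequency checker texture that the projector adds on top of the uniform illumination.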
Transform Techniques:
Here we apply transform techniques such as the DCT, the DWT, and the dual-tree
complex wavelet transform to extract the embedded watermark.
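As one example of these orthogonal transforms, a 2D Walsh–Hadamard transform of a pixel block can be built from the Sylvester construction. This is a minimal sketch, not the report's MATLAB code:

```python
import numpy as np

def hadamard(N):
    # Sylvester construction; N must be a power of two.
    H = np.array([[1.0]])
    while H.shape[0] < N:
        H = np.block([[H, H], [H, -H]])
    return H

N = 8
Hn = hadamard(N) / np.sqrt(N)         # orthonormal, symmetric WHT matrix

block = np.random.default_rng(0).uniform(0, 255, (N, N))
coeffs = Hn @ block @ Hn.T            # forward 2D WHT of one pixel block
restored = Hn @ coeffs @ Hn.T         # the WHT is self-inverse

print(np.allclose(restored, block))   # -> True
```

Because the normalized Hadamard matrix is symmetric and orthonormal, the same operation serves as both the forward and the inverse transform, which keeps the extraction stage simple.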
Calculating Accuracy:
This is the last step of our experiment. This stage computes the ratio of correctly detected
watermarked pixels to the total number of watermarked pixels. The accuracy of detection of the
embedded data read out from the watermarked image was evaluated as the ratio of correctly read
data to all embedded data in the watermarked image, where blocks of “0” and “1” were
positioned alternately, like a checkerboard pattern.
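The accuracy ratio described above can be sketched as follows; the 4×4 grid of blocks and the single read error are illustrative assumptions:

```python
import numpy as np

# Ground truth: blocks of "0" and "1" positioned alternately, like a
# checkerboard. `detected` stands in for the bits the extractor returns.
rows, cols = 4, 4
embedded = np.indices((rows, cols)).sum(axis=0) % 2   # checkerboard bits

detected = embedded.copy()
detected[0, 0] ^= 1                                   # simulate one read error

# Accuracy = correctly read bits / all embedded bits.
accuracy = np.mean(detected == embedded)
print(accuracy)                                       # -> 0.9375 (15 of 16)
```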
4.3 FLOW CHART
[The flow chart of the extraction procedure is: Start → take the watermarked image → divide
the image into N×N pixel blocks (4×4, 8×8, or 16×16) → apply the inverse transform (inverse
DCT, inverse WHT, or dual-tree complex wavelet transform) → extract the watermark for
different HC values → compare the accuracy → conclude → End.]
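The block-division step in the flow chart can be sketched in NumPy. The 16×16 test image here is a stand-in, and the helper name is our own:

```python
import numpy as np

def split_into_blocks(image, n):
    # Divide an image into non-overlapping n x n pixel blocks.
    h, w = image.shape
    assert h % n == 0 and w % n == 0, "image must tile evenly into blocks"
    return (image.reshape(h // n, n, w // n, n)
                 .swapaxes(1, 2)
                 .reshape(-1, n, n))

image = np.arange(16 * 16, dtype=float).reshape(16, 16)
for n in (4, 8, 16):
    blocks = split_into_blocks(image, n)
    print(n, blocks.shape)   # (16, 4, 4), then (4, 8, 8), then (1, 16, 16)
```

Each returned block is then handed to the inverse transform and the per-block HC read-out described above.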
5. TOOLS REQUIRED
The proposed system requires MATLAB (and MATLAB code), a still image, a projector, and a
digital camera.
MATLAB:
The name MATLAB stands for MATrix LABoratory. MATLAB was originally written
to provide easy access to the matrix software developed by the LINPACK (linear system
package) and EISPACK (eigensystem package) projects.
MATLAB is a high-performance language for technical computing. It integrates
computation, visualization, and programming in one environment. Furthermore, MATLAB is a
modern programming-language environment: it has sophisticated data structures, contains
built-in editing and debugging tools, and supports object-oriented programming. These factors
make MATLAB an excellent tool for teaching and research.
MATLAB has many advantages over conventional computer languages (e.g., C,
FORTRAN) for solving technical problems. MATLAB is an interactive system whose basic data
element is an array that does not require dimensioning. The software package has been
commercially available since 1984 and is now considered a standard tool at most universities
and in industry worldwide. It has powerful built-in routines that enable a very wide variety of
computations, and easy-to-use graphics commands that make the visualization of results
immediately available. Specific applications are collected in packages referred to as toolboxes.
There are toolboxes for signal processing, symbolic computation, control theory, simulation,
optimization, and several other fields of applied science and engineering. In industry, MATLAB
is the tool of choice for high-productivity research, development, and analysis.
As a high-performance language for technical computing, MATLAB integrates computation,
visualization, and programming in an easy-to-use environment where problems and solutions
are expressed in familiar mathematical notation. Typical uses include:
Mathematics and computation
Algorithm development
Data acquisition
Modeling, simulation, and prototyping
Data analysis, exploration, and visualization
Scientific, engineering and financial graphics
Application development, including graphical user interface building.
CONCLUSION
We proposed an optimal condition for the size of the pixel blocks of an orthogonal
transform used for a robust optical watermarking technique. The experimental results proved
that the technique is practical and that the accuracy of detection of data embedded with optical
watermarking can be improved with more pixels in each block. They revealed that, under
conditions of very weak embedded watermarking, the accuracy of detection using a block of
16×16 pixels reached 100%, except when the Haar DWT was used to produce the watermarked
images and a complicated, structured image was used as the object image. We also clarified that
robustness against various disturbances becomes a trade-off when optimizing the embedded
watermarking data, as the volume of information that can be embedded into the watermarked
image using blocks of 16×16 pixels is lower than that using blocks of 4×4 or 8×8 pixels. As a
result, we concluded that the maximum volume of embedded bits per unit block size under
conditions of 100% detection accuracy can be determined in optical watermarking.
When the Haar DWT was used, the accuracy of detection was rather inferior to that with
the DCT and WHT. However, as the general features of the DWT indicate that the pixel
resolution in real space and the spatial-frequency resolution in frequency space are independent,
the accuracy of detection could be improved when more pixels are used in a block of the
conversion base for the DWT. We next intend to evaluate the optimal pixel size in the
conversion base to obtain sufficiently accurate detection with the DWT.
REFERENCES
1. Komori and Uehira, “Optical watermarking technology for protecting portrait rights,”
Journal of Electronic Imaging.
2. Y. Ishikawa, K. Uehira, and K. Yanaka, “Optical watermarking technique robust to
geometrical distortion in image,” in Proc. ISSPIT 2010, 2010, pp. 67–72.
3. Y. Ishikawa, K. Uehira, and K. Yanaka, “Illumination watermarking technique using
orthogonal transforms,” in Proc. IAS 2009, 2009, pp. 257–260.
4. O. Matoba et al., “Optical techniques for information security,” Proc. IEEE 97(6),
pp. 1128–1148, 2009.
5. International Journal of Advanced Computer and Mathematical Sciences, ISSN 2230-9624,
vol. 3, issue 1, 2012, pp. 194–204.
6. IEEE Signal Processing Magazine, ISSN 1053-5888, 2005.