7/23/2019 Automatic Image Inpainting
COMPUTER ORIENTED PROJECT OC 303
A Project Report
on
AUTOMATIC IMAGE INPAINTING
by
Akash Khaitan
08DDCS547
FST
THE ICFAI UNIVERSITY
DEHRADUN
2ND SEMESTER, 2010-11
CERTIFICATE
Certified that the project work entitled, AUTOMATIC IMAGE INPAINTING,
has been carried out by Mr. Akash Khaitan, I.D. No. 08DDCS547, during the II
Semester, 2010-2011. It is also certified that all the modifications suggested
have been incorporated in the report. The project report partially fulfills the
requirement in respect of Computer Oriented Project OC 303.
Signature of the Instructor Signature of the Student
Date :
Place: FST, ICFAI University
Dehradun
Acknowledgement
I would like to thank my project guide Prof. Laxman Singh Sayana whose constant guidance,
suggestions and encouragement helped me throughout the work.
I would also like to thank Prof. Ranjan Mishra, Prof. Rashid Ansari and Prof. Sudeepto
Bhatacharya for their help in understanding some of the concepts.
I would also like to thank my family and friends, who have been a source of encouragement and
inspiration throughout the duration of the project. I would like to thank the entire CSE family for
making my stay at ICFAI University a memorable one.
Abstract
The Automatic Image Inpainting project removes unwanted objects from an image once the
user selects the object, thereby reducing manual work. It uses the idea of interpolating
the pixels to be removed from their neighboring pixels. The entire work has been
implemented and tested in Java, as it provides appropriate image libraries for processing images.
1. Introduction
Image inpainting provides a means to restore damaged regions of an image so that the
image looks complete and natural after the inpainting process. Inpainting originally refers
to the restoration of cracks and other defects in works of art, for which a wide variety of
materials and techniques are used.
Automatic (digital) inpainting is used to restore old photographs to their original condition.
The purpose of image inpainting is the removal of damaged portions of a scratched image by
completing the area with the surrounding (neighboring) pixels. The techniques used include the
analysis and use of pixel properties in the spatial and frequency domains.
Image inpainting techniques are also used for object removal (or image completion) in
symmetrical images.
2. Image Processing Basics
In order to understand image inpainting clearly, one should first go through this section,
which covers the basic ideas of image processing required for inpainting.
This chapter briefly describes the following topics:
- Digital Image
- Pixel
- Image Types
- Point Operations
- Convolution Operations
2.1 Digital Image
The image projected by the camera is a two-dimensional, time-dependent, continuous
distribution of light energy.
In order to convert a continuous image into a digital image, three steps are necessary:
- The continuous light distribution must be spatially sampled.
- The resulting function must then be sampled in the time domain to create a single image.
- The resulting values must be quantized to a finite range of integers so that they are
  representable within a computer.
Fig 2.1 a. Continuous image; b. Discrete image; c. Finite range of integers (pixel values)
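The quantization step can be illustrated with a short Java sketch. The class and method names here are my own, not part of the report's code:

```java
class Quantize {
    // Quantize a continuous light intensity (0.0 .. 1.0) to one of 256
    // integer levels -- the final step of digitizing an image.
    static int quantize(double intensity) {
        int level = (int) Math.round(intensity * 255.0);
        return Math.max(0, Math.min(255, level)); // clamp to the finite range
    }

    public static void main(String[] args) {
        System.out.println(quantize(0.5)); // mid-grey -> 128
    }
}
```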
2.2 Pixel
In digital imaging, a pixel (or picture element) is a single point in a raster image. The pixel is the smallest addressable screen element; it is the smallest unit of picture that can be
controlled. Each pixel has its own address. The address of a pixel corresponds to its
coordinates. Pixels are normally arranged in a two-dimensional grid, and are often
represented using dots or squares. Each pixel is a sample of an original image; more samples
typically provide more accurate representations of the original. The intensity of each pixel is
variable. In color image systems, a color is typically represented by three or four component
intensities such as red, green, and blue, or cyan, magenta, yellow, and black.
2.2.1 Pixel Resolution
The term resolution is often used for a pixel count in digital imaging. When the pixel count
is referred to as resolution, the convention is to describe the pixel resolution with a pair of
positive integers, where the first number is the number of pixel columns (width)
and the second is the number of pixel rows (height), for example 640 by 480. Another
popular convention is to cite resolution as the total number of pixels in the image, typically
given as number of megapixels, which can be calculated by multiplying pixel columns by
pixel rows and dividing by one million.
Below is an illustration of how the same image might appear at different pixel resolutions, if
the pixels were poorly rendered as sharp squares (normally, a smooth image reconstruction
from pixels would be preferred, but for illustration of pixels, the sharp squares make the
point better).
Fig 2.2
An image that is 2048 pixels in width and 1536 pixels in height has a total of 2048 × 1536 =
3,145,728 pixels, or 3.1 megapixels. One could refer to it as 2048 by 1536 or as a 3.1-megapixel
image.
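The megapixel calculation above is simple arithmetic; a hypothetical Java helper (not from the report's code) makes it concrete:

```java
class Megapixels {
    // Pixel count = columns * rows; megapixels = pixel count / 1,000,000.
    static double megapixels(int width, int height) {
        return (long) width * height / 1_000_000.0;
    }

    public static void main(String[] args) {
        // 2048 * 1536 = 3,145,728 pixels
        System.out.println(megapixels(2048, 1536));
    }
}
```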
2.3 Image Types
Bit Depth Colours Available
1-bit Black and White
2-bit 4 colours
4-bit 16 colours
8-bit 256 colours
8-bit greyscale 256 shades of grey
16-bit 65,536 colours
24-bit 16.7 million colours
32-bit 16.7 million+ 256 Levels of transparency
The number of colours in an image is determined by the number of bits per pixel, given by
the formula 2^n, where n is the number of bits.
An illustration for a 24-bit image is given below.
2.3.1 24-bit image - 16 million colours
With a 24-bit image, you have 16 million colours, made up from 256 shades of red,
256 shades of green and 256 shades of blue. All the colours are made from varying
amounts of these primary colours; for example, (0, 0, 0) is black and (255, 255, 255) is
white. (255, 0, 0) is red, (0, 255, 0) is green and (0, 0, 255) is blue. (255, 255, 0) makes
yellow, (255, 0, 255) makes magenta and (0, 255, 255) makes cyan.
Fig 2.3 24-bit colour combinations
Each value of 0-255 takes up 8 bits, so the total amount of space needed to define the colour of
each pixel is 24 bits.
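The 8-bits-per-channel layout described above can be demonstrated by packing and unpacking a 24-bit RGB value in Java (class and method names are illustrative only):

```java
class RgbPacking {
    // Pack three 8-bit channel values into one 24-bit RGB integer:
    // red occupies bits 16-23, green bits 8-15, blue bits 0-7.
    static int pack(int r, int g, int b) {
        return (r << 16) | (g << 8) | b;
    }

    // Extract the individual channels back out of the packed value.
    static int red(int rgb)   { return (rgb >> 16) & 0xFF; }
    static int green(int rgb) { return (rgb >> 8) & 0xFF; }
    static int blue(int rgb)  { return rgb & 0xFF; }

    public static void main(String[] args) {
        int yellow = pack(255, 255, 0);                 // (255, 255, 0) -> yellow
        System.out.println(Integer.toHexString(yellow)); // ffff00
    }
}
```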
2.4 Point Operations
Point operations modify the pixels of an image independently of the neighboring pixels; for
example, a particular pixel can be selected on the basis of its color. Point operations matter
here because the inpainting described in a later chapter selects the image coordinates that
have a particular color.
Some of the operations that can be performed as point operations are:
- Conversion of an RGB image to a grey image
- Conversion of an RGB image to a single-color image
- Inversion of an image
- Modification of selected pixels on the basis of color
Each operation above is performed on every pixel individually, giving a resultant image with
the required operation.
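As an illustration, a grey-conversion point operation can be written in Java as below. The averaging formula (R+G+B)/3 is one common choice and an assumption here, not necessarily the one used in the report:

```java
import java.awt.image.BufferedImage;

class PointOps {
    // Convert an RGB image to greyscale: each pixel is replaced by the
    // average of its red, green and blue components, independently of its
    // neighbours -- the defining property of a point operation.
    static BufferedImage toGrey(BufferedImage src) {
        BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(),
                BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < src.getHeight(); y++) {
            for (int x = 0; x < src.getWidth(); x++) {
                int rgb = src.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                int grey = (r + g + b) / 3;
                out.setRGB(x, y, (grey << 16) | (grey << 8) | grey);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(2, 1, BufferedImage.TYPE_INT_RGB);
        img.setRGB(0, 0, 0xFF0000);                         // pure red
        int grey = toGrey(img).getRGB(0, 0) & 0xFFFFFF;
        System.out.println(Integer.toHexString(grey));      // 555555 (85,85,85)
    }
}
```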
2.5 Convolution Operations
Convolution is a common image processing technique that changes the intensities of a pixel
to reflect the intensities of the surrounding pixels. A common use of convolution is to create
image filters; using convolution, you can get popular image effects like blur, sharpen, and
edge detection.
2.5.1 Convolution Kernels
The height and width of the kernel do not have to be the same, though they must both be odd
numbers. The numbers inside the kernel are what determine the overall effect of the
convolution: the kernel (or, more specifically, the values held within it) determines how to
transform the pixels of the original image into the pixels of the processed image.
Fig 2.4 Kernel
Convolution is a series of operations that alter pixel intensities depending on the intensities of
neighboring pixels. The kernel provides the actual numbers that are used in those operations.
Using kernels to perform convolutions is known as kernel convolution.
Convolutions are per-pixel operations: the same arithmetic is repeated for every pixel in the
image. Bigger images therefore require more convolution arithmetic than the same operation
on a smaller image. A kernel can be thought of as a two-dimensional grid of numbers that
passes over each pixel of an image in sequence, performing calculations along the way. Since
images can also be thought of as two-dimensional grids of numbers, applying a kernel to an
image can be visualized as a small grid (the kernel) moving across a substantially larger grid
(the image).
The numbers in the kernel represent the amount by which to multiply the number underneath
it. The number underneath represents the intensity of the pixel over which the kernel element
is hovering. During convolution, the center of the kernel passes over each pixel in the image.
The process multiplies each number in the kernel by the pixel intensity value directly
underneath it. This should result in as many products as there are numbers in the kernel (per
pixel). The final step of the process sums all of the products together, divides the sum by the
number of values in the kernel, and this value becomes the new intensity of the pixel that
was directly under the center of the kernel.
Fig 2.5 Convolution kernel modifying a pixel
Even though the kernel overlaps several different pixels (or in some cases, no pixels at all),
the only pixel that it ultimately changes is the source pixel underneath the center element of
the kernel. The sum of all the multiplications between the kernel and image is called the
weighted sum. Since replacing a pixel with the weighted sum of its neighboring pixels can
frequently result in much larger pixel intensity (and a brighter overall image), dividing the
weighted sum can scale back the intensity of the effect and ensure that the initial brightness
of the image is maintained. This procedure is called normalization. The optionally divided
weighted sum is what the value of the center pixel becomes. The kernel repeats this
procedure for each pixel in the source image.
The data type used to represent the values in the kernel must match the data used to represent
the pixel values in the image. For example, if the pixel type is float, then the values in the
kernel must also be float values.
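The weighted-sum procedure described above is what java.awt.image.ConvolveOp (which also appears in the report's own imports later) implements. A minimal sketch with a 3x3 box-blur kernel whose weights are pre-normalized to sum to 1, so overall brightness is preserved:

```java
import java.awt.image.BufferedImage;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;

class BlurDemo {
    // A 3x3 box-blur kernel: every weight is 1/9, so the weighted sum is
    // already normalized and no separate division step is needed.
    static BufferedImage blur(BufferedImage src) {
        float n = 1f / 9f;
        float[] weights = { n, n, n, n, n, n, n, n, n };
        ConvolveOp op = new ConvolveOp(new Kernel(3, 3, weights),
                ConvolveOp.EDGE_NO_OP, null);
        return op.filter(src, null);
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(5, 5, BufferedImage.TYPE_INT_RGB);
        img.setRGB(2, 2, 0xFFFFFF);          // single white pixel on black
        BufferedImage out = blur(img);
        // after blurring, the white pixel's intensity is spread over a 3x3 area
        System.out.println(Integer.toHexString(out.getRGB(2, 2) & 0xFFFFFF));
    }
}
```

Note that ConvolveOp does not divide by the number of kernel entries itself; the normalization described above must be baked into the kernel weights.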
3.2 Total Variational (TV) inpainting model
Chan and Shen proposed two image-inpainting algorithms. The Total Variational [4] (TV)
inpainting model uses an Euler-Lagrange equation and inside the inpainting domain the
model simply employs anisotropic diffusion based on the contrast of the isophotes. This
model was designed for inpainting small regions and while it does a good job in removing
noise, it does not connect broken edges.
3.3 Curvature-Driven Diffusion (CDD) model
The Curvature-Driven Diffusion (CDD) model [4] extended the TV algorithm to also take
into account geometric information of isophotes when defining the strength of the diffusion
process, thus allowing the inpainting to proceed over larger areas. CDD can connect some
broken edges, but the resulting interpolated segments usually look blurry.
3.4 Telea's Inpainting Algorithm
Telea [4] proposed a fast marching algorithm that can be viewed as a PDE-based approach
without the computational overhead. It is considerably faster and simpler to implement than
other PDE-based methods, yet produces results comparable to them.
The algorithm propagates an estimate of image smoothness along the image gradient (which
simplifies computation of the flow); the smoothness at a pixel to be inpainted is calculated
from a known neighborhood of that pixel as a weighted average. The fast marching method
(FMM) inpaints the pixels nearest to the known region first, similar to the manner in which
actual inpainting is carried out, and maintains a narrow band of pixels that separates known
pixels from unknown pixels and indicates which pixel will be inpainted next.
The limitation of this method is that it produces blur in the result when the region to be
inpainted is thicker than about 10 pixels.
3.5 Exemplar based methods
Exemplar based methods are becoming increasingly popular for problems such as denoising,
super resolution, texture synthesis, and inpainting. The common theme of these methods is
the use of a set of actual image blocks, extracted either from the image being restored, or
from a separate training set of representative images, as an image model. In the case of
inpainting, the approach is usually to progressively replace missing regions with the best
matching parts of the same image, carefully choosing the order in which the missing region is
filled to minimize artifacts. One can also use an inpainting method that represents missing
regions as sparse linear combinations of other regions in the same image (in contrast to
approaches in which sparse representations on standard dictionaries, such as wavelets, are
employed), computed by minimizing a simple functional.
3.6 Convolution Based Method (Oliveira's Algorithm [4])
Images may contain textures with arbitrary spatial discontinuities, but the sampling theorem
constrains the spatial frequency content that can be automatically restored. Thus, for the case
of missing or damaged areas, one can only hope to produce a plausible rather than an exact
reconstruction. Therefore, in order for an inpainting model to be reasonably successful for a
large class of images the regions to be inpainted must be locally small. As the regions
become smaller, simpler models can be used to locally approximate the results produced by
more sophisticated ones. Another important observation used in the design of our algorithm
is that the human visual system can tolerate some amount of blurring in areas not associated
to high contrast edges. Thus,
let Ω be a small area to be inpainted and let ∂Ω be its boundary. Since Ω is small, the
inpainting procedure can be approximated by an isotropic diffusion process that propagates
information from ∂Ω into Ω. A slightly improved algorithm reconnects edges reaching ∂Ω,
removes the new edge pixels from Ω (thus splitting Ω into a number of smaller sub-regions),
and then performs the diffusion process as before. The simplest version of the algorithm
consists of initializing Ω by clearing its color information and repeatedly convolving the
region to be inpainted with a diffusion kernel. ∂Ω is a one-pixel-thick boundary, and the
number of iterations is independently controlled for each inpainting domain by checking
whether any of the pixels belonging to the domain changed their values by more than a certain
threshold during the previous iteration. Alternatively, the user can specify the number of
iterations. As the diffusion process is iterated, the inpainting progresses from ∂Ω into Ω.
Convolving an image with a Gaussian kernel (i.e., computing weighted averages of pixel
neighborhoods) is equivalent to isotropic diffusion (the linear heat equation). The algorithm
uses a weighted-average kernel that only considers contributions from the neighboring pixels
(i.e., it has a zero weight at the center of the kernel). The pseudo code of this algorithm and
two diffusion kernels are shown below.
Fig 3.1 Pseudo code for the fast inpainting algorithm
Two diffusion kernels used with the algorithm: a = 0.073235, b = 0.176765, c = 0.125.
Limitations:
- Applicable only to small scratches
- Many iterations are required
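The diffusion step can be sketched in Java with the first kernel's weights (a = 0.073235 at the corners, b = 0.176765 at the edges, zero at the center). This is a minimal illustration with hypothetical names, not the report's implementation; the stopping-threshold check is replaced by a fixed pass count:

```java
import java.awt.image.BufferedImage;

class FastInpaint {
    // One or more diffusion passes of an Oliveira-style fast inpainting:
    // every pixel inside the mask is replaced by a weighted average of its
    // 8 neighbours. The kernel weight at the centre is zero, so the damaged
    // value itself never contributes.
    static void diffuse(BufferedImage img, boolean[][] mask, int passes) {
        double a = 0.073235, b = 0.176765;            // corner / edge weights
        double[] w = { a, b, a, b, 0, b, a, b, a };   // zero at the centre
        for (int p = 0; p < passes; p++) {
            for (int y = 1; y < img.getHeight() - 1; y++) {
                for (int x = 1; x < img.getWidth() - 1; x++) {
                    if (!mask[y][x]) continue;        // only the inpainting domain
                    double r = 0, g = 0, bl = 0;
                    for (int k = 0; k < 9; k++) {
                        int rgb = img.getRGB(x + (k % 3) - 1, y + (k / 3) - 1);
                        r  += w[k] * ((rgb >> 16) & 0xFF);
                        g  += w[k] * ((rgb >> 8) & 0xFF);
                        bl += w[k] * (rgb & 0xFF);
                    }
                    img.setRGB(x, y, ((int) r << 16) | ((int) g << 8) | (int) bl);
                }
            }
        }
    }

    public static void main(String[] args) {
        // black "damaged" pixel in the middle of a white 3x3 image
        BufferedImage img = new BufferedImage(3, 3, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < 3; y++)
            for (int x = 0; x < 3; x++) img.setRGB(x, y, 0xFFFFFF);
        img.setRGB(1, 1, 0x000000);
        boolean[][] mask = new boolean[3][3];
        mask[1][1] = true;                            // inpaint only the centre
        diffuse(img, mask, 1);
        System.out.printf("%06X%n", img.getRGB(1, 1) & 0xFFFFFF);
    }
}
```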
3.7 Color Match Inpainting
This method is basically used for removing scratches from an old image by marking the
scratch with a color that is not used in the image.
The algorithm:
- The area to be inpainted is colored using a pencil tool.
- Each pixel is compared against the pencil color.
- If the color matches, the 8 pixels surrounding that pixel are examined.
- The center pixel is replaced by any surrounding pixel that does not have the pencil color.
The algorithm works fine for small scratches.
Drawbacks:
- Not applicable for large-area inpainting
- Cannot remove objects
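The steps above can be sketched in Java as follows (a minimal illustration with hypothetical names, ignoring image borders):

```java
import java.awt.image.BufferedImage;

class ColorMatchInpaint {
    // Replace every pixel painted with the marker ("pencil") colour by the
    // first of its 8 neighbours that does not carry the marker colour.
    static void inpaint(BufferedImage img, int markerRgb) {
        for (int y = 1; y < img.getHeight() - 1; y++) {
            for (int x = 1; x < img.getWidth() - 1; x++) {
                if ((img.getRGB(x, y) & 0xFFFFFF) != markerRgb) continue;
                search:
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        int rgb = img.getRGB(x + dx, y + dy) & 0xFFFFFF;
                        if (rgb != markerRgb) {      // first clean neighbour wins
                            img.setRGB(x, y, rgb);
                            break search;
                        }
                    }
                }
            }
        }
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(3, 3, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < 3; y++)
            for (int x = 0; x < 3; x++) img.setRGB(x, y, 0xFF0000); // red image
        img.setRGB(1, 1, 0x00FF00);     // mark the centre with "pencil" green
        inpaint(img, 0x00FF00);
        System.out.printf("%06X%n", img.getRGB(1, 1) & 0xFFFFFF); // FF0000
    }
}
```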
3.8 Right-Left Shift Blur
This is applicable to symmetric images and can be used to remove scratches and objects
from the image.
The algorithm:
- The object/scratch to be removed is selected with a rectangular tool.
- The selected area copies half of its pixels from the right and half from the left.
- Finally, the convolution is applied two or three times in order to produce the inpainted
  image.
The technique works fine for symmetric images like sceneries.
Drawbacks:
- A blurred area is produced
- It fails for non-symmetric images
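One plausible reading of the copy step is mirroring pixels in from the two sides of the selection; a hedged Java sketch (hypothetical names, assumes the selection does not touch the image border, and omits the final blur passes):

```java
import java.awt.image.BufferedImage;

class RightLeftShift {
    // Fill a selected rectangle [x0..x1] x [y0..y1] by mirroring pixels in
    // from its two sides: the left half of the selection is copied from the
    // columns just left of it, the right half from the columns just right
    // of it. Assumes the selection does not touch the image border.
    static void fill(BufferedImage img, int x0, int y0, int x1, int y1) {
        int mid = (x0 + x1) / 2;
        for (int y = y0; y <= y1; y++) {
            for (int x = x0; x <= mid; x++)          // left half <- left side
                img.setRGB(x, y, img.getRGB(x0 - (x - x0) - 1, y));
            for (int x = mid + 1; x <= x1; x++)      // right half <- right side
                img.setRGB(x, y, img.getRGB(x1 + (x1 - x) + 1, y));
        }
    }

    public static void main(String[] args) {
        // blue | blue | ? | ? | red | red  -- fill columns 2..3
        BufferedImage img = new BufferedImage(6, 1, BufferedImage.TYPE_INT_RGB);
        for (int x = 0; x < 2; x++) img.setRGB(x, 0, 0x0000FF);
        for (int x = 4; x < 6; x++) img.setRGB(x, 0, 0xFF0000);
        fill(img, 2, 0, 3, 0);
        System.out.printf("%06X %06X%n",
                img.getRGB(2, 0) & 0xFFFFFF, img.getRGB(3, 0) & 0xFFFFFF);
    }
}
```

The report then convolves the filled area two or three times to blend the seam between the two halves.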
4. Source Code
The entire coding is done in Java as it is platform independent and provides appropriate
image libraries to manipulate images.
This chapter provides the complete source code of the project.
4.1 Creating the GUI
[The listing for this section is largely illegible in the scanned source. It declares the
application's JFrame, JButton, Container, JMenuItem, JFileChooser, File, BufferedImage,
Graphics2D, JPanel and Dimension fields (with initial coordinates x = 0, y = 20) and builds
the main window and menu.]
[This part of the listing is also largely illegible. It implements the mouse and action
handlers: a JFileChooser is opened, the selected image file is read, the image is centred on
the screen in a new JImagePanel, and the panel is repainted; a second chooser obtains a file
that is passed, together with the image, to an ImageIO call.]
4.2 Image Panel Creation (file JImagePanel.java)
Creates the panel and loads the image for the first time.
package imageprocessing;

import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import javax.swing.JPanel;

class JImagePanel extends JPanel
{
    private static final long serialVersionUID = 1L;
    private BufferedImage image;
    int x, y;
    Graphics2D g;

    public JImagePanel(BufferedImage image, int x, int y)
    {
        super();
        this.image = image;
        this.x = x;
        this.y = y;
    }

    protected void paintComponent(Graphics g)
    {
        super.paintComponent(g);
        Graphics2D g2d = (Graphics2D) g;
        g2d.drawImage(image, x, y, null);
    }
}
4.3 Load New Image to Panel (file Loadimage.java)
A new panel is created and added to the frame.

package imageprocessing;

import java.awt.Dimension;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import javax.swing.JFrame;

public class Loadimage {      // generates a new panel with the image and adds it to the frame
    JImagePanel panel;

    Loadimage(BufferedImage tempimage, JFrame f)
    {
        Dimension dm = Toolkit.getDefaultToolkit().getScreenSize();
        if (image.panel != null)        // in order to remove the previous content of the panel
            image.panel.setVisible(false);
        if (panel != null)
            panel.setVisible(false);
        int x = (int) (dm.getWidth() / 2) - (tempimage.getWidth() / 2);
        int y = (int) (dm.getHeight() / 2) - (tempimage.getHeight() / 2);
        panel = new JImagePanel(tempimage, x, y);
        image.panel = panel;
        f.add(image.panel);
    }
}
package imageprocessing;

import java.awt.Dimension;
import java.awt.Toolkit;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.event.MouseEvent;
import java.awt.event.MouseListener;
import java.awt.event.MouseMotionListener;
import java.awt.image.BufferedImage;
import java.awt.image.BufferedImageOp;
import java.awt.image.ConvolveOp;
import java.awt.image.Kernel;
import javax.swing.JFrame;
import javax.swing.JMenu;
import javax.swing.JMenuItem;

public class Convolution extends JMenu implements
        ActionListener, MouseListener, MouseMotionListener
{
    private static final long serialVersionUID = 1L;
    JMenuItem destruct, inpaint, oliveria, interpulate, shiftmap, pencil;
    public static BufferedImage tempimage, tempimage1;
    JImagePanel panel;
    JFrame fr;
    image im;
    int val, temp1 = 0;
    int[] colors = new int[100000];
    int temp = 0, teval = 16646144, ix = 0, iy = 0, fx = 0, fy = 0;

    public Convolution(String s1, JFrame fr)
    {
        setText(s1);
        destruct = new JMenuItem("Destruct");
        pencil = new JMenuItem("Pencil");
        inpaint = new JMenuItem("Inpaint");
        oliveria = new JMenuItem("Oliveria");
        interpulate = new JMenuItem("InterPulate");
        shiftmap = new JMenuItem("ShiftMap");
        add(inpaint); add(oliveria); add(shiftmap); add(destruct); add(pencil);
        this.fr = fr;
        destruct.addActionListener(this);
        pencil.addActionListener(this);
        inpaint.addActionListener(this);
        oliveria.addActionListener(this);
        interpulate.addActionListener(this);
        shiftmap.addActionListener(this);
    }

    @Override
    public void actionPerformed(ActionEvent e2)
    {
        tempimage = image.loadImg;
        if (e2.getSource() == destruct)
        {
            temp1 = 0;
            image.panel.addMouseListener(this);
        }
        if (e2.getSource() == pencil)
        {
            image.panel.addMouseMotionListener(this);
            int m = 1;
            for (int j = fx; j > ((ix + fx) / 2); j--)
            {
                int value = tempimage.getRGB(j + m, i);
                tempimage.setRGB(j, i, value);
                m = m + 2;
            }
        }
        new Loadimage(tempimage, fr);
        int x = 0;
        while (x
        val[2] = tempimage.getRGB(j + 1, i - 1) & 0xFFFFFF;
        val[3] = tempimage.getRGB(j - 1, i) & 0xFFFFFF;
        val[4] = tempimage.getRGB(j, i) & 0xFFFFFF;
        val[5] = tempimage.getRGB(j + 1, i) & 0xFFFFFF;
        val[6] = tempimage.getRGB(j - 1, i + 1) & 0xFFFFFF;
        val[7] = tempimage.getRGB(j, i + 1) & 0xFFFFFF;
        val[8] = tempimage.getRGB(j + 1, i + 1) & 0xFFFFFF;
        int k = 0;
        sum = 0;
        sum1 = 0;
        sum2 = 0;
        for (k = 0; k < 9; k++)
        {
            int red = ((val[k] >> 16) & 0xFF);
            int green = ((val[k] >> 8) & 0xFF);
            int blue = ((val[k] >> 0) & 0xFF);
            sum = sum + (elements[k] * blue);
            sum1 = sum1 + (elements[k] * green);
            sum2 = sum2 + (elements[k] * red);
        }
        int sum3 = 0;
        sum3 = 0xFF000000 + ((int) sum2
    @Override
    public void mouseReleased(MouseEvent arg0) {
        BufferedImage tempimagesel = new BufferedImage(tempimage.getWidth(),
                tempimage.getHeight(), tempimage.getType());
        for (int i = 0; i
5. Results
This chapter shows the results obtained by applying some of the inpainting techniques.
5.1 Experiment 1:
Fig 5.1 Original Sea Boat Image
Objective: To remove the boat completely
Algorithm to be applied: Right-left-shift Blur
Fig 5.2 Boat Selection
Fig 5.3 Boat Removed
5.1.1 Results:
Boat successfully removed
5.2 Experiment 2
Fig 5.4 Original trees Image
Objective: To remove the last tree
Algorithm to be applied: Right-left-shift Blur
The image is of a symmetric type, so we proceed with Right-left-shift Blur.
Fig 5.5 Tree Selection
Fig 5.6 Tree removed
5.2.1 Result:
Tree removed successfully
5.3 Experiment 3
Fig 5.7 Original Sea Beach Image
Objective: To remove the people sitting on the beach
Algorithm to be applied: Right-left-shift Blur
The image is of a symmetric type, so we proceed with Right-left-shift Blur.
Fig 5.8 People selected
Fig 5.9 People Removed
5.3.1 Results:
People removed. Accuracy: 100%
5.4 Experiment 4
Fig 5.10 Lincoln Photo with Crack
Objective: Crack Removal
Algorithm Applied: Color Match Inpainting
This can be applied to any image that has small cracks/scratches.
Fig 5.11 Crack Selected
Fig 5.12 Crack Removed
5.4.1 Results:
Crack removed. Accuracy: 80%
5.5 Experiment 5
Fig 5.13 Akash Original Image
Fig 5.14 Manual Scratching Done Fig 5.15 Scratches Removed
Objective: To remove Scratches
Algorithm Applied: Color Match Algorithm
Results:
Scratches removed. Accuracy: 100%
Future Improvements
The interpolation technique for a 2D matrix can be used to detect scratches/noise in the
image and remove it automatically:
- The value of each pixel of the image is found.
- The value of the pixel is then interpolated from the nearby values.
- The error range of the interpolated value is calculated and is added to and subtracted
  from the interpolated value in order to obtain the limits of the safe region.
- If the value found in the first step lies within this range, it is not replaced; otherwise
  it is replaced by the interpolated value.
Artificial intelligence can be combined with image processing in order to produce more
accurately inpainted images.
A convolution matrix could be designed that detects scratches and removes them
automatically.
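The proposed interpolation steps could be sketched as below. This is speculative by nature, since the section describes future work; a fixed tolerance stands in for the computed error range, and all names are hypothetical:

```java
import java.awt.image.BufferedImage;

class AutoScratchFilter {
    // Each interior pixel is compared against the average of its 8
    // neighbours; if it lies outside the "safe region" [avg - tol, avg + tol]
    // on any channel, it is treated as a scratch/noise pixel and replaced by
    // the interpolated (average) value.
    static void filter(BufferedImage img, int tol) {
        int w = img.getWidth(), h = img.getHeight();
        int[][] rgb = new int[h][w];                 // snapshot of the input
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) rgb[y][x] = img.getRGB(x, y);
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int r = 0, g = 0, b = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++) {
                        if (dx == 0 && dy == 0) continue;
                        int p = rgb[y + dy][x + dx];
                        r += (p >> 16) & 0xFF;
                        g += (p >> 8) & 0xFF;
                        b += p & 0xFF;
                    }
                r /= 8; g /= 8; b /= 8;              // interpolated value
                int p = rgb[y][x];
                if (Math.abs(((p >> 16) & 0xFF) - r) > tol
                        || Math.abs(((p >> 8) & 0xFF) - g) > tol
                        || Math.abs((p & 0xFF) - b) > tol)
                    img.setRGB(x, y, (r << 16) | (g << 8) | b);
            }
        }
    }

    public static void main(String[] args) {
        // single bright outlier on a uniform grey background
        BufferedImage img = new BufferedImage(3, 3, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < 3; y++)
            for (int x = 0; x < 3; x++) img.setRGB(x, y, 0x808080);
        img.setRGB(1, 1, 0xFFFFFF);
        filter(img, 50);
        System.out.printf("%06X%n", img.getRGB(1, 1) & 0xFFFFFF); // 808080
    }
}
```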
Discussion and Conclusion
In this report we have described and implemented inpainting algorithms that remove
unwanted objects from an image. Different inpainting algorithms were used for the same
purpose. Common to all the algorithms is the selection of the region where the inpainting
is to be done.
Algorithms like shift map removed unwanted objects from symmetrical images such as
sceneries, whereas algorithms like Oliveira's are suited to inpainting scratches, which in
practice cover a smaller area.
The point-operation algorithm was used earlier to remove scratches in small areas, where
the scratch was marked with a red color.
References
Gonzalez, Digital Image Processing, 2nd Edition, Prentice Hall, Englewood, N.J.
Wilhelm Burger, Digital Image Processing: An Algorithmic Introduction Using Java, First
Edition, Springer, 2008.
[4] Manuel M. Oliveira, Brian Bowen, Richard McKenna, Yu-Sung Chang, "Fast Digital
Image Inpainting", September 3-5, 2001.