Chapter 4
Optimization of Wavelet and
Bilateral Filter based Denoising
Models using Genetic Algorithm
4.1 Introduction
Algorithms for function optimization are generally limited to convex regular functions. However, many functions are multimodal, discontinuous and nondifferentiable. Stochastic sampling methods have been used to optimize these functions. Whereas traditional search techniques use characteristics of the problem to determine the next sampling point (e.g., gradients, Hessians, linearity and continuity),
stochastic search techniques make no such assumptions. Instead, the next sampled
points are determined based on stochastic sampling decision rules rather than a set
of deterministic decision rules. Genetic algorithms have been used to solve difficult
problems with objective functions that do not possess "nice" properties such as
continuity, differentiability, satisfaction of the Lipschitz Condition, etc. [97]-[100].
These algorithms maintain and manipulate a family, or population, of solutions and implement a "survival of the fittest" strategy in their search for better solutions. This provides an implicit as well as explicit parallelism that allows for the exploitation of several promising areas of the solution space at the same time. The implicit parallelism is due to the schema theory developed by Holland, while the explicit parallelism arises from the manipulation of a population of points;
the evaluation of the fitness of these points is easy to accomplish in parallel.
This provides the genetic algorithm with a local improvement operator which, as shown in [101], can greatly enhance the performance of the genetic algorithm.
Many researchers have shown that GAs perform well for a global search but perform very poorly in a localized search [97], [100]-[102]. GAs are capable of quickly
finding promising regions of the search space but may take a relatively long time
to reach the optimal solution.
Genetic algorithms are different from normal search methods employed in
engineering optimization in the following ways:
(i) GAs work with a coding of the parameter set not the parameters themselves,
(ii) GAs search from a population of points, not a single point,
(iii) GAs use probabilistic transition rules, not deterministic transition rules.
A typical GA search starts with a random population of individuals. Each
individual (a string) is a coded set of parameters that we want to optimize. An
objective (fitness) function is used to determine how fit these individuals are. A
suitable selection strategy is employed to select the individuals that will reproduce.
Reproduction (or a new set of individuals) is achieved by using crossover and
mutation operators. The GA then manipulates the most promising strings in its
search for improved solutions.
Section 4.2 presents the basic genetic algorithm, section 4.3 briefly describes the proposed approach and its Matlab implementation, and section 4.4 discusses the results obtained using the GA.
4.2 Genetic Algorithms
Genetic algorithms search the solution space of a function through the use of
simulated evolution, based on the survival of the fittest strategy. In general, the
fittest individuals of any population tend to reproduce and survive to the next
generation, thus improving successive generations. However, inferior individuals
can, by chance, survive and also reproduce. Genetic algorithms have been shown
to solve linear and nonlinear problems by exploring all regions of the state space
and exponentially exploiting promising areas through mutation, crossover and
selection operations applied to individuals in the population [100].
Though implementation details vary, a generic view of a GA would include:
1. [Start] Generate an initial population of n chromosomes (candidate solutions for the problem), either randomly or seeded from a best guess or a previous partial solution.
2. [Fitness] Evaluate the fitness f(x) of each chromosome x in the population.
3. [New population] Create a new population by repeating following steps until
the new population is complete.
4. [Selection] Select two parent chromosomes from a population according to
their fitness (the better the fitness, the bigger chance to be selected).
5. [Crossover] With a crossover probability cross over the parents to form new
offspring (children). If no crossover was performed, offspring is the exact
copy of parents.
6. [Mutation] With a mutation probability mutate new offspring at each locus
(position in chromosome).
7. [Accepting] Place new offspring in the new population.
8. [Replace] Use newly generated population for a further run of the algorithm.
9. [Test for termination] If the termination condition is satisfied, stop and return the best solution in the current population; otherwise go to the next step.
10. [Loop] Go to step 2.
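As a concrete illustration, the ten steps above can be sketched in a few dozen lines. This is a minimal, self-contained sketch (in Python, whereas this work uses Matlab); the test function, parameter values and helper names are illustrative only and not taken from the thesis.

```python
import random

# A minimal sketch of steps 1-10 above, maximizing a simple test function.
def genetic_algorithm(fitness, n_genes, pop_size=20, generations=100,
                      p_crossover=0.8, p_mutation=0.05, seed=0):
    rng = random.Random(seed)
    # [Start] random initial population; chromosomes are real vectors in [0, 1]
    pop = [[rng.random() for _ in range(n_genes)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        scores = [fitness(ind) for ind in pop]          # [Fitness]
        total = sum(scores)

        def select():                                   # [Selection] roulette wheel
            u, acc = rng.uniform(0, total), 0.0
            for ind, s in zip(pop, scores):
                acc += s
                if acc >= u:
                    return ind
            return pop[-1]

        new_pop = []
        while len(new_pop) < pop_size:                  # [New population]
            p1, p2 = select(), select()
            if rng.random() < p_crossover:              # [Crossover] single point
                cut = rng.randrange(1, n_genes)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for child in (c1, c2):
                for i in range(n_genes):                # [Mutation] per locus
                    if rng.random() < p_mutation:
                        child[i] = rng.random()
                new_pop.append(child)                   # [Accepting]
        pop = new_pop[:pop_size]                        # [Replace]
        best = max(pop + [best], key=fitness)           # track best so far
    return best

# Usage: maximize 1 / (1 + sum((x_i - 0.7)^2)), whose optimum is at x_i = 0.7.
objective = lambda x: 1.0 / (1.0 + sum((xi - 0.7) ** 2 for xi in x))
best = genetic_algorithm(objective, n_genes=3)
```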
The use of a genetic algorithm requires the determination of six fundamental issues: chromosome representation, selection function, the genetic operators making up the reproduction function, the creation of the initial population, termination criteria, and the evaluation function. These are discussed in subsequent
sections.
4.2.1 Solution Representation
For any GA, a chromosome representation is needed to describe each individual
in the population of interest. The representation scheme determines how the
problem is structured in the GA and also determines the genetic operators that
are used. Each individual or chromosome is made up of a sequence of genes from
a certain alphabet. An alphabet could consist of binary digits (0 and 1), floating point numbers, integers, symbols (e.g., A, B, C, D), matrices, etc. In
Holland's original design, the alphabet was limited to binary digits. Since then,
problem representation has been the subject of much investigation. It has been
shown that more natural representations are more efficient and produce better
solutions [100]. One useful representation of an individual or chromosome for
function optimization involves genes or variables from an alphabet of floating point
numbers with values within the upper and lower bounds of variables. Michalewicz
[100] has done extensive experimentation comparing real-valued and binary GAs
and shown that the real-valued GA is an order of magnitude more efficient in
terms of CPU time. He also shows that a real-valued representation moves the GA closer to the problem domain, which offers higher precision with more consistent results across replications [100].
4.2.2 Selection Function
The selection of individuals to reproduce in successive generations plays an extremely important role in a genetic algorithm. A probabilistic selection is performed based
upon the individual's fitness such that the better individuals have an increased
chance of being selected. An individual in the population can be selected more
than once, with all individuals in the population having a chance of being selected to reproduce into the next generation. There are several schemes for the
selection process: roulette wheel selection and its extensions, scaling techniques,
tournament, elitist models and ranking methods [98], [100]. A common selection
approach assigns a probability of selection, P_j, to each individual j based on its fitness value. A series of N random numbers is generated and compared against the cumulative probabilities C_i = sum_{j=1}^{i} P_j of the population. The individual i is selected and copied into the new population if C_{i-1} < U(0,1) <= C_i.
Various methods exist to assign probabilities to individuals: roulette wheel, linear ranking and geometric ranking. Roulette wheel, developed by Holland [99], was the first selection method. The probability, P_i, for each individual is defined by:

    P[individual i is chosen] = F_i / sum_{j=1}^{PopSize} F_j    (4.1)

where F_i equals the fitness of individual i. The use of roulette wheel selection limits the genetic algorithm to maximization, since the evaluation function must map the solutions to a fully ordered set of values on R+. Extensions, such as windowing and scaling, have been proposed to allow for minimization and negativity. Ranking methods only require the evaluation function to map the solutions to a partially ordered set, thus allowing for minimization and negativity. Ranking methods assign P_i based on the rank of solution i when all solutions are sorted.
Normalized geometric ranking [103] defines P_i for each individual by:

    P[selecting the i-th individual] = q' (1 - q)^(r-1)    (4.2)

where

    q  = the probability of selecting the best individual,
    q' = q / (1 - (1 - q)^P),
    r  = the rank of the individual, where 1 is the best,
    P  = the population size.
Tournament selection, like ranking methods, only requires the evaluation
function to map solutions to a partially ordered set, however, it does not assign
probabilities. Tournament selection works by selecting j individuals randomly,
with replacement, from the population, and inserts the best of the j into the new
population. This procedure is repeated until N individuals have been selected.
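The selection rules above can be sketched as follows. The helper names are hypothetical; the normalization constant q' = q / (1 - (1 - q)^P) is the standard one for normalized geometric ranking, which makes the probabilities sum to one.

```python
# Illustrative sketches of eq. (4.1), eq. (4.2) and the cumulative-probability
# selection rule; function names are hypothetical.
def roulette_probs(fitness_values):
    """P_i = F_i / sum_j F_j (eq. 4.1); requires nonnegative fitness values."""
    total = sum(fitness_values)
    return [f / total for f in fitness_values]

def normalized_geometric_probs(ranks, q, pop_size):
    """P_i = q'(1-q)^(r-1) (eq. 4.2) with q' = q / (1 - (1-q)^P); rank 1 is best."""
    q_norm = q / (1.0 - (1.0 - q) ** pop_size)
    return [q_norm * (1.0 - q) ** (r - 1) for r in ranks]

def roulette_select(probs, u):
    """Return index i with C_{i-1} < u <= C_i, C_i being the cumulative probability."""
    c = 0.0
    for i, p in enumerate(probs):
        c += p
        if u <= c:
            return i
    return len(probs) - 1
```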
4.2.3 Genetic Operators
Genetic Operators provide the basic search mechanism of the GA. The operators
are used to create new solutions based on existing solutions in the population.
There are two basic types of operators: crossover and mutation. Crossover takes
two individuals and produces two new individuals, while mutation alters one individual to produce a single new solution. The application of these two basic types
of operators and their derivatives depends on the chromosome representation used.
Let X and Y be two m-dimensional row vectors denoting individuals (parents) from the population. For binary X and Y, the following operators are defined: binary mutation and simple crossover.
Binary mutation flips each bit in every individual in the population with probability p_m according to equation (4.3):

    x'_i = 1 - x_i,  if U(0,1) < p_m
         = x_i,      otherwise                              (4.3)
Simple crossover generates a random integer r from a uniform distribution from 1 to m and creates two new individuals X' and Y' according to equations (4.4) and (4.5):

    x'_i = x_i,  if i < r
         = y_i,  otherwise                                  (4.4)

    y'_i = y_i,  if i < r
         = x_i,  otherwise                                  (4.5)
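A minimal sketch of the two binary operators, with chromosomes as Python lists of 0/1 values; helper names are illustrative:

```python
import random

# Sketch of binary mutation (eq. 4.3) and simple crossover (eqs. 4.4-4.5).
def binary_mutation(x, p_m, rng):
    """Flip each bit with probability p_m."""
    return [1 - xi if rng.random() < p_m else xi for xi in x]

def simple_crossover(x, y, rng):
    """Pick a crossover point r in 1..m-1; the children swap tails beyond r."""
    m = len(x)
    r = rng.randrange(1, m)
    return x[:r] + y[r:], y[:r] + x[r:]
```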
Operators for real-valued representations, i.e., an alphabet of floats, were developed by Michalewicz [100]. For real X and Y the following operators are defined: uniform mutation, non-uniform mutation, multi-non-uniform mutation, boundary mutation, simple crossover, arithmetic crossover and heuristic crossover. Let a_i and b_i be the lower and upper bound, respectively, for each variable i.
Uniform mutation randomly selects one variable, j, and sets it equal to a uniform random number:

    x'_i = U(a_i, b_i),  if i = j
         = x_i,          otherwise                          (4.6)
Boundary mutation randomly selects one variable, j, and sets it equal to either its lower or upper bound, where r = U(0,1):

    x'_i = a_i,  if i = j and r < 0.5
         = b_i,  if i = j and r >= 0.5
         = x_i,  otherwise                                  (4.7)
Non-uniform mutation randomly selects one variable, j, and sets it equal to a non-uniform random number:

    x'_i = x_i + (b_i - x_i) f(G),  if r1 < 0.5
         = x_i - (x_i - a_i) f(G),  if r1 >= 0.5
         = x_i,                     otherwise               (4.8)

where

    f(G) = (r2 (1 - G/G_max))^b                             (4.9)

    r1, r2 = uniform random numbers in (0,1),
    G      = the current generation,
    G_max  = the maximum number of generations,
    b      = a shape parameter.
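The three single-variable mutation operators can be sketched as below; the default shape parameter and the helper names are illustrative assumptions:

```python
import random

# Sketch of Michalewicz's real-valued mutation operators (eqs. 4.6-4.9).
def uniform_mutation(x, bounds, rng):
    j = rng.randrange(len(x))
    x = x[:]
    a_j, b_j = bounds[j]
    x[j] = rng.uniform(a_j, b_j)                 # eq. 4.6
    return x

def boundary_mutation(x, bounds, rng):
    j = rng.randrange(len(x))
    x = x[:]
    a_j, b_j = bounds[j]
    x[j] = a_j if rng.random() < 0.5 else b_j    # eq. 4.7
    return x

def non_uniform_mutation(x, bounds, G, G_max, rng, b=3.0):
    """The perturbation shrinks as generation G approaches G_max."""
    j = rng.randrange(len(x))
    x = x[:]
    a_j, b_j = bounds[j]
    r1, r2 = rng.random(), rng.random()
    f_G = (r2 * (1.0 - G / G_max)) ** b          # eq. 4.9
    if r1 < 0.5:                                 # eq. 4.8
        x[j] = x[j] + (b_j - x[j]) * f_G
    else:
        x[j] = x[j] - (x[j] - a_j) * f_G
    return x
```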
The multi-non-uniform mutation operator applies the non-uniform operator to all of the variables in the parent X. Real-valued simple crossover is identical to the binary version presented above in equations (4.4) and (4.5). Arithmetic crossover produces two complementary linear combinations of the parents, where r = U(0,1):

    X' = r X + (1 - r) Y                                    (4.10)

    Y' = (1 - r) X + r Y                                    (4.11)

Heuristic crossover produces a linear extrapolation of the two individuals. This is the only operator that utilizes fitness information. A new individual, X', is created using equation (4.12), where r = U(0,1) and X is better than Y in terms of fitness. If X' is infeasible, i.e., feasibility equals 0 as given by equation (4.14), then generate a new random number r and create a new solution using equations
(4.12) and (4.13); otherwise stop. To ensure halting, after t failures, let the children equal the parents and stop.

    X' = X + r (X - Y)                                      (4.12)

    Y' = X                                                  (4.13)

    feasibility = 1,  if x'_i >= a_i and x'_i <= b_i for all i
                = 0,  otherwise                             (4.14)
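A sketch of arithmetic and heuristic crossover, including the feasibility check and the retry-then-copy halting rule; the retry count t and helper names are illustrative:

```python
import random

# Sketch of arithmetic crossover (eqs. 4.10-4.11) and heuristic crossover
# (eqs. 4.12-4.14) with the retry-then-copy halting rule.
def arithmetic_crossover(x, y, rng):
    r = rng.random()
    cx = [r * xi + (1 - r) * yi for xi, yi in zip(x, y)]
    cy = [(1 - r) * xi + r * yi for xi, yi in zip(x, y)]
    return cx, cy

def feasible(x, bounds):
    """Eq. 4.14: every component within its bounds."""
    return all(a <= xi <= b for xi, (a, b) in zip(x, bounds))

def heuristic_crossover(x, y, bounds, rng, t=3):
    """Extrapolate beyond the fitter parent x; retry up to t times, then copy parents."""
    for _ in range(t):
        r = rng.random()
        cx = [xi + r * (xi - yi) for xi, yi in zip(x, y)]   # eq. 4.12
        if feasible(cx, bounds):
            return cx, x[:]                                 # eq. 4.13
    return x[:], y[:]   # after t failures the children equal the parents
```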
4.2.4 Initialization, Termination, and Evaluation Functions
The GA must be provided with an initial population as indicated in step 1 of the
algorithm. The most common method is to randomly generate solutions for the
entire population. However, since GAs can iteratively improve existing solutions
(i.e., solutions from other heuristics and/or current practices), the beginning population can be seeded with potentially good solutions, with the remainder of the
population being randomly generated solutions. The GA moves from generation
to generation selecting and reproducing parents until a termination criterion is
met. The most frequently used stopping criterion is a specified maximum number of generations. Another termination strategy involves population convergence
criteria. In general, GAs will force much of the entire population to converge to
a single solution. When the sum of the deviations among individuals becomes
smaller than some specified threshold, the algorithm can be terminated. The algorithm can also be terminated due to a lack of improvement in the best solution over a specified number of generations. Alternatively, a target value for the evaluation measure can be established based on some arbitrarily acceptable "threshold".
Several strategies can be used in conjunction with each other. Evaluation functions of many forms can be used in a GA, subject to the minimal requirement
that the function can map the population into a partially ordered set. As stated,
the evaluation function is independent of the GA (i.e., stochastic decision rules).
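The stopping rules discussed above can be combined as in the following sketch; the thresholds and helper names are illustrative, not values used in this work:

```python
# Sketch combining three termination criteria: generation limit, stall in the
# best solution, and population convergence (sum of deviations from the mean).
def should_terminate(generation, best_history, population,
                     max_generations=200, stall_limit=50, diversity_eps=1e-6):
    # 1. maximum number of generations reached
    if generation >= max_generations:
        return True
    # 2. no improvement in the best solution over stall_limit generations
    if len(best_history) > stall_limit and \
       best_history[-1] <= best_history[-1 - stall_limit]:
        return True
    # 3. population convergence: total deviation from the mean individual
    n = len(population)
    dims = len(population[0])
    mean = [sum(ind[i] for ind in population) / n for i in range(dims)]
    deviation = sum(abs(ind[i] - mean[i]) for ind in population for i in range(dims))
    return deviation < diversity_eps
```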
For the optimization of the test function two different representations are used. A real-valued alphabet is employed in conjunction with the selection, mutation and crossover operators with their respective options; a binary representation may also be used with the corresponding operators.
The floating point genetic algorithm (FPGA) is used as the optimization tool throughout this work.
The floating point genetic algorithm (FPGA) is used as the optimization
tool throughout this work.
To optimize the performance, the fitness function is designed in such a manner that both the PSNR and the IQI values are maximized. The fitness function is formed using PSNR and IQI with equal weightage. The PSNR is calculated by equation (2.49) and the IQI by equation (2.50) for all the considered models with the considered parameters as discussed above.
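Since equations (2.49) and (2.50) are defined in chapter 2 and not reproduced here, the sketch below assumes the standard PSNR definition and the global universal image quality index for them, and one plausible reading of "equal weightage" (PSNR must be rescaled before it can be averaged with IQI, so the psnr_scale constant is an assumption):

```python
import numpy as np

# Assumed forms of eqs. (2.49)-(2.50); the equal-weight combination below
# is one plausible interpretation, not the thesis's exact fitness function.
def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def iqi(ref, test):
    """Global universal image quality index (Wang-Bovik form), in [-1, 1]."""
    x = ref.astype(float).ravel()
    y = test.astype(float).ravel()
    mx, my, vx, vy = x.mean(), y.mean(), x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

def fitness(ref, denoised, psnr_scale=50.0):
    """Equal weightage of scaled PSNR and IQI, both to be maximized."""
    return 0.5 * psnr(ref, denoised) / psnr_scale + 0.5 * iqi(ref, denoised)
```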
4.3 Proposed Approach
Out of the 48 models formed through hybridization of wavelet based filtering and bilateral filters in different configurations, as reported and discussed in chapter 3, the best three models are considered for optimization. In addition, three models, one with only wavelet based filters, another with only the bilateral filter, and a third as proposed by Zhang and Gunturk [8], are considered for drawing comparative performance. The parameters of the six (three plus three) filter models are optimized
using floating point GA (FPGA).
Structural variations of the models are shown in figs. 4.1 through 4.6. The
models are identified as model nos. 1 to 6 as shown in the figs. 4.1 through 4.6
respectively.
(i) Model with Wavelet thresholding based method only (model no. 1 in fig. 4.1).
(ii) Model with Bilateral filter only (model no. 2 in fig. 4.2).
(iii) Model proposed by Ming Zhang and Bahadur Gunturk (Zhang-Gunturk
method) (model no. 3 in fig. 4.3).
(iv) Hybridized model with the bilateral filter applied after reconstruction of the image, following wavelet decomposition and thresholding (model no. 4 in fig. 4.4).
(v) Hybridized model with the bilateral filter before and after the wavelet decomposition, thresholding and reconstruction (model no. 5 in fig. 4.5).
(vi) Hybridized model with bilateral filter before the wavelet decomposition (model
no. 6 in fig. 4.6).
Figure 4.1: Model No. 1 (B: bilateral filter, W: wavelet thresholding)

Figure 4.2: Model No. 2
While working with the models, the parameters w, σ_d and σ_r of the bilateral filters are varied over a wide range of values, as there are no explicit rules that can guide the tuning of these parameters. At the same time the threshold value for the wavelet based filter is also varied.
The following issues are considered to carry out the work:
(i) Different standard available images along with satellite and telescopic images are considered throughout this work. All the models are applied on each and every image.
Figure 4.3: Model No. 3

Figure 4.4: Model No. 4

Figure 4.5: Model No. 5
(ii) (a) After wavelet decomposition, soft thresholding or bilateral technique is
applied on each and every subband of the image,
(b) For a particular image and a model, a wide range of values of w, σ_d and σ_r, including the thresholding parameter, are considered for evolutionary optimization using FPGA.

Figure 4.6: Model No. 6
(iii) The threshold value for wavelet based filters is also optimized with FPGA.
(iv) The values of w, σ_d and σ_r of the bilateral filter are also optimized using FPGA.
4.4 Results and Discussion
The first model is developed with the wavelet based thresholding algorithm [104]. In this model, the images are decomposed into four subbands. The soft thresholding method is then employed on each of the four subbands, wherein the thresholding value is optimized using FPGA. As most researchers have used db8 filters for image denoising, the same have been considered in this work too. The second model is with the bilateral filter [88] only. The parameters w, σ_d and σ_r are optimized using FPGA for finding the optimal performance. The third one is the model proposed by Ming Zhang and Bahadir Gunturk [8]. As proposed by the authors, the images are decomposed into four subbands and both wavelet based and bilateral filters are used as given in model no. 3 in fig. 4.3.
The models from 4 to 6 are hybridized ones that are reported in the previous
chapter.
In model 4, the images are first decomposed into four subbands using db8 filters in Matlab. In this level, wavelet based soft thresholding is applied on all
the subbands. The results obtained after thresholding are then used to reconstruct
the image. In the next level, bilateral filter is applied to get the final denoised
image.
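The structure of model 4 (threshold in the wavelet domain, reconstruct, then apply the bilateral filter) can be sketched as below. Note the simplifications: this work uses db8 wavelets in Matlab and thresholds all subbands, while the sketch uses a one-level Haar transform and thresholds only the detail subbands; the brute-force bilateral filter and all parameter defaults are illustrative, not the thesis's implementation.

```python
import numpy as np

# Illustrative model-4 pipeline: wavelet decompose, soft-threshold details,
# reconstruct, then bilateral-filter the reconstructed image.
def haar_dwt2(img):
    """One-level 2-D Haar transform: approximation LL and details LH, HL, HH."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 4, (a + b - c - d) / 4,
            (a - b + c - d) / 4, (a - b - c + d) / 4)

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def soft_threshold(c, t):
    """Shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def bilateral_filter(img, w=2, sigma_d=1.0, sigma_r=15.0):
    """Brute-force bilateral filter with window half-width w."""
    pad = np.pad(img, w, mode="reflect")
    ys, xs = np.mgrid[-w:w + 1, -w:w + 1]
    gd = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_d ** 2))   # domain weights
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 2 * w + 1, j:j + 2 * w + 1]
            gr = np.exp(-(patch - img[i, j]) ** 2 / (2.0 * sigma_r ** 2))  # range weights
            k = gd * gr
            out[i, j] = (k * patch).sum() / k.sum()
    return out

def model4_denoise(noisy, threshold, **bf_params):
    ll, lh, hl, hh = haar_dwt2(noisy)
    rec = haar_idwt2(ll, soft_threshold(lh, threshold),
                     soft_threshold(hl, threshold), soft_threshold(hh, threshold))
    return bilateral_filter(rec, **bf_params)
```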
In model 5, the image is first denoised with the bilateral filter, followed by decomposition into four subbands using db8 filters. Wavelet thresholding is then applied
on all the subbands. The results obtained after thresholding are then used to
reconstruct the image. In the last level, again bilateral filter is applied to get the
final denoised image.
Model 6 is similar to model 5 except there is no bilateral filter at the output
side of model 6. The wavelet thresholding is applied on all the subbands. The
results obtained after thresholding are then used to reconstruct the denoised image.
All the models are tested on standard pictures like Lena, Barbara and Einstein, a satellite picture, Pyramid, and two astronomical telescopic images, Astrol and Astro2.
                          GA optimized         Not GA optimized
Model     Image         PSNR      IQI         PSNR      IQI
Model 1   Lena         39.0803   0.9692      35.1217   0.9222
          Barbara      39.0103   0.9788      34.9942   0.9464
          Einstein     39.0146   0.9748      35.0108   0.9358
          Pyramid      36.7269   0.9967      34.2453   0.9925
          Astrol       40.5669   0.9633      35.1464   0.8666
          Astro2       38.4527   0.9503      35.1210   0.8903
Model 2   Lena         43.3514   0.9800      38.3243   0.9260
          Barbara      44.5522   0.9880      39.2712   0.9540
          Einstein     43.3897   0.9846      38.3424   0.9677
          Pyramid      46.8700   0.9991      38.7753   0.9953
          Astrol       41.6157   0.8743      39.3953   0.7698
          Astro2       41.8386   0.9513      39.3918   0.9058
Model 3   Lena         36.1404   0.9412      30.4803   0.9465
          Barbara      25.3825   0.9550      17.5528   0.9080
          Einstein     28.9590   0.9564      24.9062   0.9386
          Pyramid      13.7369   0.8383      12.3090   0.7857
          Astrol       12.7607   0.4413       4.9742   0.3355
          Astro2       19.5767   0.6030       5.7261   0.5833
Model 4   Lena         45.0978   0.9899      40.6265   0.9683
          Barbara      50.4463   0.9974      40.3110   0.9777
          Einstein     38.4985   0.9880      32.9937   0.9800
          Pyramid      45.1144   0.9994      36.9293   0.9969
          Astrol       45.0999   0.9550      34.9593   0.8227
          Astro2       45.5710   0.9853      35.1238   0.9276
Model 5   Lena         39.4876   0.9583      38.6548   0.9541
          Barbara      41.3640   0.9755      39.7834   0.9677
          Einstein     40.7318   0.9697      39.4389   0.9696
          Pyramid      45.4617   0.9992      39.3131   0.9979
          Astrol       40.2209   0.9998      36.8370   0.9998
          Astro2       39.7477   0.9333      35.4243   0.8905
Model 6   Lena         49.2344   0.9954      39.5561   0.9664
          Barbara      44.8573   0.9907      39.6969   0.9777
          Einstein     47.4786   0.9951      39.5870   0.9740
          Pyramid      51.4413   0.9998      39.7432   0.9976
          Astrol       48.1736   0.9909      39.8931   0.8539
          Astro2       41.6635   0.9554      39.6700   0.9359

Table 4.1: Performance Comparison of the Different Models with Different Images
The soft-thresholding value and w, σ_d and σ_r for models 3 to 6 are optimized for optimal performance using FPGA.
Floating point GA (FPGA) features used:
(i) Heuristic crossover
(ii) Non-uniform mutation
(iii) Normalized geometric selection function
Tuned parameters for the FPGA algorithm:
Population Size = 20
Maximum Iterations = 200
Penalty multiplier = 10.
The GA toolbox GAOT in Matlab, as proposed by C.R. Houck, J. Joines and M. Kay [105], is used after modification for the optimization of the problem.
The PSNR and IQI values obtained for all the models with filter parameters
optimized using FPGA for all the images are given in table 4.1.
The performance in terms of PSNR and IQI is illustrated in figs. 4.13 and 4.14
respectively.
The six images, Lena, Barbara, Einstein, Pyramid, Astrol and Astro2, that
are denoised by the six models without FPGA optimization and with FPGA optimization, are depicted in figs. 4.7, 4.8, 4.9, 4.10, 4.11 and 4.12 respectively.
Figure 4.7: Denoised images by Model 1: Upper row is without FPGA and Lower
row is with FPGA
It is evident in figs. 4.13 and 4.14 that there is significant improvement in the performance of most of the models because of the optimization of the filter parameters using FPGA in lieu of the manual trial and error method reported in the previous chapter.
Figure 4.8: Denoised images by Model 2: Upper row is without FPGA and Lower
row is with FPGA
Figure 4.9: Denoised images by Model 3: Upper row is without FPGA and Lower
row is with FPGA
Figure 4.10: Denoised images by Model 4: Upper row is without FPGA and Lower
row is with FPGA
All the models have been found to be capable of denoising all the images to some level. However, there is some difference in their performance in terms of PSNR and IQI. The comparative performances of the FPGA optimized models on different types of images are described below.
For the standard image Lena, model 6 achieved the best PSNR and IQI values, followed by model 4. Models 2, 5, 1 and 3 follow in descending order in terms of PSNR, as is evident in figs. 4.13 and 4.14 and in table 4.1. In the case of Barbara, the performance of model 4 is the best in terms of PSNR and IQI,
Figure 4.11: Denoised images by Model 5: Upper row is without FPGA and Lower
row is with FPGA
Figure 4.12: Denoised images by Model 6: Upper row is without FPGA and Lower
row is with FPGA
followed by models 6 and 2 in terms of PSNR; in terms of IQI, model 6 is better than model 2. For the image Einstein, model 6 generates the best PSNR as well as the best IQI.
When the satellite image, Pyramid, is considered, the best model in terms of PSNR and IQI is again model 6, followed by model 2; models 4 and 5 follow them. However, in terms of IQI, all the models except model 3 perform comparably.
For the astronomical telescopic image Astrol, model 6 gives the best results in terms of PSNR, while model 5 performs best in terms of IQI. For the other telescopic image, Astro2, model 4 achieves the best results in terms of both PSNR and IQI.
Generally, it is observed that the overall performance of model 6 is the most consistent, especially in terms of IQI, on all types of images, followed by model 4. The performance of model 3 is the worst on all the images.
Another general observation on the overall comparative performance among the six models investigated in this work is that the model with wavelet based thresholding filters only on all the decomposed subbands of any image performs far better than the model with bilateral filters applied in different ways
Figure 4.13: Performance comparison of the models with and without GA optimization in terms of PSNR

Figure 4.14: Performance comparison of the models with and without GA optimization in terms of IQI
along with wavelet based thresholding filters on the decomposed subbands. But when the bilateral filter is used before or after or on both sides of the decomposition (not on the decomposed subbands), followed by reconstruction of the image, as used in models 4, 5 and 6, the performance of the filters improves, as also reported in the previous chapter. After FPGA optimization, model 4 has emerged as a strong competitor to model 6 for most types of images.
4.5 Conclusion
The filter parameters of all the filters, including the hybrid denoising models developed through hybridization in the three apparently best configurations, are optimized using FPGA, and the performances of the thus optimally designed filters are tested on different types of noisy images. The performance of the models is evaluated in terms of PSNR and IQI. A comparison is drawn between the performance of all six models with filter parameters tuned using the trial and error method and that with filter parameters optimized using FPGA. It has been observed that the performance of all the models improves with FPGA optimized filter parameters, even though the amount of improvement differs from model to model. The overall observation of the results reveals that the use of bilateral filters in different configurations along with wavelet based thresholding filters on all the decomposed subbands worsens the performance of the filter, whereas the bilateral filter before or after or on both sides of the decomposition (not on the decomposed subbands), followed by reconstruction of the image, as used in models 4, 5 and 6, improves the performance of the filter in denoising noisy images, which is in line with the inference drawn in the previous chapter.
However, amongst the three models 4, 5 and 6, the performance of model 4 is quite competitive with that of model 6, which is better and more consistent on all the images except Astro2-type images. So model 6, wherein the bilateral filter is applied before decomposition, is recommended as a competent model for denoising any type of image.