14 Simulation Optimization
8/13/2019
Simulation optimization
Simulated annealing
Tabu search
Genetic algorithms
Introduction: Metaheuristics
Let us consider a small but modestly difficult nonlinear programming problem.
The objective function is so complicated that it would be difficult to determine where the global optimum lies without the benefit of viewing a plot of the function.
Calculus could be used, but it would require solving a polynomial equation of the fourth degree. (Why fourth degree? Setting the derivative of the fifth-degree objective below to zero yields a fourth-degree equation.)
It would even be difficult to see that the function has several local optima rather than just a global optimum.

Maximize  f(x) = 12x^5 - 975x^4 + 28,000x^3 - 345,000x^2 + 1,800,000x
subject to  0 <= x <= 31.
Introduction: Metaheuristics
An analytical solution is difficult to obtain because this is an example of non-convex programming, a special type of problem that typically has multiple local optima.
Hence, we have to use an approximation method, i.e., a heuristic method. One popular heuristic method is the local improvement method.
Such a procedure starts with an initial trial solution (seed value) and then, at each iteration, searches the neighborhood of the current trial solution to find better trial solutions.
The process continues until no improved solution can be found in the neighborhood of the current trial solution.
Introduction: Metaheuristics
Another name for such techniques is hill climbing procedures.
The procedure keeps climbing higher on the plot of objective
function until it essentially reaches the top of the hill
(assuming that we are solving a maximization problem).
A well-designed local improvement procedure usually will
reach a local optimum (the top of a hill), but it then will stop
even if the local optimum is not the global optimum (the top of
the tallest hill).
Introduction: Metaheuristics
For our example, let us use a gradient search procedure.
Say we start with the initial solution x = 0.
The local improvement procedure would climb up the hill, successively trying larger values of x, until it essentially reaches the top of the hill at x = 5, at which point the procedure stops.
Hence, we would be trapped at a local optimum (x = 5) and would never reach the global optimum (x = 20).
This is typical of local improvement procedures. Another example of a local improvement procedure: a TSP with a sub-tour reversal algorithm.
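This behavior can be reproduced with a few lines of Python. The objective function and bounds are the example's; the step size and the function name are our illustrative choices:

```python
def f(x):
    # Objective of the example: f(x) = 12x^5 - 975x^4 + 28,000x^3 - 345,000x^2 + 1,800,000x
    return 12*x**5 - 975*x**4 + 28000*x**3 - 345000*x**2 + 1800000*x

def hill_climb(x, step=0.01, lo=0.0, hi=31.0):
    """Local improvement: move to the better adjacent point until no neighbor improves f."""
    while True:
        nxt = max((min(hi, x + step), max(lo, x - step)), key=f)
        if f(nxt) <= f(x):
            return x
        x = nxt

print(round(hill_climb(0.0), 2))   # gets trapped at the local optimum x = 5
print(round(hill_climb(15.0), 2))  # a different start reaches the global optimum x = 20
```

Starting from x = 0, every improving step moves right until the slope vanishes at x = 5, exactly the trap described above.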
Introduction: Metaheuristics
These methods are designed only to keep improving on the
current trial solutions within the local neighborhood of those
solutions.
Once they climb on top of a hill, they must stop because they
cannot climb any higher within the local neighborhood of the
trial solution at the top of that hill.
This, then, becomes the biggest drawback of the local
improvement procedure:
When a well-designed local improvement procedure is applied to
an optimization problem with multiple local optima, it will
converge to one local optimum and stop.
Introduction: Metaheuristics
One way to overcome this drawback is to restart the local
improvement procedure a number of times from randomly
selected trial solutions.
Restarting from a new part of the feasible region often leads to
a new local optimum.
This, then, increases the chance that the best of the local
optima obtained actually will be the global optimum.
Not surprisingly, this works for smaller problems like the one
we considered.
However, it is much less successful on large problems with
many variables and a complicated feasible region.
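The restart idea can be sketched on the same one-dimensional example (self-contained; the number of restarts, the step size, and the seed are illustrative assumptions, not from the slides):

```python
import random

def f(x):
    return 12*x**5 - 975*x**4 + 28000*x**3 - 345000*x**2 + 1800000*x

def hill_climb(x, step=0.01, lo=0.0, hi=31.0):
    while True:
        nxt = max((min(hi, x + step), max(lo, x - step)), key=f)
        if f(nxt) <= f(x):
            return x
        x = nxt

def multistart(n_starts=5, rng=None):
    """Restart local improvement from random initial solutions; keep the best local optimum found."""
    rng = rng or random.Random(42)  # seeded for reproducibility (illustrative choice)
    results = [hill_climb(rng.uniform(0.0, 31.0)) for _ in range(n_starts)]
    return max(results, key=f)

best = multistart()
print(best, f(best))  # with several restarts, the global optimum x = 20 is usually found
```

Every climb ends on one of the hilltops (x = 5, x = 20, or the boundary x = 31), so taking the best over several random starts raises the chance of hitting the tallest hill, exactly as argued above.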
Introduction: Metaheuristics
Say the feasible region has numerous nooks and crannies and we start a local improvement procedure from multiple locations.
However, only one of these locations will lead to the global optimum.
In such a case, restarting from randomly selected initial trial solutions becomes a very haphazard way to reach the global optimum.
What is needed in such cases is a more structured approach that uses the information being gathered to guide the search toward the global optimum.
This is the role that a metaheuristic plays!
Introduction: Metaheuristics
Nature of metaheuristics
A metaheuristic is a general kind of solution method that orchestrates the interaction between local improvement procedures and higher-level strategies to create a process that is capable of escaping from local optima and performing a robust search of the feasible region.
Thus, one key feature of a metaheuristic is its ability to escape from a local optimum.
After reaching (or nearly reaching) a local optimum, different metaheuristics execute this escape in different ways.
Introduction: Metaheuristics
However, a common characteristic is that the trial solutions that immediately follow a local optimum are allowed to be inferior to this local optimum.
The advantage of a well-designed metaheuristic is that it tends to move relatively quickly toward very good solutions, so it provides a very efficient way of dealing with large, complicated problems.
The disadvantage is that there is no guarantee that the best solution found will be an optimal solution or even a near-optimal solution.
Simulated annealing
The local improvement procedure we described commits to climbing the current hill; it never comes down from it to look for the tallest hill.
Instead, simulated annealing focuses mainly on searching for the tallest hill.
Since the tallest hill can be anywhere in the feasible region, the early emphasis is on taking steps in random directions.
Along the way, we reject some, but not all, steps that would go downward rather than upward.
Since most of the accepted steps go upward, the search will gradually gravitate toward those parts of the feasible region containing the tallest hills.
Simulated annealing
Therefore, the search process gradually increases the emphasis on climbing upward by rejecting an increasing proportion of the steps that go downward.
Like any other local improvement procedure, simulated annealing moves from the current trial solution to an immediate neighbor in the local neighborhood of this solution.
How is an immediate neighbor selected?
Let
Zc = objective function value for the current trial solution;
Zn = objective function value for the current candidate to be the next trial solution;
Simulated annealing
T = a parameter that measures the tendency to accept the current candidate to be the next trial solution if this candidate is not an improvement on the current trial solution.
Move selection rule:
Among all the immediate neighbors of the current trial solution, select one randomly to become the current candidate to be the next trial solution.
Assuming the objective is maximization of the objective function, accept or reject this candidate to be the next trial solution according to the following rule:
Simulated annealing
1. If Zn >= Zc, always accept this candidate.
2. If Zn < Zc, accept this candidate with probability
   Prob{acceptance} = e^x, where x = (Zn - Zc) / T.
Simulated annealing
Looking at the probability expression, we will usually accept a step that is only slightly downhill, but will seldom accept a steep downward step.
Starting with a relatively large value of T makes the probability of acceptance relatively large, which enables the search to proceed in almost random directions.
Gradually decreasing the value of T as the search continues gradually decreases the probability of acceptance, which increases the emphasis on mostly climbing upward.
Thus the choice of T over time controls the degree of randomness in the process of allowing downward steps.
Simulated annealing
The usual method of implementing the move selection rule, to determine whether a particular downward step will be accepted, is to compare a U[0, 1] random variable (r.v.) to the probability of acceptance.
So:
Generate U ~ U[0, 1].
If U < Prob{acceptance}, accept the downward step;
otherwise, reject the step.
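This acceptance test can be written directly (a minimal sketch; the function name is ours):

```python
import math
import random

def accept(z_new, z_cur, T, rng=random):
    """SA move selection rule for maximization: always accept uphill moves;
    accept a downhill move with probability e^((Zn - Zc)/T), via a U[0,1] draw."""
    if z_new >= z_cur:
        return True
    return rng.random() < math.exp((z_new - z_cur) / T)
```

With a large T even steep downward steps are often accepted; as T shrinks, the same downhill move is almost always rejected.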
Simulated annealing
The reason for using this probability-of-acceptance formula is that the process is based on an analogy to the physical annealing process.
The physical annealing process initially involves melting a metal or glass at a high temperature.
Then the substance is cooled slowly until it reaches a low-energy stable state with desirable physical properties.
At any given temperature T during the process, the energy level of the atoms in the substance is fluctuating but tending to decrease.
A mathematical model of how the energy level fluctuates assumes that changes occur randomly, except that only some of the increases are accepted.
Simulated annealing
In particular, the probability of accepting an increase when the temperature is T has the same form as Prob{acceptance} in the move selection rule.
Hence, just as for the physical annealing process, a key question when designing a simulated annealing algorithm for an optimization problem is selecting an appropriate temperature schedule to use.
This schedule needs to specify the initial, relatively large value for the temperature, as well as subsequent progressively smaller values.
It also needs to specify how many moves (iterations) should be made at each value of T.
Simulated annealing
Outline of the basic simulated annealing algorithm
Initialization: Start with a feasible initial trial solution.
Iteration: Use the move selection rule to select the next trial solution. If none of the immediate neighbors of the current trial solution are accepted, the algorithm is terminated.
Check the temperature schedule: when the desired number of iterations has been performed at the current value of T, decrease T to the next value in the temperature schedule and resume performing iterations at this next value.
Simulated annealing
Stopping rule: when the desired number of iterations have
been performed at the smallest value of Tin the temperature
schedule, stop. Algorithm is also stopped when none of the
immediate neighbors of the current trial solution are accepted.
Accept the best trial solution found at any iteration (including
for larger values of T) as the final solution.
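Putting the pieces together for the one-dimensional example from the introduction gives a compact sketch. The neighborhood width, iteration counts, starting point, and seed are illustrative assumptions; the temperature schedule follows the T1 = 0.2 Zc, Ti = 0.5 Ti-1 pattern used later for the TSP:

```python
import math
import random

def f(x):
    return 12*x**5 - 975*x**4 + 28000*x**3 - 345000*x**2 + 1800000*x

def simulated_annealing(x0=2.5, lo=0.0, hi=31.0, n_temps=5, moves_per_temp=25, rng=None):
    rng = rng or random.Random(7)        # seeded for reproducibility (illustrative)
    x, zc = x0, f(x0)
    best_x, best_z = x, zc
    T = 0.2 * zc                         # T1 = 0.2 * Zc
    for _ in range(n_temps):
        for _ in range(moves_per_temp):
            xn = min(hi, max(lo, x + rng.uniform(-2.0, 2.0)))  # random immediate neighbor
            zn = f(xn)
            # move selection rule: always accept uphill; downhill with prob e^((Zn-Zc)/T)
            if zn >= zc or rng.random() < math.exp((zn - zc) / T):
                x, zc = xn, zn
                if zc > best_z:
                    best_x, best_z = x, zc
        T *= 0.5                         # Ti = 0.5 * T(i-1)
    return best_x, best_z
```

The best trial solution found at any iteration (not merely the final one) is returned, as the stopping rule above prescribes.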
Simulated annealing
TSP example
Initial trial solution: we may enter any feasible solution (sequence of cities on the tour), perhaps by randomly generating the sequence. It might be helpful to enter a good feasible solution as the initial trial solution, e.g. 1-2-3-4-5-6-7-1.
Neighborhood structure: an immediate neighbor of the current trial solution is one that is reached by making a sub-tour reversal.
We must, however, rule out the sub-tour reversal that simply reverses the direction of the tour provided by the current trial solution.
Simulated annealing
TSP example
Random selection of an immediate neighbor: selecting the sub-tour to be reversed requires selecting the slot in the current sequence of cities where the sub-tour currently begins and then the slot where the sub-tour currently ends.
The ending slot must be somewhere after the beginning slot, excluding the last slot.
We can use random numbers to give equal probabilities to selecting any of the eligible beginning slots and then any of the eligible ending slots.
If this selection turns out to be infeasible, then the process is repeated until a feasible selection is made.
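The random choice of beginning and ending slots can be sketched as follows. The tour is represented as a city list starting at the home base with the return link implicit, and link-feasibility checking is omitted (which amounts to assuming a complete graph); the function name is ours:

```python
import random

def random_subtour_reversal(tour, rng=random):
    """Return an immediate neighbor of `tour` by reversing a randomly chosen sub-tour.
    Excludes the reversal that merely flips the direction of the whole tour."""
    n = len(tour)
    while True:
        i = rng.randrange(1, n - 1)      # beginning slot (never the home base)
        j = rng.randrange(i + 1, n)      # ending slot, somewhere after the beginning
        if (i, j) != (1, n - 1):         # rule out reversing the entire tour
            return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

print(random_subtour_reversal([1, 2, 3, 4, 5, 6, 7], random.Random(0)))
```

In a real TSP instance one would additionally reject neighbors whose new links do not exist, repeating the selection as described above.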
Simulated annealing
TSP example
Temperature schedule: five iterations are performed at each of the five values of T (T1, T2, T3, T4, T5) in turn, where:
T1 = 0.2 Zc, where Zc is the objective function value of the initial trial solution,
and after that, Ti = 0.5 Ti-1.
The specified T values are just illustrative (and valid) values.
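This schedule is easy to compute (a small helper; the function name is ours):

```python
def temperature_schedule(z_init, n_temps=5):
    """T1 = 0.2 * Zc, then Ti = 0.5 * T(i-1)."""
    temps = [0.2 * z_init]
    for _ in range(n_temps - 1):
        temps.append(0.5 * temps[-1])
    return temps

print(temperature_schedule(100.0))  # [20.0, 10.0, 5.0, 2.5, 1.25]
```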
Tabu search
Tabu search begins by using a local search procedure as a local improvement procedure in the usual sense to find a local optimum.
That is, initially, we usually allow only improving solutions.
As with simulated annealing, the strategy in tabu search is to then continue the search by allowing non-improving moves to the best solutions in the neighborhood of the local optimum.
Once a point is reached where better solutions can be found in the neighborhood of the current trial solution, the local improvement procedure can be re-applied to find the new local optimum.
Tabu search
This version of the local improvement procedure is sometimes referred to as the steepest ascent / mildest descent approach.
Each iteration selects the available move that goes furthest up the hill, or, when an upward move is not available, selects a move that drops least down the hill.
The danger with this approach is that after moving away from a local optimum, the process will cycle right back to the same local optimum.
To avoid this, a tabu search temporarily forbids moves that would return to (and perhaps toward) a solution previously visited.
Tabu search
A tabu list records these forbidden moves, which are referred
to as tabu moves.
The only exception to forbidding such a move is if it is found
that a tabu move actually is better than the best feasible
solution found so far.
This use of memory to guide the search by using tabu lists to
record some of the recent history of the search is a distinctive
feature of tabu search.
This feature comes from artificial intelligence.
Tabu search
Outline of a basic tabu search algorithm
Initialization: Start with a feasible initial trial solution.
Iteration:
Use an appropriate local search procedure to define the feasible moves into the local neighborhood of the current trial solution.
Eliminate from consideration any move on the current tabu list, unless the move would result in a better solution than the best trial solution found so far.
Determine which of the remaining moves provides the best solution.
Adopt this solution as the next trial solution, regardless of whether it is better or worse than the current trial solution.
Tabu search
Outline of a basic tabu search algorithm (cont.)
Update the tabu list to forbid cycling back to what had been the current trial solution.
If the tabu list already was full, delete the oldest member of the tabu list to provide more flexibility for future moves.
Stopping rule:
Use any stopping criterion, such as a fixed number of iterations, a fixed amount of CPU time, or a fixed number of consecutive iterations without an improvement in the best objective function value.
Also stop at any iteration where there are no feasible moves in the local neighborhood of the current trial solution.
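Applied to the one-dimensional example from the introduction (integer points 0..31, with neighbors x-1 and x+1), the outline above becomes the sketch below. The tabu-list length, iteration limit, and integer neighborhood are our illustrative choices:

```python
from collections import deque

def f(x):
    return 12*x**5 - 975*x**4 + 28000*x**3 - 345000*x**2 + 1800000*x

def tabu_search(start=0, lo=0, hi=31, tabu_len=3, max_iter=40):
    x = start
    best_x, best_z = x, f(x)
    tabu = deque(maxlen=tabu_len)        # oldest entries fall off automatically
    for _ in range(max_iter):
        tabu.append(x)                   # forbid cycling back to the current solution
        moves = [n for n in (x - 1, x + 1) if lo <= n <= hi]
        # a tabu move is admissible only if it beats the best solution found so far
        admissible = [n for n in moves if n not in tabu or f(n) > best_z]
        if not admissible:
            break                        # no feasible moves in the neighborhood
        x = max(admissible, key=f)       # steepest ascent / mildest descent
        if f(x) > best_z:
            best_x, best_z = x, f(x)
    return best_x, best_z

print(tabu_search())  # escapes the local optimum at x = 5 and finds x = 20
```

Because the tabu list forbids the step back, the mildest-descent move carries the search down the far side of the x = 5 hill and up to the global optimum at x = 20.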
Tabu search
TSP example
Local search algorithm: At each iteration, choose the best
immediate neighbor of the current trial solution that is not
ruled out by the tabu status.
Neighborhood structure: An immediate neighbor of the current
trial solution is one that is reached by making a sub-tour
reversal. Such a reversal requires adding two links and
deleting two other links from the current trial solution.
Form of tabu moves: List the links such that a particular sub-
tour reversal would be tabu if both links to be deleted in this
reversal are on the list.
Tabu search
TSP example
Addition of tabu moves: at each iteration, after choosing the two links to be added to the current solution, also add these two links to the tabu list.
Maximum size of the tabu list: four (two from each of the two most recent iterations). Whenever a pair of links is added to a full list, delete the two links that have been on the list the longest.
Stopping rule: stop after three consecutive iterations without an improvement in the best objective function value. Also stop at any iteration where the current trial solution has no immediate feasible neighbor.
Genetic algorithms
Just as simulated annealing is based on a physical phenomenon (the physical annealing process), genetic algorithms are based on another natural phenomenon.
In this case, the analogy is the biological theory of evolution formulated by Charles Darwin in the mid-19th century.
Each species of plants and animals shows great individual variation.
Darwin observed that those individuals with variations that impart a survival advantage through improved adaptation to the environment are more likely to survive to the next generation.
Genetic algorithms
This phenomenon is popularly known as survival of the fittest.
Modern research in the field of genetics explains the process of evolution and the natural selection involved in the survival of the fittest.
In any species that reproduces by sexual reproduction, each offspring inherits some of the chromosomes of each of the two parents, where the genes within the chromosomes determine the individual features of the child.
A child who happens to inherit the better features of the parents is slightly more likely to survive into adulthood and then become a parent who passes on these features to the next generation.
Genetic algorithms
The population tends to improve slowly over time by this process.
A second factor that contributes to this process is a random, low-level mutation rate in the DNA of the chromosomes.
Thus, a mutation occasionally occurs that changes the features of a chromosome that a child inherits from a parent.
Although most mutations have no impact or are disadvantageous, some mutations provide desirable improvements.
Children with desirable mutations are slightly more likely to survive and contribute to the future gene pool of the species.
Genetic algorithms
These ideas transfer over to dealing with optimization problems in a rather natural way:
Feasible solutions for a particular problem correspond to members of a particular species, where the fitness of each member is measured by the value of the objective function.
Rather than processing a single trial solution at a time (as we did for simulated annealing and tabu search), we now work with an entire population of trial solutions.
For each iteration (generation) of a genetic algorithm, the current population consists of the set of trial solutions currently under consideration.
Genetic algorithms
These current trial solutions are the currently living members of the species.
Some of the youngest members of the population (especially including the fittest ones) survive into adulthood and become parents (who are paired at random) to produce children.
The children are new trial solutions who share some of the features (genes) of both parents.
Since the fittest members of the population are more likely to become parents than others, a genetic algorithm tends to generate improving populations of trial solutions as it proceeds.
Mutations occasionally occur, so that certain children also acquire (sometimes desirable) features that were not possessed by either parent.
Genetic algorithms
This helps a genetic algorithm to explore a new, perhaps better part of the feasible region than previously considered.
Eventually, survival of the fittest should tend to lead a genetic algorithm to a trial solution that is at least nearly optimal.
Although the analogy to the process of biological evolution defines the core of any genetic algorithm, it is not necessary to adhere rigidly to this analogy in every detail.
For example, some genetic algorithms allow the same trial solution to be a parent repeatedly over multiple generations (iterations).
Thus, the analogy needs to be only a starting point for defining the details of the algorithm to best fit the problem under consideration.
Genetic algorithms
Outline of a basic genetic algorithm
Initialization: Start with an initial population of feasible trial solutions, perhaps by generating them randomly. Evaluate the fitness (the objective function value) for each member of this current generation.
Iteration:
Use a random process that is biased toward the more fit members of the current population to select some of its members to become parents.
Pair up the parents randomly and then have each pair of parents give birth to two children (new feasible trial solutions) whose features (genes) are a random mixture of the features of the parents.
What if the random mixture of features and/or any mutations result in an infeasible solution?
Genetic algorithms
Such cases are miscarriages, so the process of attempting to give birth is repeated until a child is born that corresponds to a feasible solution.
Retain the children and enough of the best members of the current population to form the new population, of the same size, for the next iteration.
Discard the other members of the population.
Evaluate the fitness of each new member (the children) in the new population.
Genetic algorithms
Stopping rule:
Use some stopping rule, such as a fixed number of iterations, a fixed amount of CPU time, or a fixed number of consecutive iterations without any improvement in the best trial solution found so far.
Use the best trial solution found on any iteration as the final solution.
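The outline can be sketched on the one-dimensional example from the introduction, coding each trial solution x in 0..31 as a 5-bit string so that crossover and mutation act on "genes". The population size, selection scheme, rates, and seed below are our illustrative choices, not prescriptions from the slides:

```python
import random

def f(x):
    return 12*x**5 - 975*x**4 + 28000*x**3 - 345000*x**2 + 1800000*x

def genetic_algorithm(pop_size=10, n_bits=5, mutation_rate=0.1, generations=20, rng=None):
    rng = rng or random.Random(3)                   # seeded for reproducibility
    pop = [rng.randrange(2 ** n_bits) for _ in range(pop_size)]
    best = max(pop, key=f)
    for _ in range(generations):
        # selection biased toward fitter members: keep the better half as parents
        parents = sorted(pop, key=f, reverse=True)[:pop_size // 2]
        children = []
        while len(children) < pop_size // 2:
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)          # single-point crossover
            mask = (1 << cut) - 1
            child = (p1 & mask) | (p2 & ~mask)      # mix the parents' genes
            for b in range(n_bits):                 # occasional mutation flips a gene
                if rng.random() < mutation_rate:
                    child ^= 1 << b
            children.append(child)
        # new population: the children plus the best members of the current one
        pop = parents + children
        best = max(best, max(pop, key=f), key=f)
    return best, f(best)
```

Every 5-bit string decodes to a feasible x in 0..31, so no "miscarriage" handling is needed in this particular encoding; for constrained problems the birth step would be repeated until a feasible child appears, as described above.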
Genetic algorithms
TSP example
Population size: ten. (Reasonable for the small problems for which the software is designed. For large problems, a bigger size may be required.)
Selection of parents:
From among the five most fit members (by objective function value), select four randomly to become parents.
From among the five least fit members, select two randomly to become parents.
Pair up the six parents randomly to form three couples.
Passage of features (genes) from parents to children: this is problem dependent. It is explained later.
Genetic algorithms
TSP example
Mutation rate: The probability that an inherited feature of a
child mutates into an opposite feature is set at 0.1 in the
current algorithm.
Stopping rule: Stop after five consecutive iterations without
any improvement in the best trial solution found so far.
Genetic algorithms
TSP example
Our example is probably too simplistic: it has only about 10 distinct feasible solutions (if we don't count sequences in reverse order as separate). Hence a population of 10 distinct members won't be possible in such a case!
We represent a solution by just the sequence in which the cities are visited. However, in most applications of GAs, the members of the population are typically coded so that it is easier to generate children, create mutations, etc.
The first task is then to generate the population for the initial generation.
Genetic algorithms
TSP example
Starting with the home base city (1), random numbers are used to select the next city from among those that have a link to city 1.
The same process is repeated to select the subsequent cities to be visited on this tour (member).
We stop when all the cities have been visited and we are back at the home base city.
Or we reach a dead end (because there is no link from the current city to any of the remaining cities that are still not on the tour). In this case, we start the process all over again.
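This construction can be sketched generically. The `links` adjacency structure and the function name are our own; the slides do not give the example's link table, so the usage below runs on a hypothetical complete graph over five cities:

```python
import random

def random_tour(links, home=1, rng=random):
    """Build one random feasible tour, restarting whenever a dead end is reached.
    `links` maps each city to the set of cities it is linked to (assumed symmetric)."""
    while True:
        tour, current = [home], home
        while len(tour) < len(links):
            options = [c for c in links[current] if c not in tour]
            if not options:
                break                      # dead end: start the process all over again
            current = rng.choice(options)
            tour.append(current)
        if len(tour) == len(links) and home in links[current]:
            return tour                    # all cities visited, with a link back home

# hypothetical data: a complete graph over five cities
links = {i: {j for j in range(1, 6) if j != i} for i in range(1, 6)}
print(random_tour(links, rng=random.Random(0)))
```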
Genetic algorithms
TSP example
Random numbers are also used to generate children from parents.
A child's genetic make-up should be composed entirely of genes from its parents. (One exception is when a parent may transfer a tour reversal to its child.)
Genetic algorithms
TSP example: procedure for generating a child
1. Initialization: to start, designate the home base city as the current city.
2. Options for the next link: identify all links out of the current city, not already in the child's tour, that are used by either parent in either direction. Also add any link that is needed to complete a sub-tour reversal that the child's tour is making in a portion of a parent's tour.
3. Selection of the next link: use a random number to randomly select one of the options identified in step 2.
Genetic algorithms
TSP example: procedure for generating a child (cont.)
4. Check for mutation: occasionally (with the mutation probability), reject the link from step 3 and instead randomly include any other link from the current city to a city not currently on the tour.
5. Continuation: add the link from step 3 or step 4 to the end of the child's currently incomplete tour and re-designate the newly added city as the current city.
6. Completion: with only one city remaining that has not yet been added to the child's tour, add the link from the current city to this remaining city. Then add the link from this last city back to the home base city to complete the tour for the child.
Genetic algorithms
TSP example: initialization step. 10 members of Generation Zero generated randomly:
1. 1-2-4-6-5-3-7-1 (64)
2. 1-2-3-5-4-6-7-1 (65)
3. 1-7-5-6-4-2-3-1 (65)
4. 1-2-4-6-5-3-7-1 (64)
5. 1-3-7-6-5-4-2-1 (66)
6. 1-2-4-6-5-3-7-1 (64)
7. 1-7-6-4-5-3-2-1 (65)
8. 1-3-7-6-5-4-2-1 (69)
9. 1-7-6-4-5-3-2-1 (65)
10. 1-2-4-6-5-3-7-1 (64)
Notice that many members are repeated or are just obtained by reversing the entire tour.
Genetic algorithms
TSP example: initialization step
Notice that members 1, 4, 6 and 10 are identical, and so are 2, 7 and 9 (except for a difference of a small sub-tour reversal).
Therefore the random generation of members yields only five distinct solutions.
Genetic algorithms
TSP example: initialization step. The 10 members of Generation Zero arranged in decreasing order of fitness:
1. 1-2-4-6-5-3-7-1 (64)
2. 1-2-4-6-5-3-7-1 (64)
3. 1-2-4-6-5-3-7-1 (64)
4. 1-2-4-6-5-3-7-1 (64)
5. 1-2-3-5-4-6-7-1 (65)
6. 1-7-5-6-4-2-3-1 (65)
7. 1-7-6-4-5-3-2-1 (65)
8. 1-7-6-4-5-3-2-1 (65)
9. 1-3-7-6-5-4-2-1 (66)
10. 1-3-7-6-5-4-2-1 (69)
We pick members 1, 2, 4, and 5 from the top five members and members 6 and 7 from the bottom five to become parents.
Genetic algorithms
TSP example: Iteration 1. Child birth in Generation One (cont.)
Parents: members 4 and 5
1-2-4-6-5-3-7-1 (64)
1-3-7-6-5-4-2-1 (66)
Children:
1. 1-2-4-6-5-3-7-1 (64)
2. 1-3-7-5-6-4-2-1 (66)
Genetic algorithms
TSP example: Iteration 1. Child birth in Generation One.
Four of the six children generated are identical to one of their parents.
Two of the children have better fitness than one of their parents, but neither improved upon both of its parents.
None of these children provides an optimal solution (which we know to have a distance of 63).
This illustrates the fact that a GA may require many generations (iterations) on some problems before the survival-of-the-fittest phenomenon results in clearly superior populations.
Genetic algorithms
TSP example: composition of Generation One. Retain all the children and carry over four of the fittest members from Generation Zero:
1. 1-2-4-5-6-7-3-1 (69)
2. 1-2-4-6-5-3-7-1 (64)
3. 1-2-4-5-6-7-3-1 (69)
4. 1-7-6-4-5-3-2-1 (65)
5. 1-2-4-6-5-3-7-1 (64)
6. 1-3-7-5-6-4-2-1 (66)
7. 1-2-4-6-5-3-7-1 (64)
8. 1-2-4-6-5-3-7-1 (64)
9. 1-2-4-6-5-3-7-1 (64)
10. 1-2-4-6-5-3-7-1 (64)