TRANSCRIPT
Multi-Objective Optimization Methods for Optimal Funding Allocations to Mitigate Chemical and Biological Attacks
Roshan Rammohan, Ryan Schnalzer, Mahmoud Reda Taha, Tim Ross and Frank Gilfeather
University of New Mexico
Ram Prasad
New Mexico State University
Outline
Introduction
MIDST: Exploration Mode
MIDST: Optimization mode
Alternative Optimization Methods
Case study
Conclusions
Introduction
What is the optimal budget $B, and its distribution across N investment units, that minimizes the consequences of S CB events?
MIDST: Problem Statement
Introduction
[Diagram: a total $ budget is split across N investment units, each with an effectiveness θ; combining these with the "no investment" consequences Cex for each CB event class yields the expected consequences over all possible individual CB events.]
Introduction
MIDST: Exploration Mode
[Flowchart: data cards → interpolate investment-unit effectiveness at funding levels $X1 … $Xm → fuse abilities into overall effectiveness over investment units I1 … In with likelihoods L1 … Ln → expectation → expected consequences.]
Soliciting Information: Data Cards
Establishing the effectivity function
- Polynomial or spline interpolation
- Multivariate interpolation (see Prasad et al., tomorrow!)
[Plot: effectivity e as a function of funding $X.]
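The interpolation step can be sketched as follows. The data-card points below are hypothetical, and Lagrange polynomial interpolation stands in for whichever polynomial or spline scheme is actually used:

```python
# Hypothetical data-card points: funding levels ($M) and elicited effectivity.
funding = [0.0, 10.0, 20.0, 40.0]
effectivity = [0.0, 0.30, 0.50, 0.70]

def effectivity_at(x, xs=funding, ys=effectivity):
    """Effectivity e($X) via Lagrange interpolation through the data-card points."""
    total = 0.0
    for j, yj in enumerate(ys):
        basis = 1.0
        for k, xk in enumerate(xs):
            if k != j:
                basis *= (x - xk) / (xs[j] - xk)
        total += yj * basis
    return total
```

The fitted curve passes exactly through the elicited points and gives an effectivity estimate at any intermediate funding level.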
Establishing the effectivity function
Using this method we establish the matrix of effectivities
$$
\mathbf{e} = \begin{Bmatrix}
e_{1,1} & e_{1,2} & \cdots & e_{1,i} & \cdots & e_{1,N} \\
e_{2,1} & e_{2,2} & \cdots & e_{2,i} & \cdots & e_{2,N} \\
\vdots  & \vdots  &        & \vdots  &        & \vdots  \\
e_{m,1} & e_{m,2} & \cdots & e_{m,i} & \cdots & e_{m,N} \\
\vdots  & \vdots  &        & \vdots  &        & \vdots  \\
e_{S,1} & e_{S,2} & \cdots & e_{S,i} & \cdots & e_{S,N}
\end{Bmatrix}
$$
for N investment units and S CB events.
Fusing Effectivities
Considering the interaction between IUs and their effect on the final consequences, we have to fuse these effectivities.
Many fusion operators exist, spanning a range from very conservative to very optimistic. [Figure: example 2-D fusion operators.]
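The slides do not name the fusion operators; as an illustrative assumption, the minimum t-norm (very conservative) and the probabilistic sum (very optimistic) bracket the range of common 2-D fusion operators:

```python
def fuse_conservative(e1, e2):
    """Very conservative fusion: combined effectivity is the weaker of the two."""
    return min(e1, e2)

def fuse_optimistic(e1, e2):
    """Very optimistic fusion (probabilistic sum): the units reinforce each other."""
    return e1 + e2 - e1 * e2
```

Any actual fused effectivity would fall somewhere between these two extremes.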
Expected Consequences
The fusion operation results in
$$ \left\{ e^f_N \right\} = \left\{ e^f_{N1},\ e^f_{N2},\ e^f_{N3},\ \ldots,\ e^f_{Nm},\ \ldots,\ e^f_{NS} \right\} $$
for S CB events.
The expected consequence for each CB event m can be computed as
$$ C^k_m = \left( 1 - e^k_m \right) C^0_m $$
Considering the likelihoods of the CB events, we can compute the overall expected consequences as
$$ C^k = \sum_{m=1}^{S} L_m\, C^k_m $$
the vector of consequences at the $k investment level.
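The two formulas combine into a short computation. All numerical values below are hypothetical:

```python
def expected_consequence(fused_e, base_c, likelihood):
    """Overall expected consequence C^k = sum_m L_m * (1 - e^k_m) * C^0_m."""
    return sum(L * (1.0 - e) * c0
               for e, c0, L in zip(fused_e, base_c, likelihood))

# Hypothetical values for S = 3 CB events at one funding allocation $k.
e = [0.5, 0.2, 0.8]        # fused effectivities e^k_m
c0 = [100.0, 50.0, 200.0]  # no-investment consequences C^0_m
L = [0.2, 0.5, 0.3]        # event likelihoods L_m

ck = expected_consequence(e, c0, L)
```

With zero effectivity everywhere, the result reduces to the likelihood-weighted no-investment consequences, as expected.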
MIDST: Optimization Mode
[Flowchart: data cards → interpolate investment-unit effectiveness at funding levels $X1 … $Xm → fuse abilities into overall effectiveness over investment units I1 … In with likelihoods L1 … Ln → expectation → expected consequences → optimize and rank order → optimal solution.]
[Surface plot: consequences C over decision variables X and Y for a bimodal surface, with the minimum-consequences point marked.]
What does optimization mean?
We need to identify “x” that results in minimum “C”
Our optimization challenges are:
- The surface of our function is not bimodal
- There might be many local minima
- There is more than one objective, and they are not necessarily all achievable together
- Computing time, space and accuracy resolution
- Practical interests
- Derivative-based optimization
  - Gradient descent method
  - Levenberg-Marquardt
  - Many others
- Non-derivative-based optimization
  - Genetic algorithms
  - Simulated annealing
  - Many others
Methods
- To address the risks associated with the previously listed concerns/challenges, a group of optimization methods was examined.
Derivative-Free Optimization: Genetic Algorithms
- Genetic Algorithms (GA) mimic the laws of natural evolution, which emphasize "survival of the fittest".
- In GA, a "population" containing different possible solutions to the problem is created.
[Diagram: an initial generation of binary chromosomes (e.g. 1001011001100010101001001001100101111101) is transformed into the next generation through selection, crossover, mutation and elitism.]
The process is repeated until evolution happens: "a solution is found!"
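The loop in the diagram can be sketched as a minimal GA on bit strings. The fitness function and all parameters below are toy stand-ins, not the MIDST configuration:

```python
import random

def evolve(fitness, n_bits=10, pop_size=20, generations=50, seed=1):
    """Minimal GA: selection, crossover, mutation and elitism on bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_gen = pop[:2]                                 # elitism: keep the two fittest
        while len(next_gen) < pop_size:
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)    # selection from the top half
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]                    # one-point crossover
            if rng.random() < 0.2:                         # occasional mutation
                i = rng.randrange(n_bits)
                child[i] ^= 1
            next_gen.append(child)
        pop = next_gen
    return max(pop, key=fitness)

# Toy fitness: count of ones ("one-max"); the GA should approach the all-ones string.
best = evolve(fitness=sum)
```

Elitism makes the best fitness non-decreasing from generation to generation, which is why the loop reliably converges on this toy problem.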
Multi-Objective Optimization
- It is practical to assume that the decision maker might have priorities over the different objectives: casualties, mission disruption and time to recover.
- In this case, there usually exists more than one optimal solution to the problem (named a Pareto solution).
- Based on the preferences, these solutions can be rank ordered.
Multi-Objective Optimization
- Three major issues differentiate single- and multi-objective optimization:
  - Multiple (here, three) goals instead of one
  - Dealing with multiple search spaces, not one
  - Artificial fixes affect results
- We are looking for a set of Pareto-optimal solutions
Multi-Objective Optimization
[Figure: the domain of all feasible solutions, with the Pareto-optimal solutions on its boundary; axes are Casualties (Objective 1) and Mission Disruption (Objective 2).]
Multi-Objective Optimization Methods
- Global criteria method
  - Requires target values for the functions
  - Can incorporate weights for preferences
- Hierarchical optimization method
  - Optimize the top-priority function
  - Specify constraints to prevent deteriorating the optimized function
- Multi-Objective Genetic Optimization (MOGA)
  - Non-dominated Sorting Genetic Algorithm
Multi-Objective Optimization: Hierarchical Method
- Rank order the objective functions.
- The (j-1)th function is used as a constraint in optimizing the jth function:
$$ f_{j-1}(x_j) \le \left( 1 \pm \frac{\Sigma_{j-1}}{100} \right) f_{j-1}(x_{j-1}) $$
- $\Sigma_j$ is a lexicographic increment (%): how much error is allowed in losing the optimal solution for (j-1), given more optimization in (j).
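A minimal sketch of the hierarchical idea on a toy two-objective problem; grid search stands in for the real optimizer, and both objectives are invented for illustration:

```python
def hierarchical(f1, f2, candidates, eps_pct=10.0):
    """Optimize f1 first, then optimize f2 subject to the lexicographic
    constraint f1(x) <= (1 + eps_pct/100) * f1(x1*)."""
    x1 = min(candidates, key=f1)
    bound = (1.0 + eps_pct / 100.0) * f1(x1)
    feasible = [x for x in candidates if f1(x) <= bound]
    return min(feasible, key=f2)

# Toy objectives on a 1-D grid: f1 favors x near 2, f2 favors x near 3.
grid = [i / 10.0 for i in range(0, 51)]
f1 = lambda x: (x - 2.0) ** 2 + 1.0   # +1 keeps the relative bound meaningful
f2 = lambda x: (x - 3.0) ** 2
x_star = hierarchical(f1, f2, grid)
```

The 10% increment lets the secondary objective pull the solution away from f1's optimum at x = 2, but only as far as the constraint allows.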
Multi-Objective Optimization: Global Criterion
- The threshold vector is defined by
$$ f^0_i = \left[ f^0_1,\ f^0_2,\ f^0_3,\ \ldots,\ f^0_k \right] $$
- The global criterion to optimize is
$$ f(x) = \sum_{i=1}^{k} w_i \left( \frac{f^0_i - f_i(x)}{f^0_i} \right)^P $$
- P is an integer, 1 or 2.
- w can also be implemented to represent preferences as weights.
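The criterion is straightforward to evaluate; the target and achieved values below are hypothetical:

```python
def global_criterion(f_vals, f_targets, weights, p=2):
    """Weighted global criterion: sum_i w_i * ((f0_i - f_i(x)) / f0_i) ** p."""
    return sum(w * ((f0 - f) / f0) ** p
               for f, f0, w in zip(f_vals, f_targets, weights))

# Hypothetical targets f0_i and an achieved objective vector f_i(x).
targets = [100.0, 50.0, 20.0]
achieved = [80.0, 45.0, 20.0]
score = global_criterion(achieved, targets, weights=[1.0, 1.0, 1.0])
```

A solution that hits every target exactly scores zero; unequal weights shift the penalty toward the objectives the decision maker cares about most.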
Multi-Objective Optimization: Non-dominated Sorting Genetic Algorithm (NSGA)
- While similar to GA, NSGA sorts the population according to non-domination principles.
- The population is classified into a number of mutually exclusive classes.
- The highest fitness is assigned to the classes that are closest to the Pareto-optimal front.
- The use of non-dominated sorting allows diversity among solutions and thus guarantees reaching the Pareto front.
- NSGA also includes elitism principles, which allow it to find a higher number of Pareto solutions.
NSGA-II
[Diagram: the parent set (Pi) passes through selection, crossover and mutation to form the offspring population set (Oi); Pi and Oi are evaluated via the MIDST simulation for multi-objective fitness, then non-dominated sorting into classes S1 … S6 and crowding sorting, with rejection of the worst members, produce the next parent set (Pi+1).]
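The non-dominated sorting step in the diagram can be sketched as follows, assuming minimization on all objectives; the (casualties, mission disruption) points are hypothetical:

```python
def dominates(a, b):
    """a dominates b if it is no worse on every objective and better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Split points into successive non-dominated fronts (classes S1, S2, ...)."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Hypothetical (casualties, mission disruption) outcomes.
pts = [(1, 5), (2, 2), (5, 1), (3, 4), (4, 4)]
fronts = non_dominated_sort(pts)
```

The first front is the current Pareto-optimal estimate; later fronts receive progressively lower fitness, which is what drives the population toward the Pareto front.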
Merits and Shortcomings
- Derivative-based
  - If the space is a continuum, it converges very fast and an optimal solution is guaranteed
  - If too many local minima exist, the algorithm might be trapped and cannot find the global minimum
- Non-derivative-based
  - If the space is not a continuum, GA will still be able to find the solution
  - Whether local minima exist or not, it will converge
  - GA is better equipped when paired with an aiding optimization technique to narrow the search domain
[Case-study objectives: reduction in casualties, reduction in mission disruption, reduction in recovery time.]
Case study
- For a given group of data cards and inputs, we identified the base and optimal funding levels per investment unit.
[Bar chart: base vs. optimal funding for each of the 26 investment units, in $M over 10 years (y-axis 0 to 250).]
Case study
- At the optimal level, we can identify the funding portfolio:
- Portfolio for base funding: C1 = 21, C2 = 21, C3 = 42
- Portfolio for optimal funding: C1 = 11, C2 = 12, C3 = 12
Conclusions
- We demonstrated the possible use of multi-objective genetic optimization for allocating funding to investment units to reduce the consequences of CB events.
- Classical gradient-based versus gradient-free optimization techniques have been examined in the search for Pareto solutions.
- The presented work is part of MIDST: a robust mathematical framework that can help decision makers allocate funding considering multiple objectives and priorities.
- Research is currently ongoing to integrate a fuzzy rank-ordering module into the optimization process.
Acknowledgment
This research is funded by the Defense Threat Reduction Agency (DTRA) Strategic Partnership Program. The authors gratefully acknowledge this funding.
Questions
Derivative-Based Optimization
Gradient descent method
- Assumes a continuous and differentiable function
- g is the derivative of the objective function
- G is a positive definite matrix
- η is the step size
$$ \theta_{new} = \theta_{old} - \eta\, G\, g $$
$$ g(\theta) = \nabla E(\theta) = \left[ \frac{\partial E(\theta)}{\partial \theta_1}\ \ \frac{\partial E(\theta)}{\partial \theta_2}\ \ \ldots\ \ \frac{\partial E(\theta)}{\partial \theta_n} \right]^T $$
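With G taken as the identity (a simplifying assumption) the update can be sketched on a toy quadratic objective; the objective itself is invented for illustration:

```python
def gradient_descent(grad, theta, eta=0.1, steps=100):
    """theta_new = theta_old - eta * g, with G = I for simplicity."""
    for _ in range(steps):
        g = grad(theta)
        theta = [t - eta * gi for t, gi in zip(theta, g)]
    return theta

# Toy objective E(theta) = (t0 - 1)^2 + (t1 + 2)^2, minimum at (1, -2).
grad_E = lambda th: [2.0 * (th[0] - 1.0), 2.0 * (th[1] + 2.0)]
theta_star = gradient_descent(grad_E, [0.0, 0.0])
```

On this convex objective each step contracts the distance to the minimum by a constant factor, so the iterate converges to (1, -2).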
Derivative-Based Optimization
Levenberg-Marquardt (LM) method
- A modified version of the classical Newton's method; it also assumes a continuous and differentiable function
- g is the gradient, I is the identity matrix, λ is some nonnegative value and H is the Hessian matrix
- η is the step size as defined before
$$ \theta_{new} = \theta_{old} - \eta \left( H + \lambda I \right)^{-1} g $$
$$ H(\theta) = \nabla^2 E(\theta) = \left[ \frac{\partial^2 E(\theta)}{\partial \theta_1^2}\ \ \frac{\partial^2 E(\theta)}{\partial \theta_2^2}\ \ \ldots\ \ \frac{\partial^2 E(\theta)}{\partial \theta_n^2} \right]^T $$
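The LM update can be sketched with a diagonal Hessian, matching the second-derivative vector on the slide; the quadratic test objective is invented for illustration:

```python
def lm_step(theta, grad, hess_diag, lam=1.0, eta=1.0):
    """theta_new = theta_old - eta * (H + lam*I)^(-1) * g, with H diagonal."""
    return [t - eta * g / (h + lam)
            for t, g, h in zip(theta, grad(theta), hess_diag(theta))]

# Toy objective E = 2*(t0 - 3)^2 + (t1 + 1)^2, so H = diag(4, 2), minimum at (3, -1).
grad_E = lambda th: [4.0 * (th[0] - 3.0), 2.0 * (th[1] + 1.0)]
hess_E = lambda th: [4.0, 2.0]

theta = [0.0, 0.0]
for _ in range(50):
    theta = lm_step(theta, grad_E, hess_E, lam=1.0)
```

Large λ makes the step behave like a small gradient-descent step; λ near zero recovers the Newton step, which is exactly the trade-off the contour figure illustrates.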
Derivative-Based Optimization
[Figure: function contours with tangent, showing the steepest-descent, Newton and Levenberg-Marquardt step directions; the LM direction moves between the other two as λ increases or decreases.]