MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization


Page 1: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

MAE 552 Heuristic Optimization

Instructor: John Eddy

Lecture #35

4/26/02

Multi-Objective Optimization

Page 2: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

References:

Das, I., and Dennis, J., A Closer Look at Drawbacks of Minimizing Weighted Sums of Objectives for Pareto Set Generation in Multicriteria Optimization Problems. Can be found at http://ublib.buffalo.edu/.

Chen, W., Wiecek, M., and Zhang, J., Quality Utility – A Compromise Programming Approach to Robust Design, ASME Journal of Mechanical Design, 1999, Vol. 121, pp. 179-187.

Page 3: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

• All of the problems that we have considered in this class, as well as in MAE 550, have consisted of a single objective function with perhaps multiple constraints and design variables.

Minimize $F(x)$

Subject to: $g(x) \le 0, \dots$
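For instance, here is a minimal sketch of such a single-objective, constrained problem in Python with scipy.optimize.minimize; the particular F, g, and starting point are hypothetical placeholders, not anything from the lecture.

```python
# Minimal single-objective sketch: minimize F(x) subject to g(x) <= 0.
# The specific F and g below are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize

def F(x):
    # example objective: a quadratic bowl centered at (3, -1)
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

def g(x):
    # example constraint g(x) <= 0: stay inside the unit disk
    return x[0] ** 2 + x[1] ** 2 - 1.0

# SciPy expects inequality constraints in the form c(x) >= 0, so pass -g(x).
constraints = [{"type": "ineq", "fun": lambda x: -g(x)}]

result = minimize(F, x0=np.array([0.0, 0.0]), constraints=constraints)
print(result.x, result.fun)   # constrained minimizer and objective value
```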

Page 4: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

In such a case, the problem has a one-dimensional performance space, and the optimum point is the one farthest toward the desired extreme.

[Figure: a one-dimensional performance axis F, running from − through 0 to +, with the optimum at the desired extreme.]

Page 5: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

• What happens when it is necessary (or at least desirable) to optimize with respect to more than one criterion?

• Now we have additional dimensions in our performance space and we are seeking the best we can get for all dimensions simultaneously.

• What does "best in all dimensions" mean?

Page 6: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

Consider the following 2D performance space:

[Figure: a 2-D performance space with axes F1 and F2 (minimize both); the optimum point is marked.]

Page 7: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

But what happens in a case like this:

[Figure: the F1-F2 performance space (minimize both) with two candidate points, each labeled "Optimum?".]

Page 8: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

The one on the left is better with respect to F1 but worse with respect to F2.

And the one on the right is better with respect to F2 but worse with respect to F1.

How does one wind up in such peril?

Page 9: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

That depends on the relationships that exist between the various objectives.

There are 3 possible interactions that may exist between objectives in a multi-objective optimization problem:

1. Cooperation

2. Competition

3. No Relationship

Page 10: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

What defines a relationship between objectives? How can I recognize that two objectives have any relationship at all?

The relationship between two objectives is defined by the variables that they have in common.

Two objectives will fight for control of common design variables throughout a multi-objective design optimization process.

Page 11: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

Just how vicious the fight is depends on what type of interaction exists (of the 3 we mentioned).

Let’s consider the 1st case of cooperation.

Two objectives are said to “cooperate” if they both wish to drive all their common variables in the same direction (pretty much all the time).

In such a case, betterment of one objective typically accompanies betterment of the other.

Page 12: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

In such a case, the optimum is a single point (or collection of equally desirable points) like in our first performance plot.

[Figure: the F1-F2 performance space (minimize both) with the single optimum marked, as in the first performance plot.]

Page 13: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

Now let’s consider the 2nd case of competition.

Two objectives are said to “compete” if they wish to drive at least some of their common variables in different directions.

In such a case, betterment of one objective typically comes at the expense of the other.

This is the most interesting case.

Page 14: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

In such a case, the optimum is no longer a single point but a collection of points called the Pareto Set.

Named for Vilfredo Pareto (1848-1923), an Italian economist and sociologist.

He established the concept now known as “Pareto Optimality”.

Page 15: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

• Pareto optimality: an optimality criterion for optimization problems with multiple objectives. A state (set of parameters) is said to be Pareto optimal if there is no other state dominating it with respect to the set of objective functions.

– State A dominates state B if A is better than B in at least one objective function and not worse with respect to all other objective functions.
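To make the definition concrete, here is a short sketch (not from the slides) of the dominance test and a brute-force non-dominated filter, assuming every objective is to be minimized.

```python
# Sketch of Pareto dominance and a non-dominated filter (minimization).
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """A dominates B: no worse in every objective, strictly better in at least one."""
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def pareto_set(points: List[Sequence[float]]) -> List[Sequence[float]]:
    """Brute-force O(n^2) filter returning the non-dominated points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# (1, 5) and (3, 2) are mutually non-dominated; (4, 6) is dominated by (3, 2).
print(pareto_set([(1, 5), (3, 2), (4, 6)]))   # -> [(1, 5), (3, 2)]
```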

Page 16: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

So let’s take a look at this:

[Figure: the F1-F2 performance space (minimize both).]

Page 17: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

For completeness, we will now consider the case in which there is no relationship between two objectives.

When do you think such a thing might occur?

Clearly, this only occurs when the two objectives have no design variables in common (each is a function of a different subset of the design variables, and the two subsets have an empty intersection).

Page 18: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

In such a case, we are free to optimize each function individually to determine our optimal design configuration.

That is why this case is desirable but uninteresting.
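For example, here is a tiny sketch (with hypothetical, unrelated objectives, not from the slides) of optimizing each function individually:

```python
# When the objectives share no design variables, minimize each one on its own
# and assemble the results into a single design (objectives are placeholders).
from scipy.optimize import minimize_scalar

F1 = lambda x1: (x1 - 2.0) ** 2   # depends only on x1
F2 = lambda x2: (x2 + 3.0) ** 2   # depends only on x2

x1_opt = minimize_scalar(F1).x    # optimize F1 over x1 alone
x2_opt = minimize_scalar(F2).x    # optimize F2 over x2 alone
print(x1_opt, x2_opt)             # the combined design (x1*, x2*) is optimal for both
```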

So back to competing objectives.

Page 19: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

Now that we know what we are looking for, that is, the set of non-dominated designs, how are we going to go about generating it?

The most common way to generate points along a Pareto frontier is to use a weighted sum approach.

Consider the following example:

Page 20: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

Suppose I wish to minimize both of the following functions simultaneously:

$F_1 = 750\,x_1 + 60\,(25 - x_1)\,x_2 + 45\,(25 - x_1)(25 - x_2)$

$F_2 = (25 - x_1)\,x_2$

For the typical weighted sum approach, I would assign a weight to each function such that:

$w_1 + w_2 = 1 \quad \text{and} \quad w_1, w_2 \ge 0$

Page 21: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

I would then combine the two functions into a single function as follows and solve:

$F_T = w_1 F_1 + w_2 F_2 = \sum_i w_i F_i$

Page 22: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

The net effect of our weighted sum approach is to convert a multiple objective problem into a single objective problem.

But this will only provide us with a single Pareto point. How will we go about finding other Pareto points?

By altering the weights and solving again.
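Here is a minimal sketch of that weight-and-solve loop in Python; the two objectives are hypothetical placeholders chosen to have a clear trade-off, and scipy.optimize.minimize stands in for whatever single-objective solver is actually used.

```python
# Weight-sweep sketch: combine the objectives, solve, change the weights, repeat.
# The objectives below are hypothetical placeholders, not the lecture's example.
import numpy as np
from scipy.optimize import minimize

def F1(x):
    return (x[0] - 1.0) ** 2 + x[1] ** 2        # pulls the design toward (1, 0)

def F2(x):
    return x[0] ** 2 + (x[1] - 1.0) ** 2        # pulls the design toward (0, 1)

pareto_points = []
for w1 in np.linspace(0.0, 1.0, 11):            # w1 + w2 = 1, w1, w2 >= 0
    w2 = 1.0 - w1
    combined = lambda x, w1=w1, w2=w2: w1 * F1(x) + w2 * F2(x)
    res = minimize(combined, x0=[0.5, 0.5])     # solve the single-objective problem
    pareto_points.append((F1(res.x), F2(res.x)))  # record the Pareto point found

for f1, f2 in pareto_points:
    print(f"F1 = {f1:.3f}   F2 = {f2:.3f}")
```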

Page 23: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

As mentioned, such schemes are very common in multi-objective optimization.

In fact, in a 1997 paper, Das and Dennis made the claim that all common methods of generating Pareto points involve repeatedly converting the multi-objective problem into a single-objective problem and solving it.

Page 24: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

Ok, so I march up and down my weights generating Pareto points and then I’ve got a good representation of my set.

Unfortunately not. As it turns out, it is seldom this easy. There are a number of pitfalls associated with using weighted sums to generate Pareto points.

Page 25: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

Some of those pitfalls are:

• Inability to generate points in non-convex portions of the frontier

• Inability to generate a uniform sampling of the frontier

• A non-intuitive relationship between the combination parameters (weights, etc.) and the resulting performances

• Poor efficiency (can require an excessive number of function evaluations).

Page 26: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

Let’s consider the 1st pitfall:

What is a non-convex portion of the frontier?

I assume you are all familiar with the concept of convexity so let’s move on to a pictorial.

Page 27: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

[Figure: the F1-F2 performance space (minimize both); a non-convex region of the frontier is indicated.]

Page 28: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

Ok so why do weighted sum approaches have difficulty finding these points?

As discussed in reference 1, choosing the weights in the manner that we have can be shown to be equivalent to rotating the performance axes by an angle determined by the weights and then translating the rotated axes until they first touch the frontier; that point of first contact is the solution returned for that set of weights.

The effect of this on a convex frontier can be visualized as follows.

Page 29: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

[Figure: the F1-F2 performance space (minimize both), showing the rotated axes translating until they touch the convex frontier.]

Page 30: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

So I think that you can see already what is going to happen when the frontier is not convex.

Consider the following animation.

Page 31: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

[Animation: the F1-F2 performance space (minimize both); as the weights vary, the translated axes touch only the convex portions of the frontier and skip over the non-convex region.]

Page 32: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

So we missed all the points in the non-convex region.

This also demonstrates one reason why we may not get a uniform sampling of the Pareto frontier.

As it turns out, a uniform sampling is only possible in this way for a Pareto set having a very specific shape. So not even all convex Pareto sets can be sampled uniformly in this fashion. You can read more about this in reference 1.
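As a small illustration of the first two pitfalls (synthetic data, not from the lecture), the sketch below samples a decreasing but non-convex frontier F2 = 1 - F1^2 and sweeps the weights; no choice of weights ever selects an interior point of that frontier, only the endpoints.

```python
# Sketch: on a non-convex frontier, the minimizer of w1*F1 + w2*F2 is always
# an endpoint, so interior Pareto points are unreachable by any weights.
import numpy as np

f1 = np.linspace(0.0, 1.0, 401)
f2 = 1.0 - f1 ** 2                 # decreasing but non-convex trade-off curve

selected = set()
for w1 in np.linspace(0.0, 1.0, 201):   # sweep the weights, w2 = 1 - w1
    w2 = 1.0 - w1
    selected.add(int(np.argmin(w1 * f1 + w2 * f2)))

print(sorted(f1[i] for i in selected))  # only the endpoints 0.0 and 1.0 appear
```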

Page 33: MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #35 4/26/02 Multi-Objective Optimization

Multi-Objective Optimization

Clearly, if we cannot generate a uniform sampling and we cannot find non-convex regions, then the relationship between changes in weights and motion along the frontier is non-intuitive.

Finally, since each combination of weights requires a complete optimization of our system, you can see how this may result in a great many system evaluations.