optimization methods (lecture india)
TRANSCRIPT
7/28/2019 Optimization Methods (Lecture India)
Optimization Methods: Introduction and Basic Concepts - Learning Objectives
Module 1: Introduction and Basic Concepts
Learning Objectives
Optimization is the act of obtaining the best result under the given circumstances. In the design, construction, and maintenance of any engineering system, many technological and managerial decisions have to be taken at several stages. The ultimate goal of all such decisions is either to minimize the effort required or to maximize the desired benefit. Hence optimization can be defined as the process of finding the conditions that give the minimum or maximum value of a function, where the function represents the effort required or the desired benefit.
This module starts with a glance through the historical development of optimization methods. Engineering applications of optimization are then surveyed, from which one gets a broad picture of the multitude of applications that optimization methods have. The art of modeling is briefly explained, along with the various phases involved in modeling. In the second lecture, the various components of an optimization problem are discussed and summarized with the steps involved in formulating a mathematical programming problem. In the third lecture, optimization problems are classified under various criteria to enable choosing an appropriate model applicable to different types of optimization problems. In the final lecture, a brief introduction is given to the classical and advanced optimization techniques in use.
At the end of the module the reader will be able to
1. Understand the need and origin of optimization methods.
2. Get a broader picture of the various applications of optimization methods used in engineering.
3. Define an optimization problem and its various components.
4. Formulate optimization problems as mathematical programming problems.
5. Classify optimization problems to suitably choose the method needed to solve a particular type of problem.
6. Briefly learn about classical and advanced techniques in optimization.
D Nagesh Kumar, IISc, Bangalore M1LO
Module 1 Lecture Notes 1
Historical Development and Model Building
Introduction
In this lecture, the historical development of optimization methods is reviewed. Apart from the major developments, some recently developed novel approaches, such as goal programming for multi-objective optimization, simulated annealing, genetic algorithms, and neural network methods, are briefly mentioned, tracing their origins. Engineering applications of optimization with different modeling approaches are surveyed, from which one gets a broad picture of the multitude of applications of optimization techniques.
Historical Development
The existence of optimization methods can be traced to the days of Newton, Lagrange, and Cauchy. The development of differential calculus methods for optimization was possible because of the contributions of Newton and Leibnitz to calculus. The foundations of the calculus of variations, which deals with the minimization of functionals, were laid by Bernoulli, Euler, Lagrange, and Weierstrass. The method of optimization for constrained problems, which involves the addition of unknown multipliers, became known by the name of its inventor, Lagrange. Cauchy made the first application of the steepest descent method to solve unconstrained optimization problems. By the middle of the twentieth century, high-speed digital computers made implementation of complex optimization procedures possible and stimulated further research on newer methods. Spectacular advances followed, producing a massive literature on optimization techniques. This advancement also resulted in the emergence of several well-defined new areas in optimization theory.
Some of the major developments in the area of numerical methods of unconstrained
optimization are outlined here with a few milestones.
- Development of the simplex method by Dantzig in 1947 for linear programming problems.
- The enunciation of the principle of optimality in 1957 by Bellman for dynamic programming problems.
- Work by Kuhn and Tucker in 1951 on the necessary and sufficient conditions for the optimal solution of programming problems laid the foundation for later research in nonlinear programming.
- The contributions of Zoutendijk and Rosen to nonlinear programming during the early 1960s have been very significant.
- Work by Carroll, and by Fiacco and McCormick, enabled many difficult problems to be solved using the well-known techniques of unconstrained optimization.
- Geometric programming was developed in the 1960s by Duffin, Zener, and Peterson.
- Gomory did pioneering work in integer programming, one of the most exciting and rapidly developing areas of optimization, since most real-world applications fall into this category of problems.
- Dantzig, and Charnes and Cooper, developed stochastic programming techniques and solved problems by assuming design parameters to be independent and normally distributed.
The necessity to optimize more than one objective or goal while satisfying the physical limitations led to the development of multi-objective programming methods. Goal programming is a well-known technique for solving specific types of multi-objective optimization problems. Goal programming was originally proposed for linear problems by Charnes and Cooper in 1961. The foundation of game theory was laid by von Neumann in 1928, and since then the technique has been applied to solve several mathematical, economic, and military problems. Only during the last few years has game theory been applied to solve engineering problems.
Simulated annealing, genetic algorithms, and neural network methods represent a new class
of mathematical programming techniques that have come into prominence during the last
decade. Simulated annealing is analogous to the physical process of annealing of metals and
glass. The genetic algorithms are search techniques based on the mechanics of natural
selection and natural genetics. Neural network methods are based on solving the problem
using the computing power of a network of interconnected neuron processors.
Engineering applications of optimization
To indicate the widespread scope of the subject, some typical applications in different
engineering disciplines are given below.
- Design of civil engineering structures such as frames, foundations, bridges, towers, chimneys, and dams for minimum cost.
- Design of minimum-weight structures for earthquake, wind, and other types of random loading.
- Optimal plastic design of frame structures (e.g., to determine the ultimate moment capacity for minimum weight of the frame).
- Design of water resources systems for obtaining maximum benefit.
- Design of optimum pipeline networks for the process industry.
- Design of aircraft and aerospace structures for minimum weight.
- Finding the optimal trajectories of space vehicles.
- Optimum design of linkages, cams, gears, machine tools, and other mechanical components.
- Selection of machining conditions in metal-cutting processes to minimize product cost.
- Design of material handling equipment such as conveyors, trucks, and cranes for minimum cost.
- Design of pumps, turbines, and heat transfer equipment for maximum efficiency.
- Optimum design of electrical machinery such as motors, generators, and transformers.
- Optimum design of electrical networks.
- Optimum design of control systems.
- Optimum design of chemical processing equipment and plants.
- Selection of a site for an industry.
- Planning of maintenance and replacement of equipment to reduce operating costs.
- Inventory control.
- Allocation of resources or services among several activities to maximize the benefit.
- Controlling the waiting and idle times in production lines to reduce the cost of production.
- Planning the best strategy to obtain maximum profit in the presence of a competitor.
- Designing the shortest route to be taken by a salesperson to visit various cities in a single tour.
- Optimal production planning, controlling, and scheduling.
- Analysis of statistical data and building empirical models to obtain the most accurate representation of the statistical phenomenon.
Even so, this list is far from complete.
Art of Modeling: Model Building
Development of an optimization model can be divided into five major phases.
- Data collection
- Problem definition and formulation
- Model development
- Model validation and evaluation of performance
- Model application and interpretation
Data collection may be time consuming but is the fundamental basis of the model-building
process. The availability and accuracy of data can have considerable effect on the accuracy of
the model and on the ability to evaluate the model.
The problem definition and formulation phase includes these steps: identification of the decision variables, formulation of the model objective(s), and formulation of the model constraints. In performing these steps, the following should be considered:
- Identify the important elements of the problem.
- Determine the number of independent variables, the number of equations required to describe the system, and the number of unknown parameters.
- Evaluate the structure and complexity of the model.
- Select the degree of accuracy required of the model.
Model development includes the mathematical description, parameter estimation, input
development, and software development. The model development phase is an iterative
process that may require returning to the model definition and formulation phase.
The model validation and evaluation phase checks the performance of the model as a whole. Model validation consists of validation of the assumptions and parameters of the
model. The performance of the model should be evaluated using standard performance measures such as the root mean squared error and the R² value. A sensitivity analysis should be performed to test the sensitivity of the model to its inputs and parameters. This phase is also an iterative process and
may require returning to the model definition and formulation phase. One important aspect of
this process is that in most cases data used in the formulation process should be different
from that used in validation. Another point to keep in mind is that no single validation
process is appropriate for all models.
Model application and implementation includes the use of the model in the particular problem area and the translation of the results into operating instructions, issued in an understandable form to the individuals who will administer the recommended system.
Different modeling techniques are developed to meet the requirements of different types of
optimization problems. Major categories of modeling approaches are: classical optimization
techniques, linear programming, nonlinear programming, geometric programming, dynamic
programming, integer programming, stochastic programming, evolutionary algorithms, etc.
These modeling approaches will be discussed in subsequent modules of this course.
Module 1 Lecture Notes 2
Optimization Problem and Model Formulation
Introduction
In the previous lecture we studied the evolution of optimization methods and their
engineering applications. A brief introduction was also given to the art of modeling. In this
lecture we will study the optimization problem, its various components, and its formulation as
a mathematical programming problem.
Basic components of an optimization problem:
An objective function expresses the main aim of the model, which is either to be minimized or maximized. For example, in a manufacturing process, the aim may be to maximize the profit or minimize the cost. In comparing the data predicted by a user-defined model with the observed data, the aim is to minimize the total deviation of the model predictions from the observed data. In designing a bridge pier, the goal is to maximize the strength and minimize the size.
A set of unknowns or variables controls the value of the objective function. In the manufacturing problem, the variables may include the amounts of different resources used or the time spent on each activity. In the data-fitting problem, the unknowns are the parameters of the model. In the pier design problem, the variables are the shape and dimensions of the pier.
A set of constraints allows the unknowns to take on certain values but excludes others. In the manufacturing problem, one cannot spend a negative amount of time on any activity, so one constraint is that the "time" variables be non-negative. In the pier design problem, one would probably want to limit the breadth of the base and to constrain its size.
The optimization problem is then to find values of the variables that minimize or maximize
the objective function while satisfying the constraints.
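To make these three components concrete, here is a minimal sketch in Python. The manufacturing numbers (profits and resource limits) are invented for illustration: the objective, the variables, and the constraints are written down directly, and the best feasible plan is found by exhaustive search over a small integer grid.

```python
# Hypothetical manufacturing problem: choose integer production amounts
# a and b of two products to maximize profit, subject to resource limits.
# All coefficients below are invented for illustration.

def objective(a, b):
    # Profit to be maximized.
    return 40 * a + 30 * b

def feasible(a, b):
    # Constraints: material a + b <= 8, labor 2a + b <= 10, non-negativity.
    return a >= 0 and b >= 0 and a + b <= 8 and 2 * a + b <= 10

# Exhaustive search over the (small) set of candidate solutions.
best = max(
    (objective(a, b), a, b)
    for a in range(11) for b in range(11)
    if feasible(a, b)
)

profit, a, b = best
print(profit, a, b)  # the best feasible production plan
```

Exhaustive search is only workable for toy problems like this one; the rest of the course is about methods that scale beyond it.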
Objective Function
As already stated, the objective function is the mathematical function one wants to maximize
or minimize, subject to certain constraints. Many optimization problems have a single
objective function. (When they do not, they can often be reformulated so that they do.) The two exceptions are:
- No objective function. In some cases (for example, design of integrated circuit layouts), the goal is to find a set of variables that satisfies the constraints of the model. The user does not particularly want to optimize anything, and so there is no reason to define an objective function. This type of problem is usually called a feasibility problem.
- Multiple objective functions. In some cases, the user may like to optimize a number of different objectives concurrently. For instance, in the optimal design of a panel of a door or window, it would be good to minimize weight and maximize strength simultaneously. Usually, the different objectives are not compatible; the variables that optimize one objective may be far from optimal for the others. In practice, problems with multiple objectives are reformulated as single-objective problems either by forming a weighted combination of the different objectives or by treating some of the objectives as constraints.
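The weighted-combination idea can be sketched as follows. The panel models (weight growing with thickness, strength saturating) and the weights are invented for this sketch; the point is only how two competing objectives collapse into one.

```python
# Hypothetical panel design: thickness t trades off weight (to be
# minimized) against strength (to be maximized). Both models below
# are invented for illustration.

def weight(t):
    return 2.0 * t           # weight grows with thickness

def strength(t):
    return 10.0 * t - t**2   # strength saturates, then drops

def scalarized(t, w1=1.0, w2=0.5):
    # Weighted single objective: minimize weight, maximize strength.
    # Maximizing strength is the same as minimizing its negative.
    return w1 * weight(t) - w2 * strength(t)

# Crude one-dimensional search over candidate thicknesses 0.1 .. 10.0.
candidates = [i / 10 for i in range(1, 101)]
t_best = min(candidates, key=scalarized)
print(t_best)
```

Changing the weights w1 and w2 moves the compromise between the two objectives, which is exactly why the choice of weights is a modeling decision, not a mathematical one.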
Statement of an optimization problem
An optimization or a mathematical programming problem can be stated as follows:

Find $X = (x_1, x_2, \ldots, x_n)^T$ which minimizes $f(X)$   (1.1)

subject to the constraints

$g_i(X) \le 0$,  $i = 1, 2, \ldots, m$

$l_j(X) = 0$,  $j = 1, 2, \ldots, p$

where $X$ is an n-dimensional vector called the design vector, $f(X)$ is called the objective function, and $g_i(X)$ and $l_j(X)$ are known as inequality and equality constraints, respectively. The number of variables n and the number of constraints m and/or p need not be related in any way. This type of problem is called a constrained optimization problem.
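A small instance of problem (1.1) can be solved numerically in many ways; the sketch below (not a method from these notes) uses a quadratic penalty plus plain gradient descent. The specific problem, penalty schedule, and step sizes are invented for illustration.

```python
# Sketch: solve  minimize f(x, y) = (x - 2)^2 + (y - 1)^2
#         subject to g(x, y) = x + y - 2 <= 0
# by minimizing f + mu * max(0, g)^2 for increasing mu.
# The constrained optimum of this toy problem is (1.5, 0.5).

def grad_penalized(x, y, mu):
    # Gradient of f(x, y) + mu * max(0, g(x, y))^2.
    gx, gy = 2.0 * (x - 2.0), 2.0 * (y - 1.0)
    viol = max(0.0, x + y - 2.0)   # constraint violation, zero if feasible
    gx += 2.0 * mu * viol
    gy += 2.0 * mu * viol
    return gx, gy

x, y = 0.0, 0.0
for mu in (1.0, 10.0, 100.0, 1000.0):   # tighten the penalty gradually
    step = 1.0 / (2.0 + 2.0 * mu)       # keeps plain gradient descent stable
    for _ in range(20000):
        gx, gy = grad_penalized(x, y, mu)
        x -= step * gx
        y -= step * gy

print(x, y)  # approaches the constrained optimum (1.5, 0.5)
```

Note the bias of the penalty method: for any finite mu the iterate sits slightly outside the feasible region, which is why mu is driven upward in stages.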
If the locus of all points satisfying f(X) = c (a constant) is considered, it forms a family of surfaces in the design space called the objective function surfaces. When these are drawn together with the constraint surfaces, as shown in Fig. 1, the optimum point (here a maximum) can be identified. This is possible graphically only when the number of design variables is two. With three or more design variables, because of the complexity of the objective function surface, the problem has to be solved mathematically and this visualization is not possible.
[Fig. 1: Contours of the objective function f = C1, C2, C3, C4, C5, ..., with C1 > C2 > C3 > C4 > ... > Cn, drawn together with the constraint surfaces; the optimum point is marked where the best contour touches the feasible region.]
Optimization problems can be defined without any constraints as well.
Find $X = (x_1, x_2, \ldots, x_n)^T$ which minimizes $f(X)$   (1.2)
Such problems are called unconstrained optimization problems. The field of unconstrained
optimization is quite a large and prominent one, for which a lot of algorithms and software
are available.
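As one concrete unconstrained technique, here is a minimal golden-section search, a derivative-free method for minimizing a unimodal function of one variable. The test function is invented for the sketch.

```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Minimize a unimodal function f on the interval [a, b]."""
    inv_phi = (math.sqrt(5) - 1) / 2          # 1/phi, about 0.618
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        # Keep the sub-interval that must contain the minimum.
        # (f is re-evaluated each pass for clarity; production code
        # would cache the function values.)
        if f(c) < f(d):
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

x_star = golden_section(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
print(x_star)  # converges to 2.0, the minimizer
```

Each iteration shrinks the bracketing interval by the constant factor 0.618, so the cost is logarithmic in the desired accuracy.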
Variables
These are essential. If there are no variables, we cannot define the objective function and the problem constraints. In many practical problems, one cannot choose the design variables arbitrarily; they have to satisfy certain specified functional and other requirements.
Constraints
Constraints are not essential. It's been argued that almost all problems really do have
constraints. For example, any variable denoting the "number of objects" in a system can only
be useful if it is less than the number of elementary particles in the known universe! In
practice though, answers that make good sense in terms of the underlying physical or
economic criteria can often be obtained without putting constraints on the variables.
Design constraints are restrictions that must be satisfied to produce an acceptable design.
Constraints can be broadly classified as:
1) Behavioral or functional constraints: these represent limitations on the behavior or performance of the system.
2) Geometric or side constraints: these represent physical limitations on design variables, such as availability, fabricability, and transportability.
For example, for the retaining wall design shown in the Fig 2, the base width W cannot be
taken smaller than a certain value due to stability requirements. The depth D below the
ground level depends on the soil pressure coefficients Ka and Kp. Since these constraints
depend on the performance of the retaining wall they are called behavioral constraints. The
number of anchors provided along a cross section, Ni, cannot be any real number but has to be a whole number. Similarly, the thickness of reinforcement used is controlled by supplies from the manufacturer. Hence these are side constraints.
[Fig. 2: Retaining wall with base width W, depth D below ground level, and Ni anchors along a cross section.]
Constraint Surfaces
Consider the optimization problem presented in Eq. (1.1) with only the inequality constraints $g_i(X) \le 0$. The set of values of X that satisfy the equation $g_i(X) = 0$ forms a boundary surface in the design space called a constraint surface. This will be an (n-1)-dimensional subspace, where n is the number of design variables. The constraint surface divides the design space into two regions: one with $g_i(X) < 0$ (feasible region) and the other in which $g_i(X) > 0$ (infeasible region). The points lying on the hypersurface satisfy $g_i(X) = 0$. The collection of all the constraint surfaces $g_i(X) = 0$, $i = 1, 2, \ldots, m$, which separates the acceptable region, is called the composite constraint surface.
Fig 3 shows a hypothetical two-dimensional design space where the feasible region is
denoted by hatched lines. The two-dimensional design space is bounded by straight lines as
shown in the figure. This is the case when the constraints are linear. However, constraints
may be nonlinear as well and the design space will be bounded by curves in that case. A
design point that lies on more than one constraint surface is called a bound point, and the
associated constraint is called an active constraint. Free points are those that do not lie on any
constraint surface. The design points that lie in the acceptable or unacceptable regions can be
classified as follows:
1. Free and acceptable point
2. Free and unacceptable point
3. Bound and acceptable point
4. Bound and unacceptable point
Examples of each case are shown in Fig. 3.
[Fig. 3: A two-dimensional design space bounded by behavior constraints g1 <= 0 and g2 <= 0 and side constraint g3 <= 0, with the feasible region hatched; example points are marked for each case: free acceptable, free unacceptable, bound acceptable, and bound unacceptable.]
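The four categories of design points can be sketched directly in code. The two constraint functions below are invented for illustration; the classification logic (on a surface vs. off it, all constraints satisfied vs. some violated) is the one described above.

```python
# Classify design points against inequality constraints g_i(X) <= 0.

TOL = 1e-9   # tolerance for "lies on a constraint surface"

def classify(x, constraints):
    vals = [g(x) for g in constraints]
    bound = any(abs(v) <= TOL for v in vals)    # on some surface g_i = 0
    acceptable = all(v <= TOL for v in vals)    # inside the feasible region
    return ("bound" if bound else "free") + ", " + \
           ("acceptable" if acceptable else "unacceptable")

g1 = lambda x: x[0] + x[1] - 2.0   # g1(X) <= 0
g2 = lambda x: -x[0]               # g2(X) <= 0, i.e. x[0] >= 0

print(classify((0.5, 0.5), [g1, g2]))  # free, acceptable
print(classify((1.0, 1.0), [g1, g2]))  # bound, acceptable (g1 = 0 here)
print(classify((3.0, 3.0), [g1, g2]))  # free, unacceptable (g1 > 0)
```

The tolerance is needed because with floating-point arithmetic a point is essentially never exactly on a surface.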
Formulation of design problems as mathematical programming problems
In mathematics, the term optimization, or mathematical programming, refers to the study of problems in which one seeks to minimize or maximize a real function by systematically choosing the values of real or integer variables from within an allowed set. This problem can be represented in the following way:

Given: a function $f: A \to \mathbb{R}$ from some set A to the real numbers.

Sought: an element $x_0$ in A such that $f(x_0) \le f(x)$ for all x in A ("minimization"), or such that $f(x_0) \ge f(x)$ for all x in A ("maximization").
Such a formulation is called an optimization problem or a mathematical programming
problem (a term not directly related to computer programming, but still in use for example,
in linear programming (see module 3)). Many real-world and theoretical problems may be
modeled in this general framework.
Typically, A is some subset of the Euclidean space $\mathbb{R}^n$, often specified by a set of constraints, equalities, or inequalities that the members of A have to satisfy. The elements of A are called candidate solutions or feasible solutions. The function f is called an objective function or cost function. A feasible solution that minimizes (or maximizes, if that is the goal) the objective function is called an optimal solution. The domain A of f is called the search space.
Generally, when the feasible region or the objective function of the problem does not present convexity (refer to Module 2), there may be several local minima and maxima, where a local minimum $x^*$ is defined as a point for which there exists some $\delta > 0$ so that for all $x$ such that

$\|x - x^*\| \le \delta$

we have

$f(x^*) \le f(x)$;

that is to say, on some region around $x^*$ all the function values are greater than or equal to the value at that point. Local maxima are defined similarly.
A large number of the algorithms proposed for solving non-convex problems, including the majority of commercially available solvers, are not capable of distinguishing between locally optimal solutions and rigorously (globally) optimal solutions, and will treat the former as actual solutions to the original problem. The branch of applied mathematics and numerical analysis that is concerned with the development of deterministic algorithms capable of guaranteeing convergence in finite time to the actual optimal solution of a non-convex problem is called global optimization.
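The local/global distinction is easy to demonstrate. The non-convex function below is invented for the sketch: plain gradient descent converges to whichever local minimum the starting point is near, and nothing in the method reveals that one of the two minima is better.

```python
# A non-convex function with two local minima, near x = -1.02 and
# x = +0.97; the left one is the global minimum.

def f(x):
    return (x * x - 1.0) ** 2 + 0.2 * x

def fprime(x):
    return 4.0 * x * (x * x - 1.0) + 0.2

def descend(x, step=1e-3, iters=50000):
    # Plain gradient descent: follows the slope to the nearest minimum.
    for _ in range(iters):
        x -= step * fprime(x)
    return x

left = descend(-2.0)    # ends near x = -1.02
right = descend(+2.0)   # ends near x = +0.97
print(left, right, f(left) < f(right))
```

Both runs terminate at perfectly valid local minima; only by comparing f(left) and f(right) from outside does one learn that the right-hand result is not globally optimal. Global optimization methods exist precisely to provide that comparison with a guarantee.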
Problem formulation
Problem formulation is normally the most difficult part of the process. It is the selection of
design variables, constraints, objective function(s), and models of the discipline/design.
Selection of design variables
A design variable, which takes a numeric or binary value, is controllable from the point of view of the designer. For instance, the thickness of a structural member can be considered a design variable. Design variables can be continuous (such as the length of a cantilever beam),
discrete (such as the number of reinforcement bars used in a beam), or Boolean. Design
problems with continuous variables are normally solved more easily.
Design variables are often bounded, that is, they have maximum and minimum values.
Depending on the adopted method, these bounds can be treated as constraints or separately.
Selection of constraints
A constraint is a condition that must be satisfied for the design to be feasible. An
example of a constraint in beam design is that the resistance offered by the beam at points of
loading must be equal to or greater than the weight of structural member and the load
supported. In addition to physical laws, constraints can reflect resource limitations, user
requirements, or bounds on the validity of the analysis models. Constraints can be used
explicitly by the solution algorithm or can be incorporated into the objective, by using
Lagrange multipliers.
Objectives
An objective is a numerical value that is to be maximized or minimized. For example, a
designer may wish to maximize profit or minimize weight. Many solution methods work only
with single objectives. When using these methods, the designer normally weights the various objectives and sums them to form a single objective. Other methods allow multi-objective optimization (Module 8), such as the calculation of a Pareto front.
Models
The designer has to also choose models to relate the constraints and the objectives to the
design variables. These models are dependent on the discipline involved. They may be
empirical models, such as a regression analysis of aircraft prices, theoretical models, such as
from computational fluid dynamics, or reduced-order models of either of these. In choosing
the models, the designer must trade off fidelity against the time required for analysis.
The multidisciplinary nature of most design problems complicates model choice and implementation. Often several iterations are necessary between the disciplines' analyses in
order to find the values of the objectives and constraints. As an example, the aerodynamic
loads on a bridge affect the structural deformation of the supporting structure. The structural
deformation in turn changes the shape of the bridge and hence the aerodynamic loads. Thus,
it can be considered as a cyclic mechanism. Therefore, in analyzing a bridge, the
aerodynamic and structural analyses must be run a number of times in turn until the loads and
deformation converge.
Representation in standard form
Once the design variables, constraints, objectives, and the relationships between them have been chosen, the problem can be expressed in the form of Eq. (1.1).
Maximization problems can be converted to minimization problems by multiplying the objective by -1. Constraints can be reversed in a similar manner. Equality constraints can be replaced by two inequality constraints.
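These standard-form transformations are mechanical, as the short sketch below shows (the example functions are invented): maximization becomes minimization via negation, and an equality constraint becomes a pair of inequalities.

```python
# Standard-form transformations for optimization problems.

def to_minimization(maximize_f):
    # max f(x) is equivalent to min -f(x).
    return lambda x: -maximize_f(x)

def equality_as_inequalities(h):
    # h(x) = 0 is equivalent to h(x) <= 0 and -h(x) <= 0.
    return [lambda x: h(x), lambda x: -h(x)]

profit = lambda x: 10.0 - (x - 3.0) ** 2   # invented objective, to maximize
cost = to_minimization(profit)             # equivalent objective, to minimize

h = lambda x: x - 3.0                      # equality constraint h(x) = 0
g1, g2 = equality_as_inequalities(h)

print(cost(3.0))                       # -10.0: minimal where profit is maximal
print(g1(3.0) <= 0 and g2(3.0) <= 0)   # True: x = 3 satisfies both inequalities
```

A solver that only handles "minimize subject to g(x) <= 0" can therefore accept any problem stated in the richer form.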
Problem solution
The problem is normally solved by choosing appropriate techniques from those available in the field of optimization. These include gradient-based algorithms, population-based algorithms, and others. Very simple problems can sometimes be expressed linearly; in that case the techniques of linear programming are applicable.
Gradient-based methods
- Newton's method
- Steepest descent
- Conjugate gradient
- Sequential quadratic programming
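As a taste of the gradient-based family, here is Newton's method in one dimension, applied to an invented smooth objective f(x) = x - ln(x), whose minimum is at x = 1. Only the first and second derivatives are needed.

```python
import math  # imported for completeness; ln appears only in f itself

def newton_minimize(fprime, fsecond, x, iters=20):
    # Newton step for minimization: x <- x - f'(x) / f''(x).
    for _ in range(iters):
        x = x - fprime(x) / fsecond(x)
    return x

# Derivatives of the invented objective f(x) = x - ln(x):
fprime = lambda x: 1.0 - 1.0 / x    # f'(x), zero at the minimum x = 1
fsecond = lambda x: 1.0 / (x * x)   # f''(x) > 0, so each step goes downhill

x_star = newton_minimize(fprime, fsecond, x=0.5)
print(x_star)  # converges to 1.0
```

Near the solution the error is squared at every step (quadratic convergence), which is why a handful of iterations suffices here; the price is needing second derivatives, which steepest descent avoids.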
Population-based methods
- Genetic algorithms
- Particle swarm optimization
Other methods
- Random search
- Grid search
- Simulated annealing
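The simplest of these, random search, fits in a few lines: sample candidate points at random and keep the best one seen. The non-convex objective below is invented for illustration; with enough samples the method stumbles onto the global basin that a single gradient descent run can miss.

```python
import random

def random_search(f, lo, hi, n=5000, seed=0):
    # Sample n points uniformly in [lo, hi]; return the best found.
    rng = random.Random(seed)   # seeded for reproducibility
    best_x = rng.uniform(lo, hi)
    best_f = f(best_x)
    for _ in range(n - 1):
        x = rng.uniform(lo, hi)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Invented objective with two local minima; the global one is near x = -1.02.
f = lambda x: (x * x - 1.0) ** 2 + 0.2 * x
x_star, f_star = random_search(f, -2.0, 2.0)
print(x_star, f_star)
```

Random search makes no use of gradients or structure, so it is robust but slow; methods like simulated annealing improve on it by biasing the sampling toward promising regions while still allowing occasional uphill moves.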
Most of these techniques require a large number of evaluations of the objectives and the constraints. The disciplinary models are often very complex and can take a significant amount of time for a single evaluation. The solution can therefore be extremely time-consuming.
Many of the optimization techniques are adaptable to parallel computing. Much of the current
research is focused on methods of decreasing the computation time.
The following steps summarize the general procedure used to formulate and solve
optimization problems. Some problems may not require that the engineer follow the steps in
the exact order, but each of the steps should be considered in the process.
1) Analyze the process itself to identify the process variables and specific characteristics of interest, i.e., make a list of all the variables.
2) Determine the criterion for optimization and specify the objective function in terms of the above variables, together with coefficients.
3) Develop, via mathematical expressions, a valid process model that relates the input and output variables of the process and the associated coefficients. Include both equality and inequality constraints. Use well-known physical principles such as mass balances, energy balances, empirical relations, implicit concepts, and external restrictions. Identify the independent and dependent variables to get the number of degrees of freedom.
4) If the problem formulation is too large in scope, break it up into manageable parts, or simplify the objective function and the model.
5) Apply a suitable optimization technique to the mathematical statement of the problem.
6) Examine the sensitivity of the result to changes in the values of the parameters in the problem and the assumptions.
Module 1 Lecture Notes 3
Classification of Optimization Problems
Introduction
In the previous lecture we studied the basics of an optimization problem and its formulation
as a mathematical programming problem. In this lecture we look at the various criteria for
classification of optimization problems.
Optimization problems can be classified based on the type of constraints, nature of design
variables, physical structure of the problem, nature of the equations involved, deterministic
nature of the variables, permissible value of the design variables, separability of the functions
and number of objective functions. These classifications are briefly discussed below.
Classification based on existence of constraints.
Under this category, optimization problems can be classified into two groups as follows:
Constrained optimization problems: which are subject to one or more constraints.
Unconstrained optimization problems: in which no constraints exist.
Classification based on the nature of the design variables.
There are two broad categories in this classification.
(i) In the first category the objective is to find a set of design parameters that makes a
prescribed function of these parameters minimum or maximum subject to certain constraints.
For example, the problem of finding the minimum-weight design of a strip footing with two loads, shown in Fig. 1 (a), subject to a limitation on the maximum settlement of the structure, can be stated as follows.
Find $X = (b, d)^T$ which minimizes $f(X) = h(b, d)$

subject to the constraints $s(X) \le s_{\max}$; $b \ge 0$; $d \ge 0$,

where s is the settlement of the footing. Such problems are called parameter or static optimization problems.
It may be noted that, for this particular example, the length of the footing (l), the loads P1 and
P2 and the distance between the loads are assumed to be constant and the required
optimization is achieved by varying b and d.
(ii) In the second category of problems, the objective is to find a set of design parameters that are all continuous functions of some other parameter, which minimize an objective function subject to a set of constraints. If the cross-sectional dimensions of the rectangular footing are allowed to vary along its length, as shown in Fig. 1 (b), the optimization problem can be stated as:
Find $X(t) = (b(t), d(t))^T$ which minimizes $f(X) = g(b(t), d(t))$

subject to the constraints

$s(X(t)) \le s_{\max}$,  $0 \le t \le l$

$b(t) \ge 0$,  $0 \le t \le l$

$d(t) \ge 0$,  $0 \le t \le l$
The length of the footing (l), the loads P1 and P2, and the distance between the loads are assumed
to be constant, and the required optimization is achieved by varying b and d along the length l.
Here the design variables are functions of the length parameter t. This type of problem, where
each design variable is a function of one or more parameters, is known as a trajectory or
dynamic optimization problem.
Figure 1: (a) Strip footing of length l with constant cross-section (width b, depth d) carrying loads P1 and P2; (b) footing whose cross-section b(t), d(t) varies along the length parameter t.
Classification based on the physical structure of the problem
Based on the physical structure, optimization problems are classified as optimal control and
non-optimal control problems.
(i) Optimal control problems
An optimal control (OC) problem is a mathematical programming problem involving a
number of stages, where each stage evolves from the preceding stage in a prescribed manner.
It is defined by two types of variables: the control (design) variables and the state variables. The control
variables define the system and control how one stage evolves into the next. The state
variables describe the behavior or status of the system at any stage. The problem is to find a
set of control variables such that the total objective function (also known as the performance
index, PI) over all stages is minimized, subject to a set of constraints on the control and state
variables. An OC problem can be stated as follows:
Find X which minimizes

$f(X) = \sum_{i=1}^{l} f_i(x_i, y_i)$

subject to the constraints

$q_i(x_i, y_i) + y_i = y_{i+1}, \quad i = 1, 2, \ldots, l$
$g_j(x_j) \le 0, \quad j = 1, 2, \ldots, l$
$h_k(y_k) \le 0, \quad k = 1, 2, \ldots, l$

where $x_i$ is the ith control variable, $y_i$ is the ith state variable, and $f_i$ is the contribution of the
ith stage to the total objective function. $g_j$, $h_k$, and $q_i$ are functions of $x_j$, $y_j$; $x_k$, $y_k$; and $x_i$,
$y_i$, respectively, and $l$ is the total number of stages. The control and state variables $x_i$ and $y_i$
can be vectors in some cases.
(ii) Problems which are not optimal control problems are called non-optimal control
problems.
Classification based on the nature of the equations involved
Based on the nature of equations for the objective function and the constraints, optimization
problems can be classified as linear, nonlinear, geometric and quadratic programming
problems. The classification is very useful from a computational point of view since many
predefined special methods are available for effective solution of a particular type of
problem.
(i) Linear programming problem

If the objective function and all the constraints are linear functions of the design variables,
the optimization problem is called a linear programming problem (LPP). A linear
programming problem is often stated in the standard form:

Find $X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$

which maximizes $f(X) = \sum_{i=1}^{n} c_i x_i$

subject to the constraints

$\sum_{i=1}^{n} a_{ij} x_i = b_j, \quad j = 1, 2, \ldots, m$
$x_i \ge 0, \quad i = 1, 2, \ldots, n$

where $c_i$, $a_{ij}$, and $b_j$ are constants.
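The standard form above can be made concrete with a small numerical sketch. The data below (objective coefficients and constraints) are made up for illustration; the sketch exploits the fact that a bounded LP attains its optimum at a vertex of the feasible region, so for two variables we can simply enumerate intersections of constraint boundaries:

```python
from itertools import combinations

# Toy LP (illustrative data): maximize f = 3*x1 + 5*x2
# subject to  x1 <= 4,  2*x2 <= 12,  3*x1 + 2*x2 <= 18,  x1, x2 >= 0.
# Each constraint is written as a*x1 + b*x2 <= rhs.
constraints = [
    (1.0, 0.0, 4.0),
    (0.0, 2.0, 12.0),
    (3.0, 2.0, 18.0),
    (-1.0, 0.0, 0.0),   # x1 >= 0
    (0.0, -1.0, 0.0),   # x2 >= 0
]

def vertices(cons):
    """Intersect every pair of constraint boundaries; keep the feasible points."""
    pts = []
    for (a1, b1, r1), (a2, b2, r2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel boundaries do not intersect
        x1 = (r1 * b2 - r2 * b1) / det
        x2 = (a1 * r2 - a2 * r1) / det
        if all(a * x1 + b * x2 <= r + 1e-9 for a, b, r in cons):
            pts.append((x1, x2))
    return pts

# A bounded LP attains its optimum at a vertex of the feasible region.
best = max(vertices(constraints), key=lambda p: 3 * p[0] + 5 * p[1])
best_value = 3 * best[0] + 5 * best[1]
print(best, best_value)  # optimum at (2.0, 6.0) with f = 36.0
```

Vertex enumeration is only practical for tiny instances; real LPs are solved by the simplex method or interior-point methods, which are introduced in later modules.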
(ii) Nonlinear programming problem

If any of the functions among the objective and constraint functions is nonlinear, the
problem is called a nonlinear programming (NLP) problem. This is the most general form of
a programming problem and all other problems can be considered as special cases of the NLP
problem.
(iii) Geometric programming problem

A geometric programming (GMP) problem is one in which the objective function and
constraints are expressed as polynomials in X. A function h(X) is called a polynomial (with
m terms) if h can be expressed as

$h(X) = c_1 x_1^{a_{11}} x_2^{a_{21}} \cdots x_n^{a_{n1}} + c_2 x_1^{a_{12}} x_2^{a_{22}} \cdots x_n^{a_{n2}} + \cdots + c_m x_1^{a_{1m}} x_2^{a_{2m}} \cdots x_n^{a_{nm}}$
where $c_j$ ($j = 1, \ldots, m$) and $a_{ij}$ ($i = 1, \ldots, n$ and $j = 1, \ldots, m$) are constants with $c_j \ge 0$ and $x_i \ge 0$.

Thus GMP problems can be posed as follows:

Find X which minimizes

$f(X) = \sum_{j=1}^{N_0} c_j \left( \prod_{i=1}^{n} x_i^{a_{ij}} \right), \quad c_j > 0, \; x_i > 0$

subject to

$g_k(X) = \sum_{j=1}^{N_k} a_{jk} \left( \prod_{i=1}^{n} x_i^{q_{ijk}} \right) > 0, \quad a_{jk} > 0, \; x_i > 0, \quad k = 1, 2, \ldots, m$

where $N_0$ and $N_k$ denote the number of terms in the objective function and in the kth constraint
function, respectively.
(iv) Quadratic programming problem
A quadratic programming problem is the best behaved nonlinear programming problem with
a quadratic objective function and linear constraints and is concave (for maximization
problems). It can be solved by suitably modifying the linear programming techniques. It is
usually formulated as follows:

$F(X) = c + \sum_{i=1}^{n} q_i x_i + \sum_{i=1}^{n} \sum_{j=1}^{n} Q_{ij} x_i x_j$

subject to

$\sum_{i=1}^{n} a_{ij} x_i = b_j, \quad j = 1, 2, \ldots, m$
$x_i \ge 0, \quad i = 1, 2, \ldots, n$

where $c$, $q_i$, $Q_{ij}$, $a_{ij}$, and $b_j$ are constants.
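As a concrete illustration (all numbers made up), consider the unconstrained case with a symmetric positive definite Q: setting the gradient $q + 2QX$ to zero gives the minimizer directly. The sketch below solves the resulting 2x2 linear system by Cramer's rule and verifies that the gradient vanishes there:

```python
# Illustrative data for F(X) = c + sum_i q_i x_i + sum_ij Q_ij x_i x_j
# with symmetric positive definite Q and no active constraints.
c = 5.0
q = [-2.0, -6.0]
Q = [[2.0, 0.5],
     [0.5, 1.0]]

# The minimizer solves the linear system 2*Q x = -q (gradient set to zero).
a11, a12 = 2 * Q[0][0], 2 * Q[0][1]
a21, a22 = 2 * Q[1][0], 2 * Q[1][1]
det = a11 * a22 - a12 * a21
x = [(-q[0] * a22 - (-q[1]) * a12) / det,   # Cramer's rule, 2x2 case
     (a11 * (-q[1]) - a21 * (-q[0])) / det]

def F(x):
    """Evaluate the quadratic objective at x."""
    return (c + sum(q[i] * x[i] for i in range(2))
              + sum(Q[i][j] * x[i] * x[j] for i in range(2) for j in range(2)))

def grad(x):
    """Gradient q + 2*Q*x (Q symmetric)."""
    return [q[i] + 2 * sum(Q[i][j] * x[j] for j in range(2)) for i in range(2)]

print(x, F(x), grad(x))  # gradient components should be (numerically) zero
```

With equality or inequality constraints present, a QP is instead solved by active-set or modified simplex techniques, as the text notes.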
Classification based on the permissible values of the decision variables
Under this classification, optimization problems can be classified as integer and real-valued
programming problems.
(i) Integer programming problem

If some or all of the design variables of an optimization problem are restricted to take only
integer (or discrete) values, the problem is called an integer programming problem. For
example, suppose the objective is to find the number of articles needed for an operation with least
effort. With minimization of the effort required for the operation as the objective, the
decision variables, i.e. the numbers of articles used, can take only integer values. Other
restrictions on the minimum and maximum numbers of usable resources may be imposed.
(ii) Real-valued programming problem

A real-valued problem is one in which it is sought to minimize or maximize a real function
by systematically choosing the values of real variables from within an allowed set. When the
allowed set contains only real values, it is called a real-valued programming problem.
Classification based on deterministic nature of the variables
Under this classification, optimization problems can be classified as deterministic or
stochastic programming problems.
(i) Stochastic programming problem
In this type of optimization problem, some or all of the design variables are expressed
probabilistically (non-deterministically or stochastically). For example, estimating the life
span of a structure whose inputs, the concrete strength and the load capacity, are
probabilistic is a stochastic programming problem, as one can only estimate the life span of
the structure stochastically.
(ii) Deterministic programming problem

In this type of problem, all the design variables are deterministic.
Classification based on separability of the functions
Under this classification, optimization problems are classified as separable and non-separable
programming problems, depending on the separability of the objective and constraint
functions.
(i) Separable programming problems
In this type of problem the objective function and the constraints are separable. A function
is said to be separable if it can be expressed as the sum of n single-variable functions
$f_1(x_1), f_2(x_2), \ldots, f_n(x_n)$, i.e.

$f(X) = \sum_{i=1}^{n} f_i(x_i)$

and a separable programming problem can be expressed in standard form as:

Find X which minimizes $f(X) = \sum_{i=1}^{n} f_i(x_i)$

subject to

$g_j(X) = \sum_{i=1}^{n} g_{ij}(x_i) \le b_j, \quad j = 1, 2, \ldots, m$

where $b_j$ is a constant.
Classification based on the number of objective functions
Under this classification, optimization problems can be classified as single-objective and multi-objective
programming problems.
(i) Single-objective programming problem, in which there is only a single objective function.

(ii) Multi-objective programming problem

A multi-objective programming problem can be stated as follows:

Find X which minimizes $f_1(X), f_2(X), \ldots, f_k(X)$

subject to

$g_j(X) \le 0, \quad j = 1, 2, \ldots, m$

where $f_1, f_2, \ldots, f_k$ denote the objective functions to be minimized simultaneously.
For example in some design problems one might have to minimize the cost and weight of the
structural member for economy and, at the same time, maximize the load carrying capacity
under the given constraints.
Module 1 Lecture Notes 4
Classical and Advanced Techniques for Optimization
Having understood the various classifications of optimization problems in the previous
lecture, let us move on to the classical and advanced optimization techniques.
Classical Optimization Techniques
The classical optimization techniques are useful in finding the optimum solution, or
unconstrained maxima or minima, of continuous and differentiable functions. These are
analytical methods that make use of differential calculus to locate the optimum solution.
The classical methods have limited scope in practical applications, as many practical problems
involve objective functions that are not continuous and/or differentiable. Yet the study of these
classical techniques of optimization forms a basis for developing most of the numerical
techniques that have evolved into advanced techniques more suitable to today's practical
problems. These methods assume that the function is differentiable twice with respect to the
design variables and that the derivatives are continuous. Three main types of problems can be
handled by the classical optimization techniques, viz., single variable functions, multivariable
functions with no constraints and multivariable functions with both equality and inequality
constraints. For problems with equality constraints the Lagrange multiplier method can be
used. If the problem has inequality constraints, the Kuhn-Tucker conditions can be used to
identify the optimum solution. These methods lead to a set of nonlinear simultaneous
equations that may be difficult to solve. These classical methods of optimization are further
discussed in Module 2.
The other methods of optimization include:

- Linear programming: studies the case in which the objective function f is linear and the set A is specified using only linear equalities and inequalities (A is the design variable space).
- Integer programming: studies linear programs in which some or all variables are constrained to take on integer values.
- Quadratic programming: allows the objective function to have quadratic terms, while the set A must be specified with linear equalities and inequalities.
- Nonlinear programming: studies the general case in which the objective function or the constraints or both contain nonlinear parts.
- Stochastic programming: studies the case in which some of the constraints depend on random variables.
- Dynamic programming: studies the case in which the optimization strategy is based on splitting the problem into smaller sub-problems.
- Combinatorial optimization: is concerned with problems where the set of feasible solutions is discrete or can be reduced to a discrete one.
- Infinite-dimensional optimization: studies the case when the set of feasible solutions is a subset of an infinite-dimensional space, such as a space of functions.
- Constraint satisfaction: studies the case in which the objective function f is constant (this is used in artificial intelligence, particularly in automated reasoning).
Most of these techniques will be discussed in subsequent modules.
Advanced Optimization Techniques
Hill climbing

Hill climbing is a graph search algorithm in which the current path is extended with a
successor node that is closer to the solution than the end of the current path.

In simple hill climbing, the first closer node is chosen, whereas in steepest ascent hill
climbing all successors are compared and the one closest to the solution is chosen. Both
forms fail if there is no closer node, which may happen if there are local maxima in the
search space that are not solutions. Steepest ascent hill climbing is similar to best-first
search, but the latter tries all possible extensions of the current path in order,
whereas steepest ascent only tries one.

Hill climbing is widely used in artificial intelligence for reaching a goal state
from a starting node. The choice of next node and of starting node can be varied to give a
number of related algorithms.
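The steepest-ascent variant described above can be sketched in a few lines. The objective function and the integer-grid neighbourhood below are illustrative choices, not from the lecture; on a landscape with local maxima, the same loop would stall at the first peak it reaches:

```python
def hill_climb(f, start, steps=1000):
    """Steepest-ascent hill climbing on an integer 2-D grid: at each step
    compare all neighbours, move to the best one, stop when none improves."""
    x, y = start
    for _ in range(steps):
        neighbours = [(x + dx, y + dy)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx, dy) != (0, 0)]
        best = max(neighbours, key=f)
        if f(best) <= f((x, y)):
            return (x, y)          # no closer node exists: a (local) maximum
        x, y = best
    return (x, y)

# Illustrative objective: a single smooth bump with its peak at (3, -2).
def f(p):
    x, y = p
    return -((x - 3) ** 2 + (y + 2) ** 2)

peak = hill_climb(f, start=(-10, 10))
print(peak)  # climbs to the global maximum (3, -2) on this unimodal objective
```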
Simulated annealing

The name and inspiration come from the annealing process in metallurgy, a technique
involving heating and controlled cooling of a material to increase the size of its
crystals and reduce their defects. The heat causes the atoms to become unstuck from
their initial positions (a local minimum of the internal energy) and wander randomly
through states of higher energy; the slow cooling gives them more chances of finding
configurations with lower internal energy than the initial one.
In the simulated annealing method, each point of the search space is compared to a
state of some physical system, and the function to be minimized is interpreted as the
internal energy of the system in that state. Therefore the goal is to bring the system,
from an arbitrary initial state, to a state with the minimum possible energy.
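A minimal sketch of the method follows; the energy landscape, neighbourhood move, and cooling schedule are all illustrative choices. Downhill moves are always accepted, while uphill moves are accepted with probability exp(-delta/T), which shrinks as the temperature T is lowered:

```python
import math
import random

def simulated_annealing(f, x0, t0=10.0, cooling=0.95, steps=2000, seed=42):
    """Minimize f, interpreted as the internal energy of a state: accept
    downhill moves always, uphill moves with probability exp(-delta/T),
    and cool the temperature T slowly toward zero."""
    rng = random.Random(seed)
    x, t = x0, t0
    best = x
    for _ in range(steps):
        candidate = x + rng.uniform(-1.0, 1.0)   # random neighbouring state
        delta = f(candidate) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = candidate
        if f(x) < f(best):
            best = x
        t = max(t * cooling, 1e-6)               # geometric cooling schedule
    return best

# Illustrative energy landscape with many local minima; global minimum at x = 0.
def energy(x):
    return x * x + 3.0 * abs(math.sin(5.0 * x))

x_min = simulated_annealing(energy, x0=8.0)
print(x_min, energy(x_min))
```

The early high-temperature phase lets the state escape the local minima created by the sine term; the late low-temperature phase behaves like a greedy descent.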
Genetic algorithms

A genetic algorithm (GA) is a search technique used in computer science to find
approximate solutions to optimization and search problems. Specifically, it falls into
the category of local search techniques and is therefore generally an incomplete
search. Genetic algorithms are a particular class of evolutionary algorithms that use
techniques inspired by evolutionary biology such as inheritance, mutation, selection,
and crossover (also called recombination).

Genetic algorithms are typically implemented as a computer simulation in which a
population of abstract representations (called chromosomes) of candidate solutions
(called individuals) to an optimization problem evolves toward better solutions.
Traditionally, solutions are represented in binary as strings of 0s and 1s, but other
encodings are also possible. The evolution starts from a population of completely
random individuals and occurs in generations. In each generation, the fitness of the
whole population is evaluated, multiple individuals are stochastically selected from
the current population (based on their fitness) and modified (mutated or recombined)
to form a new population. The new population is then used in the next iteration of the
algorithm.
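The generational loop just described can be sketched as a toy GA. All parameters, the tournament selection scheme, and the OneMax fitness function (count of 1s, maximized by the all-ones string) are illustrative choices, not from the lecture:

```python
import random

rng = random.Random(0)
N_BITS, POP, GENS = 20, 30, 60

def fitness(bits):
    """OneMax: fitness is simply the number of 1s in the chromosome."""
    return sum(bits)

def select(pop):
    """Tournament selection: pick the fitter of two random individuals."""
    a, b = rng.choice(pop), rng.choice(pop)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    """One-point crossover (recombination)."""
    cut = rng.randrange(1, N_BITS)
    return p1[:cut] + p2[cut:]

def mutate(bits, rate=0.02):
    """Flip each bit independently with a small probability."""
    return [b ^ 1 if rng.random() < rate else b for b in bits]

# Start from a completely random population and evolve it in generations.
population = [[rng.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP)]

best = max(population, key=fitness)
print(fitness(best))
```

Selection pressure pushes the population toward high-fitness strings while mutation maintains diversity; production GAs add elitism and problem-specific encodings.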
Ant colony optimization

In the real world, ants (initially) wander randomly and, upon finding food, return to
their colony while laying down pheromone trails. If other ants find such a path, they
are likely not to keep traveling at random, but instead follow the trail laid by earlier
ants, returning and reinforcing it if they eventually find any food.
Over time, however, the pheromone trail starts to evaporate, thus reducing its
attractive strength. The more time it takes for an ant to travel down the path and back
again, the more time the pheromones have to evaporate. A short path, by comparison,
gets marched over faster, and thus the pheromone density remains high as it is laid on
the path as fast as it can evaporate. Pheromone evaporation also has the advantage of
avoiding convergence to a locally optimal solution. If there were no evaporation at
all, the paths chosen by the first ants would tend to be excessively attractive to the
following ones. In that case, the exploration of the solution space would be
constrained.
Thus, when one ant finds a good (short) path from the colony to a food source, other
ants are more likely to follow that path, and such positive feedback eventually leaves
all the ants following a single path. The idea of the ant colony algorithm is to mimic
this behavior with "simulated ants" walking around the search space representing the
problem to be solved.
Ant colony optimization algorithms have been used to produce near-optimal solutions
to the traveling salesman problem. They have an advantage over simulated annealing
and genetic algorithm approaches when the graph may change dynamically. The ant
colony algorithm can be run continuously and can adapt to changes in real time. This
is of interest in network routing and urban transportation systems.
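A minimal sketch of the idea on a tiny traveling salesman instance follows; the distance matrix and all parameters are made up for illustration. Ants build tours biased by pheromone and inverse distance, pheromone evaporates each iteration, and shorter tours deposit more pheromone:

```python
import random

rng = random.Random(1)

# Illustrative 4-city symmetric distance matrix (made-up numbers).
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
n = len(dist)
pheromone = [[1.0] * n for _ in range(n)]

def tour_length(tour):
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def build_tour():
    """Each ant starts at city 0 and picks the next city with probability
    proportional to pheromone * (1 / distance)."""
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        i = tour[-1]
        cities = list(unvisited)
        weights = [pheromone[i][j] * (1.0 / dist[i][j]) for j in cities]
        tour.append(rng.choices(cities, weights=weights)[0])
        unvisited.remove(tour[-1])
    return tour

best = None
for _ in range(50):                               # iterations
    tours = [build_tour() for _ in range(10)]     # 10 ants per iteration
    for row in pheromone:                         # evaporation
        for j in range(n):
            row[j] *= 0.9
    for tour in tours:                            # shorter tours deposit more
        deposit = 1.0 / tour_length(tour)
        for i in range(n):
            a, b = tour[i], tour[(i + 1) % n]
            pheromone[a][b] += deposit
            pheromone[b][a] += deposit
    it_best = min(tours, key=tour_length)
    if best is None or tour_length(it_best) < tour_length(best):
        best = it_best

print(best, tour_length(best))  # shortest cycle for this matrix has length 18
```

Because the pheromone matrix is updated continuously, the same loop can adapt if the distance matrix changes between iterations, which is the dynamic-graph advantage noted above.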
References / Further Reading:
1. Deb K., Multi-Objective Optimization using Evolutionary Algorithms, First Edition, John Wiley & Sons Pte Ltd, 2002.
2. Deb K., Optimization for Engineering Design: Algorithms and Examples, Prentice Hall of India Pvt. Ltd., New Delhi, 1995.
3. Dorigo M. and T. Stutzle, Ant Colony Optimization, Prentice Hall of India Pvt. Ltd., New Delhi, 2005.
4. Hillier F.S. and G.J. Lieberman, Operations Research, CBS Publishers & Distributors, New Delhi, 1987.
5. Jain S.K. and V.P. Singh, Water Resources Systems Planning and Management, Elsevier B.V., The Netherlands, 2003.
6. Loucks D.P., J.R. Stedinger, and D.A. Haith, Water Resources Systems Planning and Analysis, Prentice Hall, N.J., 1981.
7. Mays L.W. and K. Tung, Hydrosystems Engineering and Management, McGraw-Hill Inc., New York, 1992.
8. Rao S.S., Engineering Optimization: Theory and Practice, Third Edition, New Age International Limited, New Delhi, 2000.
9. Ravindran A., D.T. Phillips and J.J. Solberg, Operations Research: Principles and Practice, John Wiley & Sons, New York, 2001.
10. Taha H.A., Operations Research: An Introduction, Prentice-Hall of India Pvt. Ltd., New Delhi, 2005.
11. Vedula S. and P.P. Mujumdar, Water Resources Systems: Modelling Techniques and Analysis, Tata McGraw Hill, New Delhi, 2005.
Optimization Methods: Optimization using Calculus - Learning Objectives
Module 2: Optimization using Calculus
Learning Objectives
Optimization problems with continuous differentiable functions can be solved using the
classical methods of optimization. These analytical methods employ differential calculus to
locate the optimum points. The classical optimization techniques are not applicable where the
functions are not continuous or not differentiable, which happens with many practical
problems. However, a study of these calculus-based methods forms a foundation for the
development of most of the numerical techniques presented in later modules.
In this module a brief introduction to stationary points is followed by a presentation of the
necessary and sufficient conditions for locating the optimum solution of single-variable and
two-variable functions. Convexity and concavity of these functions are explained. Then the
reader is introduced to the optimization of single and multivariable functions (with and
without equality constraints). A few examples are discussed for each type. An insight is also
given into the Lagrangian function and Hessian matrix formulation. Finally we take a look at
the Kuhn-Tucker conditions with examples.
This module will help the reader to know about
1. Stationary points as maxima, minima and points of inflection
2. Concavity and convexity of functions
3. Necessary and sufficient conditions for optimization for both single and multivariable functions
4. The Hessian matrix
5. Optimization of multivariable functions with and without equality constraints
6. Kuhn-Tucker conditions
Module 2 Lecture Notes 1
Stationary points: Functions of Single and Two Variables
Introduction
In this session, stationary points of a function are defined. The necessary and sufficient
conditions for the relative maximum of a function of single or two variables are also
discussed. The global optimum is also defined in comparison to the relative or local optimum.
Stationary points
For a continuous and differentiable function f(x), a stationary point x* is a point at which the
slope of the function vanishes, i.e. f'(x) = 0 at x = x*, where x* belongs to its domain of
definition.
Fig. 1: Stationary points of a function: a minimum, an inflection point and a maximum.
A stationary point may be a minimum, maximum or an inflection point (Fig. 1).
Relative and Global Optimum
A function is said to have a relative or local minimum at x = x* if $f(x^*) \le f(x^* + h)$ for all
sufficiently small positive and negative values of h, i.e. in the near vicinity of the point x*.
Similarly, a point x* is called a relative or local maximum if $f(x^*) \ge f(x^* + h)$ for all values
of h sufficiently close to zero. A function is said to have a global or absolute minimum at x =
x* if $f(x^*) \le f(x)$ for all x in the domain over which f(x) is defined. Similarly, a function is
said to have a global or absolute maximum at x = x* if $f(x^*) \ge f(x)$ for all x in the domain
over which f(x) is defined.
Figure 2 shows the global and local optimum points.
Fig. 2: Relative and global optima of f(x) on [a, b]. In the left plot the single relative minimum is also the global optimum. In the right plot A1, A2 and A3 are relative maxima (A2 is the global maximum) and B1 and B2 are relative minima (B1 is the global minimum).
Functions of a single variable
Consider the function f(x) defined for $a \le x \le b$. To find the value of $x^* \in [a, b]$ such that
x = x* maximizes f(x), we need to solve a single-variable optimization problem. We have the
following theorems to understand the necessary and sufficient conditions for the relative
maximum of a function of a single variable.

Necessary condition: For a single-variable function f(x) defined for $x \in [a, b]$ which has a
relative maximum at x = x*, where $x^* \in (a, b)$, if the derivative $f'(x) = df(x)/dx$ exists as a
finite number at x = x*, then $f'(x^*) = 0$. This can be understood from the following proof.
Proof.

Since f'(x*) is stated to exist, we have

$f'(x^*) = \lim_{h \to 0} \frac{f(x^* + h) - f(x^*)}{h}$    (1)

From our earlier discussion on relative maxima we have $f(x^*) \ge f(x^* + h)$ for h sufficiently close to zero. Hence

$\frac{f(x^* + h) - f(x^*)}{h} \ge 0, \quad h < 0$    (2)

$\frac{f(x^* + h) - f(x^*)}{h} \le 0, \quad h > 0$    (3)

which implies that the limit taken through sufficiently small negative values of h gives $f'(x^*) \ge 0$, while
the limit through sufficiently small positive values of h gives $f'(x^*) \le 0$. In order to satisfy both (2) and
(3), $f'(x^*) = 0$. Hence this gives the necessary condition for a relative maximum of f(x) at x = x*.

It has to be kept in mind that the above theorem holds for a relative minimum as well. The
theorem considers only a domain where the function is continuous and differentiable. It
cannot indicate whether a maximum or minimum exists at a point where the derivative fails to
exist. This scenario is shown in Fig. 3, where the left and right slopes m1 and m2 at the point of
the maximum are unequal; the derivative does not exist there, so the theorem does not apply. The
theorem also does not cover the case where the maximum or minimum occurs at an end point of the
interval of definition, where only a one-sided derivative exists. Nor does the theorem say
whether the function has a maximum or minimum at every point where f'(x) = 0, since the
condition f'(x*) = 0 holds for all stationary points, which include points of inflection that are
neither maxima nor minima. A point of inflection is shown in Fig. 1.
Fig. 3: A maximum of f(x) at x* on [a, b] where the left and right slopes m1 and m2 are unequal, so that f'(x*) does not exist.
Sufficient condition: For the same function stated above, let $f'(x^*) = f''(x^*) = \ldots = f^{(n-1)}(x^*) = 0$,
but $f^{(n)}(x^*) \ne 0$. Then it can be said that $f(x^*)$ is (a) a minimum value of f(x) if
$f^{(n)}(x^*) > 0$ and n is even; (b) a maximum value of f(x) if $f^{(n)}(x^*) < 0$ and n is even;
(c) neither a maximum nor a minimum if n is odd.
Proof

Applying Taylor's theorem with remainder after n terms, we have

$f(x^* + h) = f(x^*) + h f'(x^*) + \frac{h^2}{2!} f''(x^*) + \ldots + \frac{h^{n-1}}{(n-1)!} f^{(n-1)}(x^*) + \frac{h^n}{n!} f^{(n)}(x^* + \theta h)$    (4)

for $0 < \theta < 1$. Since $f'(x^*) = f''(x^*) = \ldots = f^{(n-1)}(x^*) = 0$, equation (4) reduces to

$f(x^* + h) - f(x^*) = \frac{h^n}{n!} f^{(n)}(x^* + \theta h)$

As $f^{(n)}(x^*) \ne 0$, there is an interval around x* in which $f^{(n)}$ has the same sign as $f^{(n)}(x^*)$.
When n is even, $\frac{h^n}{n!}$ is positive irrespective of the sign of h, so $f(x^* + h) - f(x^*)$ takes the
sign of $f^{(n)}(x^*)$: the point x* is a relative minimum if $f^{(n)}(x^*)$ is positive, with f(x) convex around
x*, and a relative maximum if $f^{(n)}(x^*)$ is negative, with f(x) concave around x*. When n is odd, $\frac{h^n}{n!}$ changes sign with the
change in the sign of h and hence the point x* is neither a maximum nor a minimum. In this
case the point x* is called a point of inflection.
Example 1.

Find the optimum value of the function $f(x) = x^2 + 3x - 5$ and also state whether the function
attains a maximum or a minimum.

Solution

$f'(x) = 2x + 3 = 0$ for a maximum or minimum, so x* = -3/2.

$f''(x^*) = 2$, which is positive; hence the point x* = -3/2 is a point of minimum and the function
attains a minimum value of -29/4 at this point.
Example 2.

Find the optimum value of the function $f(x) = (x - 2)^4$ and also state whether the function
attains a maximum or a minimum.

Solution

$f'(x) = 4(x - 2)^3 = 0$ for a maximum or minimum, so x = x* = 2.

$f''(x^*) = 12(x^* - 2)^2 = 0$ at x* = 2
$f'''(x^*) = 24(x^* - 2) = 0$ at x* = 2
$f^{(4)}(x^*) = 24$ at x* = 2

Hence $f^{(n)}(x^*)$ is positive with n = 4, which is even; hence the point x = x* = 2 is a point of
minimum and the function attains a minimum value of 0 at this point.
Example 3.
Analyze the function $f(x) = 12x^5 - 45x^4 + 40x^3 + 5$ and classify the stationary points as
maxima, minima and points of inflection.

Solution

$f'(x) = 60x^4 - 180x^3 + 120x^2 = 0$
$\Rightarrow x^4 - 3x^3 + 2x^2 = 0$
$\Rightarrow x = 0, 1, 2$

Consider the point x = x* = 0:

$f''(x^*) = 240(x^*)^3 - 540(x^*)^2 + 240x^* = 0$ at x* = 0
$f'''(x^*) = 720(x^*)^2 - 1080x^* + 240 = 240$ at x* = 0

Since the third derivative is non-zero (n = 3 is odd), x = x* = 0 is neither a point of maximum
nor minimum, but a point of inflection.

Consider x = x* = 1:

$f''(x^*) = 240(x^*)^3 - 540(x^*)^2 + 240x^* = -60$ at x* = 1

Since the second derivative is negative, the point x = x* = 1 is a point of local maximum with a
maximum value of f(x) = 12 - 45 + 40 + 5 = 12.

Consider x = x* = 2:

$f''(x^*) = 240(x^*)^3 - 540(x^*)^2 + 240x^* = 240$ at x* = 2

Since the second derivative is positive, the point x = x* = 2 is a point of local minimum with a
minimum value of f(x) = -11.
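The classifications in Example 3 can be cross-checked numerically: a minimal sketch that probes f on both sides of each stationary point (the step size h is an arbitrary small value chosen for illustration):

```python
def f(x):
    return 12 * x**5 - 45 * x**4 + 40 * x**3 + 5

def classify(x_star, h=1e-3):
    """Numerically classify a stationary point by sampling f on both sides."""
    left, mid, right = f(x_star - h), f(x_star), f(x_star + h)
    if mid < left and mid < right:
        return "minimum"
    if mid > left and mid > right:
        return "maximum"
    return "inflection"          # f rises on one side and falls on the other

results = {x: classify(x) for x in (0, 1, 2)}
print(results)  # {0: 'inflection', 1: 'maximum', 2: 'minimum'}
```

This agrees with the derivative tests above; note that such sampling only works when h is small enough that the leading nonzero derivative term dominates.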
Example 4.
The horsepower generated by a Pelton wheel is proportional to u(v - u), where u is the velocity
of the wheel, which is variable, and v is the velocity of the jet, which is fixed. Show that the
efficiency of the Pelton wheel will be maximum at u = v/2.
Solution

$f = K u (v - u)$

where K is a proportionality constant (assumed positive).

$\frac{\partial f}{\partial u} = K(v - 2u) = 0 \;\Rightarrow\; u = \frac{v}{2}$

$\frac{\partial^2 f}{\partial u^2} = -2K$

which is negative. Hence, f is maximum at $u = \frac{v}{2}$.
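The result can also be confirmed by brute force. With K = 1 and v = 10 (arbitrary illustrative values), sampling f(u) = K u (v - u) on a fine grid over [0, v] locates the peak at u = v/2:

```python
# Arbitrary illustrative values: K = 1, v = 10; f(u) = K*u*(v - u).
K, v = 1.0, 10.0

def f(u):
    return K * u * (v - u)

# Sample u on a fine grid over [0, v] and locate the maximizing value.
us = [v * i / 10000 for i in range(10001)]
u_best = max(us, key=f)
print(u_best, f(u_best))  # 5.0 25.0
```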
Functions of two variables

This concept may be easily extended to functions of multiple variables. Functions of two
variables are best illustrated by contour maps, analogous to geographical maps. A contour is a
line representing a constant value of f(X), as shown in Fig. 4. From such a map we can identify
maxima, minima and points of inflection.
Necessary conditions

As can be seen in Fig. 4 and Fig. 5, perturbations from a point of local minimum in any direction
result in an increase in the response function f(X), i.e. the slope of the function is zero at this
point of local minimum. Similarly, at points of local maximum and at points of inflection the slope
is zero, and the first derivatives of the function with respect to the variables are zero. This gives us

$\frac{\partial f}{\partial x_1} = 0; \quad \frac{\partial f}{\partial x_2} = 0$

at the stationary points, i.e. the gradient vector of f(X), $\nabla_x f$, at X = X* = [x1, x2], defined as
follows, must equal zero:

$\nabla_x f = \begin{bmatrix} \dfrac{\partial f}{\partial x_1}(X^*) \\ \dfrac{\partial f}{\partial x_2}(X^*) \end{bmatrix} = 0$

This is the necessary condition.

Fig. 4: Contour map of f(X) in the (x1, x2) plane.
Fig. 5: Surface plot of f(X) showing global and relative maxima and global and relative minima.
Sufficient conditions
Consider the following second order derivatives:

$\frac{\partial^2 f}{\partial x_1^2}; \quad \frac{\partial^2 f}{\partial x_2^2}; \quad \frac{\partial^2 f}{\partial x_1 \partial x_2}$

The Hessian matrix H is made using the above second order derivatives.
$H = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1 \partial x_2} \\ \dfrac{\partial^2 f}{\partial x_1 \partial x_2} & \dfrac{\partial^2 f}{\partial x_2^2} \end{bmatrix}_{[x_1, x_2]}$

a) If H is positive definite, then the point X = [x1, x2] is a point of local minimum.
b) If H is negative definite, then the point X = [x1, x2] is a point of local maximum.
c) If H is neither, then the point X = [x1, x2] is neither a point of maximum nor of minimum.

A square matrix is positive definite if all its eigenvalues are positive, and it is negative
definite if all its eigenvalues are negative. If some of the eigenvalues are positive and some
are negative, the matrix is neither positive definite nor negative definite.

To calculate the eigenvalues $\lambda$ of a square matrix A, the following equation is solved:

$|A - \lambda I| = 0$

The above rules give the sufficient conditions for the optimization problem of two variables.
Optimization of multiple-variable problems will be discussed in detail in lecture notes 3
(Module 2).
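The eigenvalue rules (a)-(c) can be packaged as a small helper. This sketch computes the eigenvalues of a symmetric 2x2 Hessian in closed form (the quadratic formula applied to |H - lambda*I| = 0); the sample matrices are made up for illustration:

```python
import math

def classify_hessian(h11, h12, h22):
    """Classify a stationary point from a symmetric 2x2 Hessian
    [[h11, h12], [h12, h22]] using the signs of its eigenvalues."""
    # |H - lambda*I| = lambda^2 - (h11 + h22)*lambda + (h11*h22 - h12^2) = 0
    tr, det = h11 + h22, h11 * h22 - h12 * h12
    disc = math.sqrt(tr * tr - 4 * det)   # always real for a symmetric H
    lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
    if lam1 > 0 and lam2 > 0:
        return "local minimum"            # H positive definite
    if lam1 < 0 and lam2 < 0:
        return "local maximum"            # H negative definite
    return "neither (saddle point)"       # mixed signs

print(classify_hessian(2, 0, 4))    # both eigenvalues positive -> local minimum
print(classify_hessian(-2, 0, -3))  # both negative -> local maximum
print(classify_hessian(-4, -2, 4))  # mixed signs -> neither (saddle point)
```

For more than two variables the same test applies, but the eigenvalues are found numerically rather than in closed form.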
Example 5.
Locate the stationary points of f(X) and classify them as relative maxima, relative minima or
neither, based on the rules discussed in the lecture, where

$f(X) = \frac{2x_1^3}{3} - 2x_1 x_2 - 5x_1 + 2x_2^2 + 4x_2 + 5$
Solution

From $\dfrac{\partial f(X)}{\partial x_2} = -2x_1 + 4x_2 + 4 = 0$, we get $x_1 = 2x_2 + 2$.

From $\dfrac{\partial f(X)}{\partial x_1} = 2x_1^2 - 2x_2 - 5 = 0$, substituting $x_1 = 2x_2 + 2$:

$8x_2^2 + 14x_2 + 3 = 0$
$(2x_2 + 3)(4x_2 + 1) = 0$
$x_2 = -3/2 \text{ or } x_2 = -1/4$

so the two stationary points are

X1 = [-1, -3/2] and X2 = [3/2, -1/4]

The second order derivatives of f(X) are

$\frac{\partial^2 f}{\partial x_1^2} = 4x_1; \quad \frac{\partial^2 f}{\partial x_2^2} = 4; \quad \frac{\partial^2 f}{\partial x_1 \partial x_2} = \frac{\partial^2 f}{\partial x_2 \partial x_1} = -2$

so the Hessian of f(X) is

$H = \begin{bmatrix} 4x_1 & -2 \\ -2 & 4 \end{bmatrix}, \qquad \lambda I - H = \begin{bmatrix} \lambda - 4x_1 & 2 \\ 2 & \lambda - 4 \end{bmatrix}$

At X1 = [-1, -3/2]:

$|\lambda I - H| = (\lambda + 4)(\lambda - 4) - 4 = \lambda^2 - 20 = 0$
$\lambda_1 = +\sqrt{20}, \quad \lambda_2 = -\sqrt{20}$

Since one eigenvalue is positive and one negative, X1 is neither a relative maximum nor a
relative minimum; it is a saddle point.

At X2 = [3/2, -1/4]:

$|\lambda I - H| = (\lambda - 6)(\lambda - 4) - 4 = \lambda^2 - 10\lambda + 20 = 0$
$\lambda_1 = 5 + \sqrt{5}, \quad \lambda_2 = 5 - \sqrt{5}$

Since both eigenvalues are positive, X2 is a point of local minimum.

The minimum value of f(X) is -0.375.
Example 6
The ultimate strength attained by concrete is found to be based on a certain empirical
relationship between the ratios of cement and concrete used. Our objective is to maximize
strength attained by hardened concrete, given byf(X) = , wherex2 21 1 2 220 2 6 3 / 2x x x x+ + 1
andx2 are variables based on cement and concrete ratios.
Solution
Given f(X) = 20 + 2x1 + 6x2 - x1² - (3/2)x2², where X = [x1, x2]
To determine the stationary point X*, set each component of the gradient vector to zero:
∇f = [ ∂f/∂x1 (X*);  ∂f/∂x2 (X*) ] = [ 2 - 2x1;  6 - 3x2 ] = [0; 0]
Solving, we get X* = [1, 2]
∂²f/∂x1² = -2;  ∂²f/∂x2² = -3;  ∂²f/∂x1∂x2 = 0
H = [[-2, 0], [0, -3]]
|λI - H| = det([[λ + 2, 0], [0, λ + 3]]) = (λ + 2)(λ + 3) = 0
Here the eigenvalues do not depend on X, and λ1 = -2, λ2 = -3. Since both eigenvalues are negative, f(X) is concave, and the ratio x1:x2 = 1:2 gives a global maximum strength of f(X) = 27 units.
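The result of Example 6 can be verified numerically; a minimal sketch, assuming the reconstructed objective above:

```python
import numpy as np

def strength(x1, x2):
    # f(X) = 20 + 2x1 + 6x2 - x1^2 - (3/2)x2^2
    return 20 + 2 * x1 + 6 * x2 - x1**2 - 1.5 * x2**2

# The Hessian is constant: diag(-2, -3)
H = np.array([[-2.0, 0.0],
              [0.0, -3.0]])

print(np.linalg.eigvalsh(H))  # both eigenvalues negative -> f is concave
print(strength(1.0, 2.0))     # 27.0, the global maximum at X* = [1, 2]
```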
Module 2 Lecture Notes 2
Convexity and Concavity of Functions of One and Two Variables
Introduction
In the previous class we studied stationary points and the definition of relative and global optima. The necessary and sufficient conditions required for a relative optimum in functions of one variable, and their extension to functions of two variables, were also studied. In
this lecture, determination of the convexity and concavity of functions is discussed.
The analyst must determine whether the objective functions and constraint equations are
convex or concave. In real-world problems, if the objective function or the constraints are not
convex or concave, the problem is usually mathematically intractable.
Functions of one variable
Convex function
A real-valued function f defined on an interval (or on any convex subset C of some vector space) is called convex if, for any two points a and b in its domain C and any t in [0,1], we have

f(ta + (1-t)b) ≤ t f(a) + (1-t) f(b)
Fig. 1
In other words, a function is convex if and only if its epigraph (the set of points lying on or
above the graph) is a convex set. A function is also said to be strictly convex if
f(ta + (1-t)b) < t f(a) + (1-t) f(b)

for any t in (0,1); the line segment connecting any two points on the graph then lies strictly above the graph. These relationships are illustrated in Fig. 1.
Testing for convexity of a single variable function
A function is convex if its slope is non-decreasing, i.e. ∂²f/∂x² ≥ 0. It is strictly convex if its slope is strictly increasing, i.e. ∂²f/∂x² > 0, throughout the function.
Properties of convex functions
A convex function f, defined on some convex open interval C, is continuous on C and differentiable at all but at most countably many points. If C is closed, then f may fail to be continuous at the end points of C.
A continuous function on an interval C is convex if and only if

f((a + b)/2) ≤ (f(a) + f(b))/2

for all a and b in C.
A differentiable function of one variable is convex on an interval if and only if its derivative
is monotonically non-decreasing on that interval.
A continuously differentiable function of one variable is convex on an interval if and only if the function lies above all of its tangents: f(b) ≥ f(a) + f'(a)(b - a) for all a and b in the interval.
A twice differentiable function of one variable is convex on an interval if and only if its second derivative is non-negative in that interval; this gives a practical test for convexity. If its second derivative is positive then it is strictly convex, but the converse does not hold, as shown by f(x) = x⁴.
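A small numerical check of the last point (a sketch, not from the notes): f(x) = x⁴ satisfies the strict convexity inequality even though its second derivative vanishes at x = 0:

```python
def f(x):
    return x**4

def second_derivative(x):
    return 12 * x**2

# The second derivative is zero at x = 0 ...
assert second_derivative(0) == 0

# ... yet the strict convexity inequality f(ta + (1-t)b) < t f(a) + (1-t) f(b)
# still holds, checked here at a sample of interior points:
a, b = -1.0, 2.0
for t in (0.1, 0.3, 0.5, 0.9):
    assert f(t * a + (1 - t) * b) < t * f(a) + (1 - t) * f(b)
print("x**4 satisfies the strict inequality even though f''(0) = 0")
```

Sampling a few points does not prove strict convexity, of course; it only illustrates the claim.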
More generally, a continuous, twice differentiable function of several variables is convex on a convex set if and only if its Hessian matrix is positive semi-definite on the interior of the convex set.
If two functions f and g are convex, then so is any weighted combination af + bg with non-negative coefficients a and b. Likewise, if f and g are convex, then the function max{f, g} is convex.
A strictly convex function has at most one minimum, which, if it exists, is also the global minimum.
Examples
The second derivative of x² is 2; it follows that x² is a convex function of x.
The absolute value function |x| is convex, even though it does not have a derivative at x = 0.
The function f with domain [0,1] defined by f(0) = f(1) = 1 and f(x) = 0 for 0 < x < 1 is convex; it is continuous on the open interval (0,1), but not continuous at 0 and 1.
Concave function
A real-valued function f is called concave if, for any two points a and b in its domain and any t in [0,1], we have

f(ta + (1-t)b) ≥ t f(a) + (1-t) f(b)

Additionally, f(x) is strictly concave if

f(ta + (1-t)b) > t f(a) + (1-t) f(b)

for any t in (0,1).
These relationships are illustrated in Fig. 2
Fig. 2
Testing for concavity of a single variable function
A function is concave if its slope is non-increasing, i.e. ∂²f/∂x² ≤ 0. It is strictly concave if its slope is strictly decreasing, i.e. ∂²f/∂x² < 0, throughout the function.
Properties of concave functions
A continuous function on C is concave if and only if

f((a + b)/2) ≥ (f(a) + f(b))/2

for all a and b in C.
Equivalently, f(x) is concave on [a, b] if and only if the function -f(x) is convex on every subinterval of [a, b].
If f(x) is twice differentiable, then f(x) is concave if and only if f''(x) is non-positive. If its second derivative is negative then it is strictly concave, but the converse does not hold, as shown by f(x) = -x⁴.
A function is called quasiconcave if and only if there is an x0 such that for all x < x0 the function is non-decreasing, while for all x > x0 it is non-increasing. x0 can also be -∞ or +∞, making the function non-increasing or non-decreasing, respectively, for all x. The opposite of quasiconcave is quasiconvex.
Example 1
Consider the example in lecture notes 1 for a function of a single variable. Locate the stationary points of

f(x) = 12x⁵ - 45x⁴ + 40x³ + 5

and find out if the function is convex, concave or neither at the points of optima based on the testing rules discussed above.
Solution
4 3 2
4 3 2
'( ) 60 180 120 0
3 2 0
or 0,1,2
f x x x x
x x x
x
= + =
=> + =
=
Consider the point x = x* = 0
f''(x*) = 240(x*)³ - 540(x*)² + 240x* = 0 at x* = 0
f'''(x*) = 720(x*)² - 1080x* + 240 = 240 at x* = 0
Since the third derivative is non-zero, x = x* = 0 is neither a point of maximum nor minimum but a point of inflection. Hence the function is neither convex nor concave at this point.
Consider x = x* = 1
f''(x*) = 240(x*)³ - 540(x*)² + 240x* = -60 at x* = 1
Since the second derivative is negative, the point x = x* = 1 is a point of local maximum with a maximum value of f(x) = 12 - 45 + 40 + 5 = 12. At this point the function is concave since ∂²f/∂x² < 0.
Consider x = x* = 2
f''(x*) = 240(x*)³ - 540(x*)² + 240x* = 240 at x* = 2
Since the second derivative is positive, the point x = x* = 2 is a point of local minimum with a minimum value of f(x) = -11. At this point the function is convex since ∂²f/∂x² > 0.
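The classification in Example 1 can be confirmed numerically; a short sketch (not part of the original notes):

```python
def f(x):
    return 12 * x**5 - 45 * x**4 + 40 * x**3 + 5

def d2f(x):
    # second derivative
    return 240 * x**3 - 540 * x**2 + 240 * x

def d3f(x):
    # third derivative
    return 720 * x**2 - 1080 * x + 240

for x in (0, 1, 2):
    print(x, d2f(x), f(x))
# x = 0: f'' = 0 and f''' = 240 != 0 -> point of inflection
# x = 1: f'' = -60 < 0 -> local maximum, f(1) = 12
# x = 2: f'' = 240 > 0 -> local minimum, f(2) = -11
```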
Functions of two variables
A function of two variables, f(X) where X is a vector = [x1, x2], is strictly convex if

f(tX1 + (1-t)X2) < t f(X1) + (1-t) f(X2)

for any t in (0,1), where X1 and X2 are points located by the coordinates given in their respective vectors. Similarly, a two variable function is strictly concave if

f(tX1 + (1-t)X2) > t f(X1) + (1-t) f(X2)
The contour plot of a convex function is illustrated in Fig. 3.
[Fig. 3: contour plot of a convex function, plotted against x1 and x2]
The contour plot of a concave function is shown in Fig. 4.
[Fig. 4: contour plot of a concave function, plotted against x1 and x2]
To determine convexity or concavity of a function of multiple variables, the eigenvalues of
its Hessian matrix are examined and the following rules apply.
(a) If all eigenvalues of the Hessian are positive, the function is strictly convex.
(b) If all eigenvalues of the Hessian are negative, the function is strictly concave.
(c) If some eigenvalues are positive and some are negative, or if some are zero, the function is neither strictly concave nor strictly convex.
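These rules translate directly into a small helper; a sketch using numpy (the function name `classify_hessian` is our own, not from the notes):

```python
import numpy as np

def classify_hessian(H, tol=1e-10):
    """Apply rules (a)-(c): classify a symmetric Hessian by its eigenvalues."""
    ev = np.linalg.eigvalsh(np.asarray(H, dtype=float))
    if np.all(ev > tol):
        return "strictly convex"
    if np.all(ev < -tol):
        return "strictly concave"
    return "neither strictly convex nor strictly concave"

print(classify_hessian([[2, 0], [0, 3]]))    # strictly convex
print(classify_hessian([[-2, 0], [0, -3]]))  # strictly concave
print(classify_hessian([[2, 0], [0, -3]]))   # neither strictly convex nor strictly concave
```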
Example 2
Consider the example in lecture notes 1 for a function of two variables. Locate the stationary
points off(X) and find out if the function is convex, concave or neither at the points of
optima based on the rules discussed in this lecture.
f(X) = (2/3)x1³ - 2x1x2 - 5x1 + 2x2² + 4x2 + 5
Solution
∇f = [ ∂f/∂x1 (X*);  ∂f/∂x2 (X*) ] = [ 2x1² - 2x2 - 5;  -2x1 + 4x2 + 4 ] = [0; 0]
Solving the above, the two stationary points are
X1 = [-1, -3/2]
and
X2 = [3/2, -1/4]
The Hessian of f(X) is
∂²f/∂x1² = 4x1;  ∂²f/∂x2² = 4;  ∂²f/∂x1∂x2 = ∂²f/∂x2∂x1 = -2
H = [[4x1, -2], [-2, 4]]
λI - H = [[λ - 4x1, 2], [2, λ - 4]]
At X1 = [-1, -3/2]:
|λI - H| = det([[λ + 4, 2], [2, λ - 4]]) = (λ + 4)(λ - 4) - 4 = 0
λ² - 16 - 4 = 0
λ² = 20
λ1 = +√20,  λ2 = -√20
Since one eigenvalue is positive and one negative, X1 is neither a relative maximum nor a relative minimum. Hence at X1 the function is neither convex nor concave.
At X2 = [3/2, -1/4]:
|λI - H| = det([[λ - 6, 2], [2, λ - 4]]) = (λ - 6)(λ - 4) - 4 = 0
λ² - 10λ + 20 = 0
λ1 = 5 + √5,  λ2 = 5 - √5
Since both eigenvalues are positive, X2 is a local minimum, and the function is convex at this point.
Module 2 Lecture Notes 3
Optimization of Functions of Multiple Variables: Unconstrained Optimization
Introduction
In the previous lectures we learnt how to determine the convexity and concavity of functions of single and two variables. For functions of one and two variables we also learnt how to determine stationary points and examine higher derivatives to check for convexity and concavity, and tests were recommended to evaluate stationary points as local minima, local maxima or points of inflection.
In this lecture, functions of multiple variables, which are more difficult to analyze owing to the difficulty of graphical representation and the tedious calculations involved in mathematical analysis, will be studied for unconstrained optimization.
gradient vector and the Hessian matrix. Examples are discussed to show the implementation
of the technique.
Unconstrained optimization
If a convex function is to be minimized, the stationary point is the global minimum and the analysis is relatively straightforward, as discussed earlier. A similar situation exists for maximizing a concave function.
optimization of unconstrained function of several variables are given below.
Necessary condition
In case of multivariable functions a necessary condition for a stationary point of the function
f(X) is that each partial derivative is equal to zero. In other words, each element of the
gradient vector defined below must be equal to zero.
i.e. the gradient vector of f(X), ∇x f at X = X*, defined as follows, must be equal to zero:

∇x f = [ ∂f/∂x1 (X*);  ∂f/∂x2 (X*);  ... ;  ∂f/∂xn (X*) ] = 0
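The condition ∇f(X*) = 0 can be checked numerically with finite differences; a minimal sketch (the helper below is our own illustration, not from the notes):

```python
import numpy as np

def numerical_gradient(f, x, h=1e-6):
    """Central-difference approximation of the gradient vector."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# Example: f(X) = x1^2 + x2^2 has its stationary point at the origin
f = lambda x: x[0]**2 + x[1]**2
print(numerical_gradient(f, [0.0, 0.0]))  # approximately [0, 0]
```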
The proof given for the theorem on necessary condition for single variable optimization can
be easily extended to prove the present condition.
Sufficient condition
For a stationary point X* to be an extreme point, the matrix of second partial derivatives (Hessian matrix) of f(X) evaluated at X* must be:
(i) positive definite when X* is a point of relative minimum, and
(ii) negative definite when X* is a relative maximum point.
Proof (formulation of the Hessian matrix)
Taylor's theorem with remainder after two terms gives

f(X* + h) = f(X*) + Σi hi ∂f/∂xi (X*) + (1/2!) Σi Σj hi hj ∂²f/∂xi∂xj, the second derivatives being evaluated at X* + θh,  0 < θ < 1

Since X* is a stationary point, the first partial derivatives vanish, so for small h the sign of f(X* + h) - f(X*) is decided by the quadratic form

Q = (1/2) Σi Σj hi hj ∂²f/∂xi∂xj = (1/2) hᵀ H h
Q will be positive for all h if and only if H is positive definite at X = X*, i.e. the sufficient condition for X* to be a relative minimum is that the Hessian matrix evaluated at the same point is positive definite, which completes the proof for the minimization case. In a similar manner, it can be proved that the Hessian matrix will be negative definite if X* is a point of relative maximum.
A matrix A will be positive definite if all its eigenvalues are positive, i.e. all values of λ that satisfy the equation
|A - λI| = 0
should be positive. Similarly, the matrix A will be negative definite if all its eigenvalues are negative. When some eigenvalues are positive and some are negative, the matrix A is neither positive definite nor negative definite.
When all eigenvalues are negative for all possible values of X, then X* is a global maximum, and when all eigenvalues are positive for all possible values of X, then X* is a global minimum.
If some of the eigenvalues of the Hessian at X* are positive and some negative, or if some are
zero, the stationary point, X*, is neither a local maximum nor a local minimum.
Example
Analyze the function

f(X) = -x1² - x2² - x3² + 2x1x2 + 2x1x3 + 4x1 - 5x3 + 2

and classify the stationary points as maxima, minima and points of inflection.
Solution
∇x f = [ ∂f/∂x1 (X*);  ∂f/∂x2 (X*);  ∂f/∂x3 (X*) ]
     = [ -2x1 + 2x2 + 2x3 + 4;  2x1 - 2x2;  2x1 - 2x3 - 5 ] = [0; 0; 0]
Solving these simultaneous equations we get X* = [1/2, 1/2, -2]
∂²f/∂x1² = ∂²f/∂x2² = ∂²f/∂x3² = -2
∂²f/∂x1∂x2 = ∂²f/∂x2∂x1 = 2
∂²f/∂x2∂x3 = ∂²f/∂x3∂x2 = 0
∂²f/∂x3∂x1 = ∂²f/∂x1∂x3 = 2
The Hessian of f(X) is
H = [∂²f/∂xi∂xj]
H = [[-2, 2, 2], [2, -2, 0], [2, 0, -2]]
|λI - H| = det([[λ + 2, -2, -2], [-2, λ + 2, 0], [-2, 0, λ + 2]]) = 0
or (λ + 2)³ - 4(λ + 2) - 4(λ + 2) = 0
(λ + 2)[(λ + 2)² - 8] = 0
so λ1 = -2, λ2 = -2 + 2√2 ≈ 0.83, λ3 = -2 - 2√2 ≈ -4.83
Since one eigenvalue is positive and the other two are negative, the Hessian is indefinite, and the stationary point X* = [1/2, 1/2, -2] is neither a maximum nor a minimum; it is a saddle point.
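The eigenvalue computation can be checked numerically; a sketch assuming the Hessian H = [[-2, 2, 2], [2, -2, 0], [2, 0, -2]] derived in the example:

```python
import numpy as np

# Hessian of f(X); it is constant, since f is quadratic
H = np.array([[-2.0,  2.0,  2.0],
              [ 2.0, -2.0,  0.0],
              [ 2.0,  0.0, -2.0]])

ev = np.linalg.eigvalsh(H)  # eigenvalues in ascending order
print(ev)
# approximately [-4.83, -2.0, 0.83], i.e. -2 - 2*sqrt(2), -2, -2 + 2*sqrt(2):
# mixed signs, so the Hessian is indefinite and X* is a saddle point
```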
Module 2 Lecture Notes 4
Optimization of Functions of Multiple Variables subject to Equality Constraints
Introduction
In the previous lecture we studied the unconstrained optimization of functions of multiple variables. This is done with the aid of the gradient vector and the
He