Economic Dispatch and Introduction to Optimisation
TRANSCRIPT
© Daniel Kirschen, The University of Manchester, 2004
Economic Dispatch and Introduction to Optimisation
Daniel Kirschen
Input Output Characteristic
• Running costs
• Input / Output curve
• Fuel vs. electric power
• Fuel consumption measured by its energy content
[Figure: boiler (B), turbine (T) and generator (G) block diagram, with fuel as input and electric power as output; input/output curve of fuel input (J/h) versus electric power output (MW) between Pmin and Pmax.]
Cost Curve
• Multiply fuel input by fuel cost
• No-load cost
• Minimum generation
• Maximum generation
[Figure: cost curve, cost (£/h) versus output (MW) between Pmin and Pmax; the intercept at zero output is the no-load cost.]
Incremental Cost Curve
• Incremental cost curve: ∆(Fuel Cost)/∆(Power) versus power, i.e. ∆F/∆P
• Derivative of the cost curve
• In £/MWh
• Cost of the next MWh
[Figure: cost curve (£/h versus MW) and the corresponding incremental cost curve (£/MWh versus MW).]
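To make the cost and incremental cost curves concrete, here is a small illustrative sketch (not part of the original slides) for one hypothetical unit; the coefficients and limits are made-up values chosen only to show the shape of the two curves.

```python
# Sketch with assumed (made-up) coefficients for one thermal unit.
# Cost curve: C(P) = a + b*P + c*P**2  [£/h], valid for Pmin <= P <= Pmax.
# Incremental cost: dC/dP = b + 2*c*P  [£/MWh].

a, b, c = 500.0, 8.0, 0.02      # no-load cost, linear and quadratic terms (assumed)
p_min, p_max = 100.0, 500.0     # MW limits (assumed)

def cost(p):
    """Hourly running cost in £/h for output p (MW)."""
    return a + b * p + c * p ** 2

def incremental_cost(p):
    """Cost of the next MWh (£/MWh) at output p."""
    return b + 2.0 * c * p

for p in (p_min, 300.0, p_max):
    print(f"P = {p:5.0f} MW   cost = {cost(p):8.0f} £/h   "
          f"incremental cost = {incremental_cost(p):5.2f} £/MWh")
```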
Piecewise Linear Approximations
• Piecewise-linear cost curve
• Piecewise-constant incremental cost curve
[Figure: piecewise-linear fuel cost curve (£/h versus MW) with breakpoints, and the corresponding piecewise-constant incremental cost curve (£/MWh versus MW).]
Problem Formulation
• Objective function: minimise C = CA(PA) + CB(PB) + CC(PC)
• Constraints
  – Load / generation balance: L = PA + PB + PC
  – Unit constraints: PAmin ≤ PA ≤ PAmax,  PBmin ≤ PB ≤ PBmax,  PCmin ≤ PC ≤ PCmax
[Figure: three generating units A, B and C supplying a load L.]
Introduction to Optimisation
“An Engineer is someone who can do for one dollar what any fool can do for two”
Objective
• Achieving the best possible design or operating conditions requires a way of measuring the goodness of this result
• We will call this measure our objective F
• Examples:
  – Minimise cost of building a transformer
  – Minimise cost of supplying power
  – Minimise losses in a power system
  – Maximise profit from a bidding strategy
Decision Variables
• The value of the objective is a function of some decision variables: F = f(x1, x2, x3, …, xn)
• Examples of decision variables:
  – Dimensions of the transformer
  – Output of generating units, position of taps
  – Parameters of bids for selling electrical energy
Optimisation Problem
• What value should the decision variables take so that F = f(x1, x2, x3, …, xn) is minimum or maximum?
Example: function of one variable
[Figure: f(x) versus x, showing a maximum f(x*) at x = x*.]
f(x) is maximum for x = x*
Minimisation and Maximisation
[Figure: f(x) and -f(x) versus x; the maximum of f(x) at x* coincides with the minimum of -f(x).]
If x = x* maximises f(x) then it minimises -f(x)
Minimisation and Maximisation
• Maximising f(x) is thus the same thing as minimising g(x) = -f(x)
• Minimisation and maximisation problems are thus interchangeable
• Depending on the problem, the optimum is either a maximum or a minimum
Necessary Condition for Optimality
[Figure: f(x) versus x with a maximum at x = x*; the slope df/dx is positive to the left of x* and negative to the right.]
If x = x* maximises f(x) then:
  f(x) < f(x*) for x < x*  ⇒  df/dx > 0 for x < x*
  f(x) < f(x*) for x > x*  ⇒  df/dx < 0 for x > x*
Necessary Condition for Optimality
[Figure: f(x) versus x with a maximum at x = x*, where the tangent is horizontal.]
If x = x* maximises f(x), then df/dx = 0 for x = x*
Example
[Figure: a function f(x) with several stationary points.]
For what values of x is df/dx = 0?
In other words, for what values of x is the necessary condition for optimality satisfied?
Example
[Figure: the same function f(x) with its stationary points labelled A, B, C and D.]
• A and D are maxima
• B is a minimum
• C is an inflexion point
How can we distinguish minima and maxima?
[Figure: f(x) with stationary points A, B, C and D.]
For x = A and x = D, we have d²f/dx² < 0
The objective function is concave around a maximum
How can we distinguish minima and maxima?
[Figure: f(x) with stationary points A, B, C and D.]
For x = B, we have d²f/dx² > 0
The objective function is convex around a minimum
How can we distinguish minima and maxima?
[Figure: f(x) with stationary points A, B, C and D.]
For x = C, we have d²f/dx² = 0
The objective function is flat around an inflexion point
Necessary and Sufficient Conditions of Optimality
• Necessary condition: df/dx = 0
• Sufficient condition:
  – For a maximum: d²f/dx² < 0
  – For a minimum: d²f/dx² > 0
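As a quick numerical illustration of these conditions (my own sketch, with an assumed example function), the first derivative can be checked to be close to zero and the sign of the second derivative examined at a candidate optimum using finite differences:

```python
# Check df/dx = 0 and the sign of d2f/dx2 at a candidate optimum,
# using central finite differences. f(x) = -(x - 2)**2 + 3 is an assumed example
# with a maximum at x = 2.

def f(x):
    return -(x - 2.0) ** 2 + 3.0

def d1(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

x_star = 2.0
print("df/dx   =", round(d1(f, x_star), 6))   # ~0: necessary condition satisfied
print("d2f/dx2 =", round(d2(f, x_star), 6))   # < 0: sufficient condition for a maximum
```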
Isn’t all this obvious?
• Can’t we tell all this by looking at the objective function?
  – Yes, for a simple, one-dimensional case when we know the shape of the objective function
  – For complex, multi-dimensional cases (i.e. with many decision variables) we can’t visualise the shape of the objective function
  – We must then rely on mathematical techniques
Feasible Set
• The values that the decision variables can take are usually limited
• Examples:
  – Physical dimensions of a transformer must be positive
  – Active power output of a generator may be limited to a certain range (e.g. 200 MW to 500 MW)
  – Reactive power output of a generator may be limited to a certain range (e.g. -100 MVAr to 150 MVAr)
Feasible Set
[Figure: f(x) versus x with the feasible set limited to xMIN ≤ x ≤ xMAX; the interior points A and D are marked.]
Feasible Set
The values of the objective function outside the feasible set do not matter
[Figure: f(x) versus x over the feasible set xMIN ≤ x ≤ xMAX, with interior points A, B, D and E marked.]
Interior and Boundary Solutions
• A and D are interior maxima
• B and E are interior minima
• xMIN is a boundary minimum
• xMAX is a boundary maximum
Boundary solutions do not satisfy the optimality conditions!
Two-Dimensional Case
[Figure: surface f(x1, x2) with a minimum at (x1*, x2*).]
f(x1, x2) is minimum for x1 = x1*, x2 = x2*
Necessary Conditions for Optimality
[Figure: surface f(x1, x2) with a minimum at (x1*, x2*).]
∂f(x1, x2)/∂x1 = 0 at (x1*, x2*)
∂f(x1, x2)/∂x2 = 0 at (x1*, x2*)
Multi-Dimensional Case
At a maximum or minimum value of f(x1, x2, x3, …, xn) we must have:
  ∂f/∂x1 = 0
  ∂f/∂x2 = 0
  ⋮
  ∂f/∂xn = 0
A point where these conditions are satisfied is called a stationary point
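A minimal sketch (mine, assuming numpy is available) of this stationarity test: compute a finite-difference gradient and check that it vanishes at a candidate stationary point. The function used here is the one that appears in Example 1 further below.

```python
import numpy as np

# Finite-difference gradient of f(x1, x2) = x1**2 + 4*x2**2 - 2*x1*x2.
# The gradient should vanish at the stationary point (0, 0) and not at an
# arbitrary point.

def f(x):
    x1, x2 = x
    return x1 ** 2 + 4 * x2 ** 2 - 2 * x1 * x2

def gradient(f, x, h=1e-6):
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

print(gradient(f, [0.0, 0.0]))   # ~[0, 0]: stationary point
print(gradient(f, [1.0, 1.0]))   # non-zero vector: not a stationary point
```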
Sufficient Conditions for Optimality
[Figure: surfaces f(x1, x2) showing a minimum and a maximum.]
Sufficient Conditions for Optimality
[Figure: surface f(x1, x2) with a saddle point.]
Sufficient Conditions for Optimality
Calculate the Hessian matrix at the stationary point:
  | ∂²f/∂x1²      ∂²f/∂x1∂x2   …   ∂²f/∂x1∂xn |
  | ∂²f/∂x2∂x1    ∂²f/∂x2²     …   ∂²f/∂x2∂xn |
  |     ⋮              ⋮        ⋱       ⋮      |
  | ∂²f/∂xn∂x1    ∂²f/∂xn∂x2   …   ∂²f/∂xn²   |
Sufficient Conditions for Optimality
• Calculate the eigenvalues of the Hessian matrix at the stationary point
• If all the eigenvalues are greater than or equal to zero:
  – The matrix is positive semi-definite
  – The stationary point is a minimum
• If all the eigenvalues are less than or equal to zero:
  – The matrix is negative semi-definite
  – The stationary point is a maximum
• If some of the eigenvalues are positive and others are negative:
  – The stationary point is a saddle point
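A short sketch (my own, assuming numpy) of this eigenvalue test; the three Hessian matrices are made-up examples, one for each case:

```python
import numpy as np

def classify_stationary_point(hessian):
    """Classify a stationary point from the eigenvalues of its Hessian."""
    eig = np.linalg.eigvalsh(np.asarray(hessian, dtype=float))
    if np.all(eig >= 0):
        return eig, "positive semi-definite -> minimum"
    if np.all(eig <= 0):
        return eig, "negative semi-definite -> maximum"
    return eig, "indefinite -> saddle point"

# Assumed example Hessians:
for h in ([[2.0, 0.0], [0.0, 6.0]],     # both eigenvalues > 0
          [[-4.0, 1.0], [1.0, -3.0]],   # both eigenvalues < 0
          [[1.0, 3.0], [3.0, 1.0]]):    # one positive, one negative
    eig, verdict = classify_stationary_point(h)
    print(np.round(eig, 3), verdict)
```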
Contours
[Figure: surface f(x1, x2) cut at levels F1 and F2, and the corresponding contours in the (x1, x2) plane.]
Contours
[Figure: nested contours around a minimum or maximum in the (x1, x2) plane.]
A contour is the locus of all the points that give the same value to the objective function
Example 1
Minimise C = x1² + 4x2² − 2x1x2
Necessary conditions for optimality:
  ∂C/∂x1 = 2x1 − 2x2 = 0
  ∂C/∂x2 = −2x1 + 8x2 = 0
⇒ x1 = 0, x2 = 0 is a stationary point
Example 1
Sufficient conditions for optimality: the Hessian matrix
  | ∂²C/∂x1²      ∂²C/∂x1∂x2 |   |  2  −2 |
  | ∂²C/∂x2∂x1    ∂²C/∂x2²   | = | −2   8 |
must be positive definite (i.e. all eigenvalues must be positive)
  | λ−2    2  |
  |  2    λ−8 | = 0  ⇒  λ² − 10λ + 12 = 0  ⇒  λ = (10 ± √52)/2 > 0
The stationary point is a minimum
Example 1
[Figure: elliptical contours C = 1, C = 4 and C = 9 around the minimum at the origin, where C = 0.]
Example 2
Minimise C = −x1² + 3x2² + 2x1x2
Necessary conditions for optimality:
  ∂C/∂x1 = −2x1 + 2x2 = 0
  ∂C/∂x2 = 2x1 + 6x2 = 0
⇒ x1 = 0, x2 = 0 is a stationary point
Example 2
Sufficient conditions for optimality: the Hessian matrix
  | ∂²C/∂x1²      ∂²C/∂x1∂x2 |   | −2   2 |
  | ∂²C/∂x2∂x1    ∂²C/∂x2²   | = |  2   6 |
  | λ+2   −2  |
  | −2    λ−6 | = 0  ⇒  λ² − 4λ − 16 = 0
⇒ λ = (4 + √80)/2 > 0  or  λ = (4 − √80)/2 < 0
The stationary point is a saddle point
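Applying the same eigenvalue check (assuming numpy) to the Hessians of Examples 1 and 2 confirms the two conclusions, and in particular that the characteristic equation of Example 2 is λ² − 4λ − 16 = 0 with roots (4 ± √80)/2:

```python
import numpy as np

# Hessians of Example 1 and Example 2 (taken from the slides above).
h1 = np.array([[2.0, -2.0], [-2.0, 8.0]])   # Example 1
h2 = np.array([[-2.0, 2.0], [2.0, 6.0]])    # Example 2

print(np.linalg.eigvalsh(h1))   # both positive   -> minimum,      (10 +/- sqrt(52)) / 2
print(np.linalg.eigvalsh(h2))   # opposite signs  -> saddle point, (4 +/- sqrt(80)) / 2
```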
Example 2
[Figure: saddle-shaped contours around the origin: C = 0 along two lines through the origin, with contours C = 1, 4, 9 in two opposite sectors and C = −1, −4, −9 in the other two.]
Optimisation with Constraints
Optimisation with Equality Constraints
• There are usually restrictions on the values that the decision variables can take
Minimise f(x1, x2, …, xn)   (objective function)
subject to:
  ω1(x1, x2, …, xn) = 0
  ⋮                          (equality constraints)
  ωm(x1, x2, …, xn) = 0
Number of Constraints
• N decision variables
• M equality constraints
• If M > N, the problem is over-constrained
  – There is usually no solution
• If M = N, the problem is determined
  – There may be a solution
• If M < N, the problem is under-constrained
  – There is usually room for optimisation
Example 1
Minimise f(x1, x2) = 0.25x1² + x2²
Subject to ω(x1, x2) ≡ 5 − x1 − x2 = 0
[Figure: contours of f(x1, x2) = 0.25x1² + x2² and the constraint line 5 − x1 − x2 = 0 in the (x1, x2) plane; the constrained minimum is where a contour is tangent to the line.]
Example 2: Economic Dispatch
[Figure: two generating units G1 (output x1) and G2 (output x2) supplying a load L.]
C1 = a1 + b1x1²   (cost of running unit 1)
C2 = a2 + b2x2²   (cost of running unit 2)
C = C1 + C2 = a1 + a2 + b1x1² + b2x2²   (total cost)
Optimisation problem:
Minimise C = a1 + a2 + b1x1² + b2x2²
Subject to: x1 + x2 = L
Solution by Substitution
Minimise C = a1 + a2 + b1x1² + b2x2²
Subject to: x1 + x2 = L
⇒ x2 = L − x1
⇒ C = a1 + a2 + b1x1² + b2(L − x1)²   (unconstrained minimisation)
dC/dx1 = 2b1x1 − 2b2(L − x1) = 0
⇒ x1 = b2L / (b1 + b2)   ⇒ x2 = b1L / (b1 + b2)
d²C/dx1² = 2b1 + 2b2 > 0  ⇒  minimum
Solution by Substitution
• Difficult
• Sometimes impossible when constraints are non-linear
• Provides little or no insight into the solution
⇒ Solution using Lagrange multipliers
Gradient
�
Consider a function f ( x 1 , x 2 ,K,x n )
The gradient of f is the vector ∇f =
∂f∂x1
∂f∂x 2
M∂f
∂x n
Properties of the Gradient
• Each component of the gradient vector indicates the rate of change of the function in that direction
• The gradient indicates the direction in which a function of several variables increases most rapidly
• The magnitude and direction of the gradient usually depend on the point considered
• At each point, the gradient is perpendicular to the contour of the function
Example 3
f(x, y) = ax² + by²
∇f = [∂f/∂x, ∂f/∂y]ᵀ = [2ax, 2by]ᵀ
[Figure: elliptical contours of f in the (x, y) plane; the gradient is perpendicular to the contours.]
Example 4
f(x, y) = ax + by
∇f = [∂f/∂x, ∂f/∂y]ᵀ = [a, b]ᵀ
[Figure: straight, parallel contours f = f1, f = f2 and f = f3 in the (x, y) plane; the gradient ∇f is the same at every point and perpendicular to the contours.]
Lagrange Multipliers
Minimise f(x1, x2) = 0.25x1² + x2² subject to ω(x1, x2) ≡ 5 − x1 − x2 = 0
[Figure: contours f(x1, x2) = 5 and f(x1, x2) = 6 together with the constraint line ω(x1, x2) = 0.]
Lagrange Multipliers
∇f = [∂f/∂x1, ∂f/∂x2]ᵀ
[Figure: the gradient ∇f drawn at several points; it is perpendicular to the contours f(x1, x2) = 5 and f(x1, x2) = 6.]
Lagrange Multipliers
∇ω = [∂ω/∂x1, ∂ω/∂x2]ᵀ
[Figure: the gradient ∇ω drawn at several points along the constraint ω(x1, x2) = 0; it is perpendicular to the constraint line.]
Lagrange Multipliers
• The solution must be on the constraint
• To reduce the value of f, we must move in a direction opposite to the gradient
[Figure: contours f(x1, x2) = 5 and f(x1, x2) = 6, the constraint ω(x1, x2) = 0, and the gradient ∇f at points along the constraint.]
Lagrange Multipliers
• We stop when the gradient of the function is perpendicular to the constraint because moving further would increase the value of the function
At the optimum, the gradient of the function is parallel to the gradient of the constraint
[Figure: at the optimum, ∇f and ∇ω are parallel and the contour of f is tangent to the constraint.]
Lagrange Multipliers
At the optimum, we must have: ∇f ∥ ∇ω
Which can be expressed as: ∇f + λ∇ω = 0
λ is called the Lagrange multiplier
The constraint must also be satisfied: ω(x1, x2) = 0
In terms of the co-ordinates:
  ∂f/∂x1 + λ ∂ω/∂x1 = 0
  ∂f/∂x2 + λ ∂ω/∂x2 = 0
Lagrangian Function
To simplify the writing of the conditions for optimality, it is useful to define the Lagrangian function:
  ℓ(x1, x2, λ) = f(x1, x2) + λ ω(x1, x2)
The necessary conditions for optimality are then given by the partial derivatives of the Lagrangian:
  ∂ℓ(x1, x2, λ)/∂x1 = ∂f/∂x1 + λ ∂ω/∂x1 = 0
  ∂ℓ(x1, x2, λ)/∂x2 = ∂f/∂x2 + λ ∂ω/∂x2 = 0
  ∂ℓ(x1, x2, λ)/∂λ = ω(x1, x2) = 0
Example
Minimise f(x1, x2) = 0.25x1² + x2² subject to ω(x1, x2) ≡ 5 − x1 − x2 = 0
  ℓ(x1, x2, λ) = 0.25x1² + x2² + λ(5 − x1 − x2)
  ∂ℓ(x1, x2, λ)/∂x1 ≡ 0.5x1 − λ = 0
  ∂ℓ(x1, x2, λ)/∂x2 ≡ 2x2 − λ = 0
  ∂ℓ(x1, x2, λ)/∂λ ≡ 5 − x1 − x2 = 0
Example
∂ℓ(x1, x2, λ)/∂x1 ≡ 0.5x1 − λ = 0  ⇒  x1 = 2λ
∂ℓ(x1, x2, λ)/∂x2 ≡ 2x2 − λ = 0  ⇒  x2 = λ/2
∂ℓ(x1, x2, λ)/∂λ ≡ 5 − x1 − x2 = 0  ⇒  5 − 2λ − λ/2 = 0
⇒ λ = 2, x1 = 4, x2 = 1
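Since the three optimality conditions are linear in (x1, x2, λ), they can also be solved directly; a small check, assuming numpy:

```python
import numpy as np

# Conditions: 0.5*x1 - lam = 0;  2*x2 - lam = 0;  x1 + x2 = 5
A = np.array([[0.5, 0.0, -1.0],
              [0.0, 2.0, -1.0],
              [1.0, 1.0,  0.0]])
b = np.array([0.0, 0.0, 5.0])

x1, x2, lam = np.linalg.solve(A, b)
print(x1, x2, lam)   # 4.0  1.0  2.0
```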
Example
Minimise f(x1, x2) = 0.25x1² + x2²
Subject to ω(x1, x2) ≡ 5 − x1 − x2 = 0
[Figure: the contour f(x1, x2) = 5 is tangent to the constraint line 5 − x1 − x2 = 0 at the minimum (x1, x2) = (4, 1).]
Important Note!
If the constraint is of the form: ax1 + bx2 = L
It must be included in the Lagrangian as follows: ℓ = f(x1, …, xn) + λ(L − ax1 − bx2)
And not as follows: ℓ = f(x1, …, xn) + λ(ax1 + bx2)
Application to Economic Dispatch
[Figure: two generating units G1 (output x1) and G2 (output x2) supplying a load L.]
minimise f(x1, x2) = C1(x1) + C2(x2)
s.t. ω(x1, x2) ≡ L − x1 − x2 = 0
  ℓ(x1, x2, λ) = C1(x1) + C2(x2) + λ(L − x1 − x2)
  ∂ℓ/∂x1 ≡ dC1/dx1 − λ = 0
  ∂ℓ/∂x2 ≡ dC2/dx2 − λ = 0
  ∂ℓ/∂λ ≡ L − x1 − x2 = 0
⇒ dC1/dx1 = dC2/dx2 = λ   (equal incremental cost solution)
Incremental Cost
[Figure: cost curves C1(x1) and C2(x2), and the corresponding incremental cost curves dC1/dx1 and dC2/dx2.]
Interpretation of this Solution
[Figure: iterative interpretation of the solution: a value of λ applied to the incremental cost curves dC1/dx1 and dC2/dx2 gives x1 and x2, and the mismatch L − x1 − x2 is then checked.]
If L − x1 − x2 < 0, reduce λ; if L − x1 − x2 > 0, increase λ
Physical Interpretation
[Figure: cost curve C(x) and its slope dC/dx; a small change ∆x in output produces a change ∆C in cost.]
dC/dx = lim(∆x→0) ∆C/∆x
For ∆x sufficiently small: ∆C ≈ (dC/dx) · ∆x
If ∆x = 1 MW: ∆C ≈ dC/dx
The incremental cost is the cost of one additional MW for one hour. This cost depends on the output of the generator.
Physical Interpretation
dC1/dx1: cost of one more MW from unit 1
dC2/dx2: cost of one more MW from unit 2
Suppose that dC1/dx1 > dC2/dx2
Decrease output of unit 1 by 1 MW  ⇒  decrease in cost = dC1/dx1
Increase output of unit 2 by 1 MW  ⇒  increase in cost = dC2/dx2
Net change in cost = dC2/dx2 − dC1/dx1 < 0
Physical Interpretation
It pays to increase the output of unit 2 and decrease the output of unit 1 until we have:
  dC1/dx1 = dC2/dx2 = λ
The Lagrange multiplier λ is thus the cost of one more MW at the optimal solution.
This is a very important result with many applications in Economics.
Generalisation
Minimise f(x1, x2, …, xn)
subject to:
  ω1(x1, x2, …, xn) = 0
  ⋮
  ωm(x1, x2, …, xn) = 0
Lagrangian:
  ℓ = f(x1, …, xn) + λ1 ω1(x1, …, xn) + ⋯ + λm ωm(x1, …, xn)
• One Lagrange multiplier for each constraint
• n + m variables: x1, …, xn and λ1, …, λm
Optimality Conditions
ℓ = f(x1, …, xn) + λ1 ω1(x1, …, xn) + ⋯ + λm ωm(x1, …, xn)
  ∂ℓ/∂x1 = ∂f/∂x1 + λ1 ∂ω1/∂x1 + ⋯ + λm ∂ωm/∂x1 = 0
  ⋮                                                         (n equations)
  ∂ℓ/∂xn = ∂f/∂xn + λ1 ∂ω1/∂xn + ⋯ + λm ∂ωm/∂xn = 0
  ∂ℓ/∂λ1 = ω1(x1, …, xn) = 0
  ⋮                                                         (m equations)
  ∂ℓ/∂λm = ωm(x1, …, xn) = 0
n + m equations in n + m variables
Optimisation with Inequality Constraints
Minimise f(x1, x2, …, xn)   (objective function)
subject to:
  ω1(x1, x2, …, xn) = 0
  ⋮                          (equality constraints)
  ωm(x1, x2, …, xn) = 0
and:
  g1(x1, x2, …, xn) ≤ 0
  ⋮                          (inequality constraints)
  gp(x1, x2, …, xn) ≤ 0
Example: Economic Dispatch
[Figure: two generating units G1 (output x1) and G2 (output x2) supplying a load L.]
Minimise C = a1 + b1x1² + a2 + b2x2²
Subject to:
  x1 + x2 = L                (equality constraint)
  x1 − x1max ≤ 0
  x1min − x1 ≤ 0
  x2 − x2max ≤ 0
  x2min − x2 ≤ 0             (inequality constraints, i.e. x1min ≤ x1 ≤ x1max and x2min ≤ x2 ≤ x2max)
Example: Economic Dispatch
[Figure: in the (x1, x2) plane, the cost C = a1 + b1x1² + a2 + b2x2² defines a family of ellipses; the limits x1min, x1max, x2min and x2max define a box crossed by the line x1 + x2 = L. The ellipse tangent to the equality constraint at point A gives the solution, and the inequality constraints are satisfied at A.]
Example: Economic Dispatch
What is the solution for a larger load?
[Figure: for the larger load, the line x1 + x2 = L' is tangent to an ellipse at point B, but the inequality constraints are NOT satisfied at B.]
Example: Economic Dispatch
[Figure: same diagram with the line x1 + x2 = L'; point C lies on this line at the boundary of the feasible region.]
C is the solution because it is the point on the equality constraint that satisfies the inequality constraints at minimum cost
Binding Inequality Constraints
• A binding inequality constraint is an inequality constraint that is satisfied exactly
• Example:
  – If we must have x1 ≤ x1max
  – And at the solution we have x1 = x1max
  – Then the constraint x1 ≤ x1max is said to be binding
• ALL of the inequality constraints must be satisfied
• Only a FEW will be binding (or active) at any given time
• But we don’t know ahead of time which inequality constraints will be binding!
• All equality constraints are always binding
Solution using Lagrange Multipliers
Minimise f(x1, x2, …, xn)
subject to:
  ωi(x1, x2, …, xn) = 0,   i = 1, …, m
  gj(x1, x2, …, xn) ≤ 0,   j = 1, …, p
Lagrangian function:
  ℓ(x1, …, xn, λ1, …, λm, µ1, …, µp) = f(x1, …, xn) + Σ(i=1..m) λi ωi(x1, …, xn) + Σ(j=1..p) µj gj(x1, …, xn)
Optimality Conditions
(known as the Karush-Kuhn-Tucker (KKT) conditions)
  ℓ(x, λ, µ) = f(x) + Σ(i=1..m) λi ωi(x) + Σ(j=1..p) µj gj(x)
  ∂ℓ(x, λ, µ)/∂xi ≡ ∂f(x)/∂xi + Σ(k=1..m) λk ∂ωk(x)/∂xi + Σ(j=1..p) µj ∂gj(x)/∂xi = 0,   i = 1, …, n
  ∂ℓ(x, λ, µ)/∂λk ≡ ωk(x) = 0,   k = 1, …, m
  gj(x) ≤ 0,   j = 1, …, p
  µj gj(x) = 0,   j = 1, …, p
  µj ≥ 0,   j = 1, …, p
Complementary Slackness Conditions
µj gj(x) = 0 and µj ≥ 0
Two possibilities for each constraint j:
  Either µj = 0; the constraint gj(x) ≤ 0 is then non-binding  ⇒  gj(x) < 0
  Or gj(x) = 0; the constraint gj(x) ≤ 0 is then binding  ⇒  µj > 0
Warning!
• Difficulty with the complementary slackness conditions
  – They tell us that an inequality constraint is either binding or non-binding
  – They DON’T tell us which constraints are binding and non-binding
  – The binding constraints have to be identified through trial and error
Example
Minimise f(x1, x2) = 0.25x1² + x2²
Subject to:
  ω(x1, x2) ≡ 5 − x1 − x2 = 0
  g(x1, x2) ≡ x1 + 0.2x2 − 3 ≤ 0
Example
[Figure: contours of f(x1, x2) = 0.25x1² + x2² with the equality constraint 5 − x1 − x2 = 0 and the inequality constraint x1 + 0.2x2 − 3 ≤ 0 in the (x1, x2) plane.]
Example
ℓ(x1, x2, λ, µ) = f(x1, x2) + λ ω(x1, x2) + µ g(x1, x2)
                = 0.25x1² + x2² + λ(5 − x1 − x2) + µ(x1 + 0.2x2 − 3)
  ∂ℓ/∂x1 ≡ 0.5x1 − λ + µ = 0
  ∂ℓ/∂x2 ≡ 2x2 − λ + 0.2µ = 0
  ∂ℓ/∂λ ≡ 5 − x1 − x2 = 0
  ∂ℓ/∂µ ≡ x1 + 0.2x2 − 3 ≤ 0
  µ g(x) ≡ µ(x1 + 0.2x2 − 3) = 0 and µ ≥ 0
Example
The KKT conditions do not tell us if the inequality constraint is binding
We must use a trial and error approach
Trial 1: Assume the inequality constraint is not binding ⇒ µ = 0
From the solution of the example without the inequality constraint, we know that the solution is then: x1 = 4; x2 = 1
But this means that: x1 + 0.2x2 − 3 = 1.2 > 0
This solution is thus not acceptable
Example
Trial 2: Assume that the inequality constraint is binding
  ∂ℓ/∂x1 ≡ 0.5x1 − λ + µ = 0
  ∂ℓ/∂x2 ≡ 2x2 − λ + 0.2µ = 0
  ∂ℓ/∂λ ≡ 5 − x1 − x2 = 0
  ∂ℓ/∂µ ≡ x1 + 0.2x2 − 3 = 0
⇒ x1 = 2.5, x2 = 2.5, λ = 5.9375, µ = 4.6875
All KKT conditions are satisfied. This solution is acceptable
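With the inequality constraint assumed binding, the four conditions above are linear in (x1, x2, λ, µ) and can be solved directly; a small verification sketch, assuming numpy:

```python
import numpy as np

# Trial 2: both constraints binding.
# 0.5*x1        - lam +     mu = 0
#        2*x2   - lam + 0.2*mu = 0
#   x1 +   x2                  = 5
#   x1 + 0.2*x2                = 3
A = np.array([[0.5, 0.0, -1.0, 1.0],
              [0.0, 2.0, -1.0, 0.2],
              [1.0, 1.0,  0.0, 0.0],
              [1.0, 0.2,  0.0, 0.0]])
b = np.array([0.0, 0.0, 5.0, 3.0])

x1, x2, lam, mu = np.linalg.solve(A, b)
print(x1, x2, lam, mu)   # 2.5  2.5  5.9375  4.6875  (mu >= 0, so the trial is accepted)
```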
Example: Graphical Solution
[Figure: contours of f(x1, x2) = 0.25x1² + x2² with the constraints 5 − x1 − x2 = 0 and x1 + 0.2x2 − 3 ≤ 0, showing the solution of the problem without constraints, the solution of the problem without the inequality constraint, and the solution of the problem with the inequality constraint.]
Application to Economic Dispatch
[Figure: two generating units G1 (output x1) and G2 (output x2) supplying a load L.]
minimise f(x1, x2) = C1(x1) + C2(x2)
s.t. ω(x1, x2) ≡ L − x1 − x2 = 0
  x1min ≤ x1 ≤ x1max:  g1(x1, x2) ≡ x1 − x1max ≤ 0,  g2(x1, x2) ≡ x1min − x1 ≤ 0
  x2min ≤ x2 ≤ x2max:  g3(x1, x2) ≡ x2 − x2max ≤ 0,  g4(x1, x2) ≡ x2min − x2 ≤ 0
Application to Economic Dispatch
ℓ(x1, x2, λ, µ1, µ2, µ3, µ4) = C1(x1) + C2(x2) + λ(L − x1 − x2)
                               + µ1(x1 − x1max) + µ2(x1min − x1)
                               + µ3(x2 − x2max) + µ4(x2min − x2)
KKT conditions:
  ∂ℓ/∂x1 ≡ dC1/dx1 − λ + µ1 − µ2 = 0
  ∂ℓ/∂x2 ≡ dC2/dx2 − λ + µ3 − µ4 = 0
  ∂ℓ/∂λ ≡ L − x1 − x2 = 0
Application to Economic Dispatch
KKT conditions (continued):
  ∂ℓ/∂µ1 ≡ x1 − x1max ≤ 0,   µ1(x1 − x1max) = 0,   µ1 ≥ 0
  ∂ℓ/∂µ2 ≡ x1min − x1 ≤ 0,   µ2(x1min − x1) = 0,   µ2 ≥ 0
  ∂ℓ/∂µ3 ≡ x2 − x2max ≤ 0,   µ3(x2 − x2max) = 0,   µ3 ≥ 0
  ∂ℓ/∂µ4 ≡ x2min − x2 ≤ 0,   µ4(x2min − x2) = 0,   µ4 ≥ 0
Solving the KKT Equations
Trial #1: No generator is at a limit
No inequality constraint is binding ⇒ all µ’s are equal to zero
  ∂ℓ/∂x1 ≡ dC1/dx1 − λ = 0
  ∂ℓ/∂x2 ≡ dC2/dx2 − λ = 0
  ∂ℓ/∂λ ≡ L − x1 − x2 = 0
⇒ dC1/dx1 = dC2/dx2 = λ
All generators operate at the same incremental cost
Solving the KKT Equations
Trial #2: Generator 1 is at its upper limit; the other limits are not binding
x1 − x1max = 0  ⇒  µ1 ≥ 0;  µ2 = µ3 = µ4 = 0
  ∂ℓ/∂x1 ≡ dC1/dx1 − λ + µ1 = 0
  ∂ℓ/∂x2 ≡ dC2/dx2 − λ = 0
⇒ dC1/dx1 = λ − µ1 ≤ λ,   dC2/dx2 = λ
All generators do NOT operate at the same incremental cost!
The incremental cost of unit 1 is lower
If that was possible, more power would be produced by unit 1
Solving the KKT Equations
Trial #3: Generator 1 is at its lower limit; the other limits are not binding
x1min − x1 = 0  ⇒  µ2 ≥ 0;  µ1 = µ3 = µ4 = 0
  ∂ℓ/∂x1 ≡ dC1/dx1 − λ − µ2 = 0
  ∂ℓ/∂x2 ≡ dC2/dx2 − λ = 0
⇒ dC1/dx1 = λ + µ2 ≥ λ,   dC2/dx2 = λ
Again, all generators do NOT operate at the same incremental cost!
The incremental cost of unit 1 is higher
If that was possible, less power would be produced by unit 1
Physical Interpretation of Lagrange Multipliers
Minimise C(x)
subject to: ω(x) = L  ⇔  L − ω(x) = 0
and: g(x) ≥ K  ⇔  K − g(x) ≤ 0
  ℓ(x, λ, µ) = C(x) + λ(L − ω(x)) + µ(K − g(x))
At the optimum (x*, λ*, µ*) the constraint terms are zero, so ℓ(x*, λ*, µ*) = C(x*)
  ∂ℓ/∂L at the optimum = λ*   (marginal cost of the equality constraint)
  ∂ℓ/∂K at the optimum = µ*   (marginal cost of the inequality constraint)
Physical Interpretation of Lagrange Multipliers
• Constraints always increase the cost of a solution
• The Lagrange multipliers give the incremental cost of the binding constraints at the optimum
• They are sometimes called shadow costs
• Non-binding inequality constraints have a zero incremental cost
Practical Economic Dispatch
Equal Incremental Cost Dispatch
[Figure: for a given λ, the incremental cost curves dCA/dPA, dCB/dPB and dCC/dPC give the outputs PA, PB and PC, which are summed to obtain the total generation PA + PB + PC.]
Implementation: Lambda Search Algorithm
1. Choose a starting value for λ
2. Calculate PA, PB, PC such that dCA(PA)/dPA = dCB(PB)/dPB = dCC(PC)/dPC = λ
3. If one of these values exceeds its lower or upper limit, fix it at that limit
4. Calculate PTOTAL = PA + PB + PC
5. If PTOTAL < L, then increase λ
   Else if PTOTAL > L, then decrease λ
   Else if PTOTAL ≈ L, then exit
6. Go to Step 2
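A minimal implementation sketch of this lambda search (my own code, with made-up quadratic cost data and a simple bisection on λ as the update rule):

```python
# Lambda search for three units with assumed quadratic costs
# Ci(P) = ai + bi*P + ci*P**2, so the incremental cost bi + 2*ci*P = lam
# gives P = (lam - bi) / (2*ci), clipped to the unit limits.

units = [  # (b, c, Pmin, Pmax) -- made-up data
    (8.0, 0.010, 100.0, 500.0),
    (7.0, 0.015,  50.0, 400.0),
    (9.0, 0.020,  50.0, 300.0),
]
load = 800.0  # MW

def dispatch(lam):
    out = []
    for b, c, p_min, p_max in units:
        p = (lam - b) / (2.0 * c)              # step 2: equal incremental cost
        out.append(min(max(p, p_min), p_max))  # step 3: enforce limits
    return out

lam_low, lam_high = 0.0, 100.0                 # bracket for the bisection on lambda
for _ in range(60):
    lam = 0.5 * (lam_low + lam_high)
    total = sum(dispatch(lam))                 # step 4
    if total < load:                           # step 5
        lam_low = lam                          # increase lambda
    else:
        lam_high = lam                         # decrease lambda

print("lambda =", round(lam, 3), "dispatch =", [round(p, 1) for p in dispatch(lam)])
```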
Piecewise Linear Cost Curves
[Figure: piecewise-linear cost curves CA(PA) and CB(PB) and the corresponding piecewise-constant (staircase) incremental cost curves dCA/dPA and dCB/dPB, with a value of λ drawn across them.]
Economic Dispatch with Piecewise Linear Cost Curves
[Figure: a sequence of diagrams showing the value of λ rising step by step across the staircase incremental cost curves dCA/dPA and dCB/dPB; as λ rises, each unit in turn moves to the next breakpoint of its piecewise-linear cost curve.]
Economic Dispatch with Piecewise Linear Cost Curves
• All generators except one are at breakpoints of their cost curve
• The marginal generator is between breakpoints to balance the load and generation
• Not well-suited for the lambda-search algorithm
• Very fast table lookup algorithm:
  – Rank all the segments of the piecewise linear cost curves in order of incremental cost
  – First dispatch all the units at their minimum generation
  – Then go down the table until the generation matches the load
Example
[Figure: staircase incremental cost curves dCA/dPA and dCB/dPB. Unit A: minimum 30 MW, breakpoints at 80, 120 and 150 MW with segment incremental costs 0.3, 0.5 and 0.7. Unit B: minimum 20 MW, breakpoints at 70, 110 and 140 MW with segment incremental costs 0.1, 0.6 and 0.8.]

Unit      P segment          P total   Lambda
A&B min   20 + 30 = 50          50       -
B         70 − 20 = 50         100      0.1
A         80 − 30 = 50         150      0.3
A        120 − 80 = 40         190      0.5
B        110 − 70 = 40         230      0.6
A       150 − 120 = 30         260      0.7
B       140 − 110 = 30         290      0.8

If Pload = 210 MW, the optimal economic dispatch is:
PA = 120 MW
PB = 90 MW
Lambda = 0.6
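A sketch of the table-lookup dispatch (my own code) using the segment data of this example:

```python
# Merit-order (table-lookup) dispatch for the example above.
# Each segment: (unit, size in MW, incremental cost).
minimum = {"A": 30.0, "B": 20.0}
segments = sorted([
    ("B", 50.0, 0.1), ("A", 50.0, 0.3), ("A", 40.0, 0.5),
    ("B", 40.0, 0.6), ("A", 30.0, 0.7), ("B", 30.0, 0.8),
], key=lambda s: s[2])                      # rank segments by incremental cost

def table_lookup_dispatch(load):
    output = dict(minimum)                  # first dispatch all units at their minimum
    remaining = load - sum(output.values())
    lam = None
    for unit, size, cost in segments:       # then go down the table
        if remaining <= 0:
            break
        used = min(size, remaining)
        output[unit] += used
        remaining -= used
        lam = cost
    return output, lam

dispatch, lam = table_lookup_dispatch(210.0)
print(dispatch, "lambda =", lam)            # {'A': 120.0, 'B': 90.0} lambda = 0.6
```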
Network Considerations
• Assumed so far that all generators and loads are located on the same bus
• Ignored network effects
  – Losses
  – Transmission constraints
[Figure: generators A and B and a load L connected by a transmission network.]
A more realistic situation...
• Generator A is cheaper, but supplying the load from A causes more losses in the transmission system
• Need to take the losses into consideration when doing the optimisation
[Figure: a cheap generator A and a more expensive generator B supplying a load through a network with losses.]
Economic Dispatch with Transmission Losses
• Lagrangian function (PL denotes the transmission losses):
  ℓ = CA(PA) + CB(PB) + λ(L + PL − PA − PB)
• Conditions for optimality:
  ∂ℓ/∂PA ≡ dCA/dPA − λ(1 − ∂PL/∂PA) = 0
  ∂ℓ/∂PB ≡ dCB/dPB − λ(1 − ∂PL/∂PB) = 0
  ∂ℓ/∂λ ≡ L + PL − PA − PB = 0
⇒ [1/(1 − ∂PL/∂PA)] · dCA/dPA = [1/(1 − ∂PL/∂PB)] · dCB/dPB = λ
Incremental Losses and Penalty Factors
• Incremental generation costs are multiplied by the penalty factors to take losses into account
  ∂PL/∂PA: incremental loss for bus A
  PFA = 1/(1 − ∂PL/∂PA): penalty factor for bus A
  PFA · dCA/dPA = PFB · dCB/dPB = λ
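A sketch (entirely my own, with an assumed quadratic loss model PL = kA·PA² + kB·PB² and made-up cost data) of how the penalty factors enter the dispatch: for each trial λ the coordination equation is solved for each unit, and λ is adjusted until the generation covers the load plus the losses.

```python
# Assumed data (not from the slides): two units with quadratic costs
# C(P) = b*P + c*P**2 and a simple quadratic loss model P_L = kA*PA**2 + kB*PB**2.
bA, cA, kA = 8.0, 0.010, 0.0002
bB, cB, kB = 7.0, 0.015, 0.0004
load = 500.0  # MW

def output(lam, b, c, k):
    # Coordination equation: b + 2*c*P = lam * (1 - 2*k*P)  =>  solve for P
    return (lam - b) / (2.0 * c + 2.0 * lam * k)

lam_low, lam_high = 0.0, 100.0
for _ in range(60):                      # bisection on lambda
    lam = 0.5 * (lam_low + lam_high)
    pa = output(lam, bA, cA, kA)
    pb = output(lam, bB, cB, kB)
    losses = kA * pa ** 2 + kB * pb ** 2
    if pa + pb < load + losses:
        lam_low = lam                    # not enough generation: increase lambda
    else:
        lam_high = lam

pfa = 1.0 / (1.0 - 2.0 * kA * pa)        # penalty factors at the solution
pfb = 1.0 / (1.0 - 2.0 * kB * pb)
print(f"lambda = {lam:.2f}, PA = {pa:.1f} MW, PB = {pb:.1f} MW, losses = {losses:.1f} MW")
print(f"PFA*dCA/dPA = {pfa*(bA+2*cA*pa):.2f}, PFB*dCB/dPB = {pfb*(bB+2*cB*pb):.2f}")
```

At the solution, the two printed products PF·dC/dP should both equal λ, which is the coordination condition stated above.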
Limitations of this Approach
• To calculate the penalty factors, we need to know the relation between the losses and each generator’s output
• This relation is complex and depends on all the injections in the system
• Approximate formulas have been developed but need to be adjusted for each system configuration
• Does not take network constraints into account
⇒ A rigorous solution requires an Optimal Power Flow (OPF)
Local and Global Optima
Which one is the real maximum?
[Figure: f(x) with two local maxima at x = A and x = D.]
For x = A and x = D, we have: df/dx = 0 and d²f/dx² < 0
Which one is the real optimum?
A, B, C and D are all minima, because at each of these points we have ∂f/∂x1 = 0 and ∂f/∂x2 = 0 and the second-order conditions for a minimum are satisfied
[Figure: contour plot in the (x1, x2) plane with four separate local minima A, B, C and D.]
Local and Global Optima
• The optimality conditions are local conditions
• They do not compare separate optima
• If I find an optimum, can I be sure that it is the global optimum?
• In general, to find the global optimum, we must find and compare all the optima
• In large problems, this can be very difficult and time consuming
Convexity
• If the feasible set is convex and the objective function is convex, there is only one minimum and it is thus the global minimum
Examples of Convex Feasible Sets
[Figure: examples of convex feasible sets in one and two dimensions, including the interval x1min ≤ x1 ≤ x1max.]
Example of Non-Convex Feasible Sets
[Figure: examples of non-convex feasible sets, including a one-dimensional set made of two disjoint intervals [x1a, x1b] and [x1c, x1d].]
Example of Convex Feasible Sets
A set is convex if, for any two points belonging to the set, all the points on the straight line joining these two points belong to the set
[Figure: the same examples of convex feasible sets, including the interval x1min ≤ x1 ≤ x1max.]
Example of Non-Convex Feasible Sets
[Figure: the same examples of non-convex feasible sets, including a one-dimensional set made of two disjoint intervals [x1a, x1b] and [x1c, x1d].]
Example of Convex Function
[Figure: a convex function f(x) of one variable.]
Example of Convex Function
[Figure: contours of a convex function of two variables.]
Example of Non-Convex Function
[Figure: a non-convex function f(x) of one variable with several local minima.]
Example of Non-Convex Function
[Figure: contours of a non-convex function of two variables with local minima A, B, C and D.]
Definition of a Convex Function
[Figure: a convex function f(x); the chord between the points (xa, f(xa)) and (xb, f(xb)) lies above the function at the intermediate point y, where the chord has the value z.]
A convex function is a function such that, for any two points xa and xb belonging to the feasible set and any k such that 0 ≤ k ≤ 1, we have:
  z = k f(xa) + (1 − k) f(xb) ≥ f(y) = f[k xa + (1 − k) xb]
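A small numerical illustration (mine) of this definition: sample random pairs of points and values of k and test the inequality for an assumed convex function and an assumed non-convex one.

```python
import random

# Check the defining inequality k*f(xa) + (1-k)*f(xb) >= f(k*xa + (1-k)*xb)
# on random points. f1 is convex, f2 is not (assumed example functions).
f1 = lambda x: x ** 2
f2 = lambda x: x ** 4 - 3 * x ** 2          # two separate minima -> not convex

def violates_convexity(f, trials=10000, span=3.0):
    for _ in range(trials):
        xa, xb = (random.uniform(-span, span) for _ in range(2))
        k = random.random()
        if k * f(xa) + (1 - k) * f(xb) < f(k * xa + (1 - k) * xb) - 1e-12:
            return True
    return False

print("f1 violates convexity:", violates_convexity(f1))   # False
print("f2 violates convexity:", violates_convexity(f2))   # True (with high probability)
```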
Example of Non-Convex Function
[Figure: a non-convex function f(x) of one variable, for which the inequality above does not hold everywhere.]
Importance of Convexity
• If we can prove that a minimisation problem is convex:
  – Convex feasible set
  – Convex objective function
⇒ Then the problem has one and only one solution
• Proving convexity is often difficult
• Power system problems are usually not convex
⇒ There may be more than one solution to power system optimisation problems