Economic Dispatch and Introduction to Optimisation



Page 1: Economic Dispatch and Introduction to Optimisation

© Daniel Kirschen, The University of Manchester, 2004

Economic Dispatch and Introduction to Optimisation

Daniel Kirschen

Input Output Characteristic

• Running costs
• Input / Output curve
• Fuel vs. electric power
• Fuel consumption measured by its energy content

[Figure: boiler (B), turbine (T), generator (G) block diagram — fuel input, electric power output; input/output curve with input (J/h) vs. output (MW) between Pmin and Pmax]

Owner Note: 1 Joule (J) = 1 Watt-second; 1054.85 J = 1 Btu
Page 2: Economic Dispatch and Introduction to Optimisation


Cost Curve

• Multiply fuel input by fuel cost
• No-load cost
• Minimum generation
• Maximum generation

[Figure: cost curve — cost (£/h) vs. output (MW) between Pmin and Pmax, with the no-load cost as the intercept]

Incremental Cost Curve

• Incremental cost curve: ΔF/ΔP, i.e. Δ(fuel cost)/Δ(power), plotted against power
• Derivative of the cost curve
• In £/MWh
• Cost of the next MWh

[Figure: cost curve (£/h) vs. output (MW), and the corresponding incremental cost curve (£/MWh) vs. output (MW)]

Owner Note: A quasi-fixed cost is a cost incurred by a generating unit only if the unit is running, but which is independent of the particular amount of power the running unit generates (K/S, p. 84). Examples of quasi-fixed costs are no-load costs and start-up costs. No-load cost: the cost of fuel required to keep a generating unit running that is connected to the transmission grid but not supplying any electric power to the grid (K/S, p. 84). Start-up cost: the cost of fuel required to start up a generating unit (K/S, p. 29).
Owner Note: Maximum operating capacity
Owner Note: Minimum operating capacity
Page 3: Economic Dispatch and Introduction to Optimisation


Piecewise Linear Approximations

• Piecewise-linear cost curve
• Piecewise-constant incremental cost curve

[Figure: piecewise-linear fuel cost curve (£/h) vs. MW with breakpoints, and the corresponding piecewise-constant incremental cost curve (£/MWh) vs. MW]

Problem Formulation

• Objective function: C = CA(PA) + CB(PB) + CC(PC)
• Constraints
  – Load / generation balance: L = PA + PB + PC
  – Unit constraints:
    PAmin ≤ PA ≤ PAmax
    PBmin ≤ PB ≤ PBmax
    PCmin ≤ PC ≤ PCmax

[Figure: units A, B, C supplying load L]

Owner Note: Generating unit capacity constraints take the form of **inequality** constraints, as in the GNPP.
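The formulation on this slide can be sketched directly in code. The cost curves, limits, and load below are illustrative assumptions, not values from the slides.

```python
# Sketch of the economic dispatch formulation: total cost plus
# feasibility checks (load/generation balance and unit constraints).

def total_cost(dispatch, cost_curves):
    """Sum the running cost of each unit's output."""
    return sum(cost_curves[unit](p) for unit, p in dispatch.items())

def is_feasible(dispatch, load, limits, tol=1e-6):
    """Check load/generation balance and Pmin/Pmax limits."""
    balanced = abs(sum(dispatch.values()) - load) <= tol
    within_limits = all(limits[u][0] <= p <= limits[u][1]
                        for u, p in dispatch.items())
    return balanced and within_limits

# Hypothetical units A, B, C with quadratic cost curves (£/h)
cost_curves = {
    "A": lambda p: 100 + 0.010 * p**2,
    "B": lambda p: 120 + 0.015 * p**2,
    "C": lambda p: 150 + 0.020 * p**2,
}
limits = {"A": (50, 300), "B": (50, 250), "C": (50, 200)}  # (Pmin, Pmax) in MW

dispatch = {"A": 200, "B": 150, "C": 100}
load = 450
feasible = is_feasible(dispatch, load, limits)
cost = total_cost(dispatch, cost_curves)
```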
Page 4: Economic Dispatch and Introduction to Optimisation


Introduction to Optimisation

“An Engineer is someone who can do for one dollar what any fool can do for two”

Page 5: Economic Dispatch and Introduction to Optimisation


Objective

• Achieving the best possible design or operating conditions requires a way of measuring the goodness of this result

• We will call this measure our objective F

• Examples:
  – Minimise cost of building a transformer
  – Minimise cost of supplying power
  – Minimise losses in a power system
  – Maximise profit from a bidding strategy

Decision Variables

• The value of the objective is a function of some decision variables:
  F = f(x1, x2, x3, …, xn)
• Examples of decision variables:
  – Dimensions of the transformer
  – Output of generating units, position of taps
  – Parameters of bids for selling electrical energy

Page 6: Economic Dispatch and Introduction to Optimisation


Optimisation Problem

• What values should the decision variables take so that F = f(x1, x2, x3, …, xn) is minimum or maximum?

Example: function of one variable

[Figure: f(x) vs. x, with maximum f(x*) at x = x*]

f(x) is maximum for x = x*

Page 7: Economic Dispatch and Introduction to Optimisation


Minimisation and Maximisation

[Figure: f(x) with maximum f(x*) at x = x*, and −f(x) with minimum −f(x*) at the same x*]

If x = x* maximises f(x), then it minimises −f(x)

Minimisation and Maximisation

• Maximising f(x) is thus the same thing as minimising g(x) = −f(x)
• Minimisation and maximisation problems are thus interchangeable
• Depending on the problem, the optimum is either a maximum or a minimum

Page 8: Economic Dispatch and Introduction to Optimisation


Necessary Condition for Optimality

[Figure: f(x) with maximum at x = x*; the slope df/dx is positive to the left of x* and negative to the right]

If x = x* maximises f(x) then:

f(x) < f(x*) for x < x*  ⇒  df/dx > 0 for x < x*
f(x) < f(x*) for x > x*  ⇒  df/dx < 0 for x > x*

Necessary Condition for Optimality

[Figure: f(x) with maximum at x = x*; the slope is zero at the top]

If x = x* maximises f(x), then df/dx = 0 for x = x*

Page 9: Economic Dispatch and Introduction to Optimisation


Example

[Figure: f(x) with several stationary points]

For what values of x is df/dx = 0?

In other words, for what values of x is the necessary condition for optimality satisfied?

Example

[Figure: f(x) with stationary points A, B, C, D]

• A and D are maxima

• B is a minimum

• C is an inflexion point

Page 10: Economic Dispatch and Introduction to Optimisation


How can we distinguish minima and maxima?

[Figure: f(x) with points A, B, C, D]

For x = A and x = D, we have: d²f/dx² < 0

The objective function is concave around a maximum

How can we distinguish minima and maxima?

[Figure: f(x) with points A, B, C, D]

For x = B, we have: d²f/dx² > 0

The objective function is convex around a minimum

Page 11: Economic Dispatch and Introduction to Optimisation


How can we distinguish minima and maxima?

[Figure: f(x) with points A, B, C, D]

For x = C, we have: d²f/dx² = 0

The objective function is flat around an inflexion point

Necessary and Sufficient Conditions of Optimality

• Necessary condition: df/dx = 0
• Sufficient condition:
  – For a maximum: d²f/dx² < 0
  – For a minimum: d²f/dx² > 0
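The two conditions above can be checked numerically with finite differences; the objective below is a hypothetical example with a known maximum at x = 2.

```python
# Numerical check of the 1-D optimality conditions.
# f(x) = -(x - 2)**2 + 3 has a maximum at x* = 2 (a hypothetical example).

def f(x):
    return -(x - 2)**2 + 3

def d1(f, x, h=1e-5):
    """Central-difference first derivative df/dx."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    """Central-difference second derivative d2f/dx2."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

x_star = 2.0
slope = d1(f, x_star)       # necessary condition: should be ~0
curvature = d2(f, x_star)   # sufficient condition for a maximum: < 0
```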

Page 12: Economic Dispatch and Introduction to Optimisation


Isn’t all this obvious?

• Can’t we tell all this by looking at the objective function?
  – Yes, for a simple, one-dimensional case when we know the shape of the objective function
  – For complex, multi-dimensional cases (i.e. with many decision variables) we can’t visualise the shape of the objective function
  – We must then rely on mathematical techniques

Feasible Set

• The values that the decision variables can take are usually limited
• Examples:
  – Physical dimensions of a transformer must be positive
  – Active power output of a generator may be limited to a certain range (e.g. 200 MW to 500 MW)
  – Reactive power output of a generator may be limited to a certain range (e.g. −100 MVAr to 150 MVAr)

Page 13: Economic Dispatch and Introduction to Optimisation


Feasible Set

[Figure: f(x) over the feasible set xMIN ≤ x ≤ xMAX, with interior points A and D]

Feasible Set

The values of the objective function outside the feasible set do not matter

[Figure: f(x) over the feasible set xMIN ≤ x ≤ xMAX, with points A, B, D, E]

Interior and Boundary Solutions

• A and D are interior maxima
• B and E are interior minima
• xMIN is a boundary minimum
• xMAX is a boundary maximum
Boundary minima and maxima do not satisfy the optimality conditions!

Owner Note: That is, these types of "boundary" optimal points cannot be found by means of the previous calculus-determined optimality conditions. For example, you cannot find xMIN by determining where the first derivative of f(x) equals zero and the second derivative of f(x) is greater than zero.
Page 14: Economic Dispatch and Introduction to Optimisation


Two-Dimensional Case

[Figure: surface f(x1, x2) with minimum at (x1*, x2*)]

f(x1, x2) is minimum for (x1*, x2*)

Necessary Conditions for Optimality

[Figure: surface f(x1, x2) with minimum at (x1*, x2*)]

∂f(x1, x2)/∂x1 = 0 at (x1*, x2*)
∂f(x1, x2)/∂x2 = 0 at (x1*, x2*)

Page 15: Economic Dispatch and Introduction to Optimisation


Multi-Dimensional Case

At a maximum or minimum value of f(x1, x2, x3, …, xn) we must have:

∂f/∂x1 = 0
∂f/∂x2 = 0
⋮
∂f/∂xn = 0

A point where these conditions are satisfied is called a stationary point
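A stationary point can be verified numerically by checking that every partial derivative vanishes; the two-variable function below is a hypothetical example with a stationary point at the origin.

```python
# Numerical gradient check at a stationary point of a function of two
# variables (a hypothetical example with a stationary point at (0, 0)).

def f(x1, x2):
    return x1**2 + 4 * x2**2 - 2 * x1 * x2

def grad(f, x1, x2, h=1e-6):
    """Central-difference gradient (df/dx1, df/dx2)."""
    g1 = (f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h)
    g2 = (f(x1, x2 + h) - f(x1, x2 - h)) / (2 * h)
    return (g1, g2)

g = grad(f, 0.0, 0.0)   # both partial derivatives vanish at (0, 0)
```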

Sufficient Conditions for Optimality

[Figure: surfaces f(x1, x2) showing a minimum and a maximum]

Page 16: Economic Dispatch and Introduction to Optimisation


Sufficient Conditions for Optimality

[Figure: surface f(x1, x2) with a saddle point]

Sufficient Conditions for Optimality

Calculate the Hessian matrix at the stationary point:

[ ∂²f/∂x1²      ∂²f/∂x1∂x2   …   ∂²f/∂x1∂xn ]
[ ∂²f/∂x2∂x1   ∂²f/∂x2²      …   ∂²f/∂x2∂xn ]
[     ⋮              ⋮         ⋱       ⋮      ]
[ ∂²f/∂xn∂x1   ∂²f/∂xn∂x2   …   ∂²f/∂xn²    ]

Page 17: Economic Dispatch and Introduction to Optimisation


Sufficient Conditions for Optimality

• Calculate the eigenvalues of the Hessian matrix at the stationary point
• If all the eigenvalues are greater than or equal to zero:
  – The matrix is positive semi-definite
  – The stationary point is a minimum
• If all the eigenvalues are less than or equal to zero:
  – The matrix is negative semi-definite
  – The stationary point is a maximum
• If some of the eigenvalues are positive and others are negative:
  – The stationary point is a saddle point
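For a two-variable problem this eigenvalue test can be sketched with the closed-form eigenvalues of a symmetric 2×2 Hessian; the matrices classified below are illustrative assumptions.

```python
import math

# Classify a stationary point from the eigenvalues of a symmetric 2x2
# Hessian [[a, b], [b, c]], using the roots of the characteristic
# polynomial lambda^2 - (a + c)*lambda + (a*c - b*b) = 0.

def eigenvalues_2x2(a, b, c):
    tr, det = a + c, a * c - b * b
    root = math.sqrt(tr * tr - 4 * det)   # real for symmetric matrices
    return ((tr - root) / 2, (tr + root) / 2)

def classify(a, b, c, tol=1e-12):
    lo, hi = eigenvalues_2x2(a, b, c)
    if lo >= -tol and hi >= -tol:
        return "minimum"         # positive semi-definite Hessian
    if lo <= tol and hi <= tol:
        return "maximum"         # negative semi-definite Hessian
    return "saddle point"        # eigenvalues of mixed signs

kind_min = classify(2.0, 0.0, 3.0)     # hypothetical convex example
kind_saddle = classify(1.0, 2.0, 1.0)  # hypothetical indefinite example
```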

Contours

[Figure: surface f(x1, x2) with contours F1 and F2 projected onto the (x1, x2) plane]

Page 18: Economic Dispatch and Introduction to Optimisation


Contours

[Figure: nested contours in the (x1, x2) plane closing in on a minimum or maximum]

A contour is the locus of all the points that give the same value to the objective function

Example 1

Minimise C = x1² + 4x2² − 2x1x2

Necessary conditions for optimality:

∂C/∂x1 = 2x1 − 2x2 = 0
∂C/∂x2 = −2x1 + 8x2 = 0

⇒ x1 = 0, x2 = 0 is a stationary point

Page 19: Economic Dispatch and Introduction to Optimisation


Example 1

Sufficient conditions for optimality:

Hessian matrix:

[ ∂²C/∂x1²      ∂²C/∂x1∂x2 ]   =   [  2   −2 ]
[ ∂²C/∂x2∂x1   ∂²C/∂x2²    ]       [ −2    8 ]

must be positive definite (i.e. all eigenvalues must be positive)

| λ − 2    2    |
|   2    λ − 8  |  = 0  ⇒  λ² − 10λ + 12 = 0  ⇒  λ = (10 ± √52)/2 > 0

The stationary point is a minimum

Example 1

[Figure: elliptical contours C = 1, C = 4, C = 9 around the minimum C = 0 at the origin of the (x1, x2) plane]

Owner Note: Actually, for a STRICT local minimum: C(x*) < C(x) for all other x in a local neighborhood of x*, where x = (x_1, x_2)ᵀ.
Admin Note: Solving for eigenvalue λ: let I denote the 2×2 identity matrix and let H denote the 2×2 Hessian matrix; then det[λI − H] = 0.
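The eigenvalue calculation of this example can be reproduced numerically:

```python
import math

# Verify Example 1: the Hessian [[2, -2], [-2, 8]] should have two
# positive eigenvalues (10 +/- sqrt(52))/2, so the stationary point
# (0, 0) of C = x1^2 + 4 x2^2 - 2 x1 x2 is a minimum.

a, b, c = 2.0, -2.0, 8.0
tr, det = a + c, a * c - b * b          # trace 10, determinant 12
disc = math.sqrt(tr**2 - 4 * det)       # sqrt(52)
lam1 = (tr - disc) / 2
lam2 = (tr + disc) / 2
is_minimum = lam1 > 0 and lam2 > 0
```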
Page 20: Economic Dispatch and Introduction to Optimisation


Example 2

Minimise C = −x1² + 3x2² + 2x1x2

Necessary conditions for optimality:

∂C/∂x1 = −2x1 + 2x2 = 0
∂C/∂x2 = 2x1 + 6x2 = 0

⇒ x1 = 0, x2 = 0 is a stationary point

Example 2

Sufficient conditions for optimality:

Hessian matrix:

[ ∂²C/∂x1²      ∂²C/∂x1∂x2 ]   =   [ −2   2 ]
[ ∂²C/∂x2∂x1   ∂²C/∂x2²    ]       [  2   6 ]

| λ + 2   −2    |
|  −2    λ − 6  |  = 0  ⇒  λ² − 4λ − 16 = 0

⇒ λ = (4 + √80)/2 > 0  or  λ = (4 − √80)/2 < 0

The stationary point is a saddle point

Owner Note: Method for finding an eigenvalue λ for the indicated Hessian matrix H: det(λI − H) = 0. Here there is one positive and one negative eigenvalue at the stationary point x_1 = 0, x_2 = 0, so it is a saddle point.
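The saddle-point conclusion of Example 2 can be checked the same way:

```python
import math

# Verify Example 2: the Hessian [[-2, 2], [2, 6]] has eigenvalues
# (4 +/- sqrt(80))/2 of opposite signs, so the stationary point (0, 0)
# of C = -x1^2 + 3 x2^2 + 2 x1 x2 is a saddle point.

a, b, c = -2.0, 2.0, 6.0
tr, det = a + c, a * c - b * b          # trace 4, determinant -16
disc = math.sqrt(tr**2 - 4 * det)       # sqrt(80)
lam1 = (tr - disc) / 2                  # negative root
lam2 = (tr + disc) / 2                  # positive root
is_saddle = lam1 < 0 < lam2
```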
Page 21: Economic Dispatch and Introduction to Optimisation


Example 2

[Figure: contours of C in the (x1, x2) plane around the saddle point — hyperbolic contours C = 1, 4, 9 and C = −1, −4, −9, separated by the two C = 0 lines crossing at the origin]

Optimisation with Constraints

Page 22: Economic Dispatch and Introduction to Optimisation


Optimisation with Equality Constraints

• There are usually restrictions on the values that the decision variables can take

Minimise f(x1, x2, …, xn)   (objective function)

subject to:

ω1(x1, x2, …, xn) = 0
⋮                           (equality constraints)
ωm(x1, x2, …, xn) = 0

Number of Constraints

• N decision variables
• M equality constraints
• If M > N, the problem is over-constrained
  – There is usually no solution
• If M = N, the problem is determined
  – There may be a solution
• If M < N, the problem is under-constrained
  – There is usually room for optimisation

Page 23: Economic Dispatch and Introduction to Optimisation


Example 1

Minimise f(x1, x2) = 0.25x1² + x2²
Subject to ω(x1, x2) ≡ 5 − x1 − x2 = 0

[Figure: elliptical contours of f(x1, x2) = 0.25x1² + x2² in the (x1, x2) plane, with the constraint line 5 − x1 − x2 = 0 and the minimum at the tangency point]

Example 2: Economic Dispatch

[Figure: generators G1 (output x1) and G2 (output x2) supplying load L]

C1 = a1 + b1x1²   (cost of running unit 1)
C2 = a2 + b2x2²   (cost of running unit 2)
C = C1 + C2 = a1 + a2 + b1x1² + b2x2²   (total cost)

Optimisation problem:

Minimise C = a1 + a2 + b1x1² + b2x2²
Subject to: x1 + x2 = L

Owner Note: Note the way that Kirschen represents equality constraints of the form h(x) = c: he puts them in the form w(x) = [c − h(x)] = 0. Note the placement of the constraint constant c; this will be important later for the interpretation of the Lagrange multipliers.
Owner Note: Kirschen assumes, implicitly, that all of the cost coefficients a_i and b_i are positive. It is typical to write expressions so that all signed coefficients or parameters are positively signed.
Page 24: Economic Dispatch and Introduction to Optimisation


Solution by Substitution

Minimise C = a1 + a2 + b1x1² + b2x2²
Subject to: x1 + x2 = L

⇒ x2 = L − x1
⇒ C = a1 + a2 + b1x1² + b2(L − x1)²   (unconstrained minimisation)

dC/dx1 = 2b1x1 − 2b2(L − x1) = 0

⇒ x1 = b2L / (b1 + b2),  x2 = b1L / (b1 + b2)

d²C/dx1² = 2b1 + 2b2 > 0 ⇒ minimum
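The closed-form substitution solution can be verified numerically; the coefficients and load below are illustrative assumptions.

```python
# Check the substitution solution x1 = b2 L / (b1 + b2),
# x2 = b1 L / (b1 + b2) for hypothetical cost coefficients and load.

b1, b2, L = 0.01, 0.02, 300.0

x1 = b2 * L / (b1 + b2)
x2 = b1 * L / (b1 + b2)

# First-order condition of the unconstrained problem in x1:
slope = 2 * b1 * x1 - 2 * b2 * (L - x1)

# Second derivative is 2*(b1 + b2) > 0, so this is a minimum.
curvature = 2 * (b1 + b2)
```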

Solution by Substitution

• Difficult
• Sometimes impossible when constraints are non-linear
• Provides little or no insight into the solution
⇒ Solution using Lagrange multipliers

Page 25: Economic Dispatch and Introduction to Optimisation


Gradient

Consider a function f(x1, x2, …, xn).

The gradient of f is the vector:

∇f = [ ∂f/∂x1, ∂f/∂x2, …, ∂f/∂xn ]ᵀ

Properties of the Gradient

• Each component of the gradient vector indicates the rate of change of the function in that direction
• The gradient indicates the direction in which a function of several variables increases most rapidly
• The magnitude and direction of the gradient usually depend on the point considered
• At each point, the gradient is perpendicular to the contour of the function

Owner Note: The standard convention is that the derivative of a scalar f(x) with respect to a column (row) vector x is a row (column) vector. Here presumably Kirschen is interpreting x as a row vector (or else is departing from the convention). In my lecture notes titled "Optimization Basics" I adopt the common convention that x is a column vector, hence the gradient of f(x) with respect to x is a row vector. This changes the appearance of the Lagrangean function with respect to placement of multiplied factors, but it does not affect the actual solution values!
Owner Note: Remarks on the "direction of steepest ascent". Claim: let ∇f(x*) denote the gradient of f at a point x* where the gradient of f does not vanish, and let s* = ∇f(x*)/|∇f(x*)| denote a vector of magnitude 1 that points in the same direction as ∇f(x*). Then, for any *other* vector s of magnitude 1, it can be shown that ∇f(x*)·s* > ∇f(x*)·s. This in turn implies that f(x* + bs*) − f(x*) > f(x* + bs) − f(x*) for all scalar b values in the interval 0 < b < ε for some suitably small ε. Thus, the gradient of f at x* is "the direction of steepest ascent" at x* for all equal-distance movements away from x* in some suitably small neighborhood of x*. To prove this, make use of the fact that the inner product c·d of two n-dimensional vectors c and d takes the form c·d = |c||d|cos(θ), where θ is the angle between c and d, and cos(θ) = 1 for 0 ≤ θ ≤ π if and only if θ = 0.
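The perpendicularity of the gradient to the contour can be illustrated numerically for a hypothetical function f(x, y) = x² + 2y², whose contour f = c is an ellipse.

```python
import math

# Check that the gradient is perpendicular to the contour of the
# hypothetical function f(x, y) = x^2 + 2 y^2. The contour f = c is
# parametrised as x = sqrt(c) cos t, y = sqrt(c/2) sin t.

c, t = 4.0, 0.7
x = math.sqrt(c) * math.cos(t)
y = math.sqrt(c / 2) * math.sin(t)

gradient = (2 * x, 4 * y)                       # (df/dx, df/dy)
tangent = (-math.sqrt(c) * math.sin(t),         # derivative of the
           math.sqrt(c / 2) * math.cos(t))      # parametrisation w.r.t. t

dot = gradient[0] * tangent[0] + gradient[1] * tangent[1]
on_contour = x**2 + 2 * y**2                    # should equal c
```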
Page 26: Economic Dispatch and Introduction to Optimisation


Example 3

f(x, y) = ax² + by²

∇f = [ ∂f/∂x, ∂f/∂y ]ᵀ = [ 2ax, 2by ]ᵀ

[Figure: elliptical contours in the (x, y) plane with gradient vectors perpendicular to them]

Example 4

f(x, y) = ax + by

∇f = [ ∂f/∂x, ∂f/∂y ]ᵀ = [ a, b ]ᵀ

[Figure: parallel straight-line contours f = f1, f2, f3 in the (x, y) plane, with the constant gradient ∇f perpendicular to them]

Page 27: Economic Dispatch and Introduction to Optimisation


Lagrange Multipliers

Minimise f(x1, x2) = 0.25x1² + x2² subject to ω(x1, x2) ≡ 5 − x1 − x2 = 0

[Figure: contours f = 0.25x1² + x2² = 5 and f = 6 in the (x1, x2) plane, with the constraint line ω(x1, x2) = 5 − x1 − x2 = 0]

Lagrange Multipliers

[Figure: the contours f(x1, x2) = 5 and f(x1, x2) = 6 with the constraint ω(x1, x2), and the gradient ∇f = [ ∂f/∂x1, ∂f/∂x2 ]ᵀ drawn at several points, perpendicular to the contours]

Page 28: Economic Dispatch and Introduction to Optimisation


Lagrange Multipliers

[Figure: the same contours and constraint, now with the constraint gradient ∇ω = [ ∂ω/∂x1, ∂ω/∂x2 ]ᵀ drawn at several points, perpendicular to the constraint line]

Lagrange Multipliers

[Figure: contours f(x1, x2) = 5 and f(x1, x2) = 6 with the constraint ω(x1, x2) and gradient vectors ∇f]

• The solution must be on the constraint
• To reduce the value of f, we must move in a direction opposite to the gradient

Page 29: Economic Dispatch and Introduction to Optimisation


Lagrange Multipliers

[Figure: contours, constraint, and both gradients ∇f and ∇ω at the tangency point]

• We stop when the gradient of the function is perpendicular to the constraint, because moving further would increase the value of the function

At the optimum, the gradient of the function is parallel to the gradient of the constraint

Lagrange Multipliers

At the optimum, we must have: ∇f ∥ ∇ω

Which can be expressed as: ∇f + λ∇ω = 0

λ is called the Lagrange multiplier

The constraint must also be satisfied: ω(x1, x2) = 0

In terms of the co-ordinates:

∂f/∂x1 + λ ∂ω/∂x1 = 0
∂f/∂x2 + λ ∂ω/∂x2 = 0

Owner Note: Here Kirschen is showing you that a point x′ is not an optimal solution if, at this point, the gradient vector for f(x) at x = x′ is not parallel to the gradient vector for the constraint function w(x) at x = x′.
Owner Note: The notation ∥ means "is parallel to".
Owner Note: So this familiar "first order necessary condition" (FONC) ∇f(x*) = λ∇w(x*) for minimisation of f(x) subject to an equality constraint w(x) = 0 simply says that, at an optimum point x*, it is necessary that the gradient vector ∇f(x*) of the objective function f(x), evaluated at x*, be PARALLEL to the gradient vector ∇w(x*) of the constraint function w(x), evaluated at x*. This parallel requirement means that either the gradient of f at x* points in the SAME direction as the gradient of w at x*, or in the OPPOSITE direction. In the first case there will be a POSITIVE real number λ* such that ∇f(x*) = λ*∇w(x*), and in the second case this same equality will hold for some NEGATIVE real number λ*. This is why one cannot SIGN the Lagrange multiplier solution λ* corresponding to an equality constraint w(x) = 0.
Page 30: Economic Dispatch and Introduction to Optimisation


Lagrangian Function

To simplify the writing of the conditions for optimality, it is useful to define the Lagrangian function:

ℓ(x1, x2, λ) = f(x1, x2) + λ ω(x1, x2)

The necessary conditions for optimality are then given by the partial derivatives of the Lagrangian:

∂ℓ(x1, x2, λ)/∂x1 = ∂f/∂x1 + λ ∂ω/∂x1 = 0
∂ℓ(x1, x2, λ)/∂x2 = ∂f/∂x2 + λ ∂ω/∂x2 = 0
∂ℓ(x1, x2, λ)/∂λ = ω(x1, x2) = 0

Example

Minimise f(x1, x2) = 0.25x1² + x2² subject to ω(x1, x2) ≡ 5 − x1 − x2 = 0

ℓ(x1, x2, λ) = 0.25x1² + x2² + λ(5 − x1 − x2)

∂ℓ/∂x1 ≡ 0.5x1 − λ = 0
∂ℓ/∂x2 ≡ 2x2 − λ = 0
∂ℓ/∂λ ≡ 5 − x1 − x2 = 0

Owner Note: Recall that Kirschen expresses equality constraints of the form h(x) = c as 0 = w(x) = [c − h(x)]. Thus his Lagrangean function coincides with the form used in our class lecture notes titled "Optimization Basics".
Page 31: Economic Dispatch and Introduction to Optimisation


Example

∂ℓ/∂x1 ≡ 0.5x1 − λ = 0 ⇒ x1 = 2λ
∂ℓ/∂x2 ≡ 2x2 − λ = 0 ⇒ x2 = λ/2
∂ℓ/∂λ ≡ 5 − x1 − x2 = 0 ⇒ 5 − 2λ − λ/2 = 0

⇒ λ = 2
⇒ x1 = 4
⇒ x2 = 1
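This solution can be reproduced by following the same elimination steps in code:

```python
# Solve the first-order conditions of the example analytically:
#   0.5 x1 = lambda,  2 x2 = lambda,  x1 + x2 = 5
# => x1 = 2*lambda, x2 = lambda/2, so 5 = 2.5*lambda.

lam = 5 / 2.5       # lambda = 2
x1 = 2 * lam        # x1 = 4
x2 = lam / 2        # x2 = 1

residual_constraint = 5 - x1 - x2   # the constraint is satisfied
```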

Example

Minimise f(x1, x2) = 0.25x1² + x2²
Subject to ω(x1, x2) ≡ 5 − x1 − x2 = 0

[Figure: the contour f(x1, x2) = 5 tangent to the constraint line at the minimum (x1, x2) = (4, 1)]

Page 32: Economic Dispatch and Introduction to Optimisation


Important Note!

If the constraint is of the form: a x1 + b x2 = L

It must be included in the Lagrangian as follows:

ℓ = f(x1, …, xn) + λ(L − a x1 − b x2)

And not as follows:

ℓ = f(x1, …, xn) + λ(a x1 + b x2)

Application to Economic Dispatch

[Figure: generators G1 (output x1) and G2 (output x2) supplying load L]

Minimise f(x1, x2) = C1(x1) + C2(x2)
s.t. ω(x1, x2) ≡ L − x1 − x2 = 0

ℓ(x1, x2, λ) = C1(x1) + C2(x2) + λ(L − x1 − x2)

∂ℓ/∂x1 ≡ dC1/dx1 − λ = 0
∂ℓ/∂x2 ≡ dC2/dx2 − λ = 0
∂ℓ/∂λ ≡ L − x1 − x2 = 0

⇒ dC1/dx1 = dC2/dx2 = λ   (equal incremental cost solution)
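For quadratic costs C_i = a_i + b_i x_i², the equal-incremental-cost condition gives a closed form that can be checked in code; the coefficients and load below are illustrative assumptions.

```python
# Equal incremental cost dispatch for two units with hypothetical
# quadratic costs C_i = a_i + b_i x_i^2, so dC_i/dx_i = 2 b_i x_i.
# Setting 2 b1 x1 = 2 b2 x2 = lambda with x1 + x2 = L gives
# lambda = 2 L b1 b2 / (b1 + b2).

b1, b2, L = 0.01, 0.02, 300.0   # illustrative values

lam = 2 * L * b1 * b2 / (b1 + b2)
x1 = lam / (2 * b1)
x2 = lam / (2 * b2)

# Both units run at the same incremental cost and together cover the load.
mc1 = 2 * b1 * x1
mc2 = 2 * b2 * x2
```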

Page 33: Economic Dispatch and Introduction to Optimisation


Incremental Cost

[Figure: cost curves C1(x1) and C2(x2), and the corresponding incremental cost curves dC1/dx1 and dC2/dx2]

Interpretation of this Solution

[Figure: feedback diagram — a trial value of λ is applied to both incremental cost curves to obtain x1 and x2; the mismatch L − x1 − x2 is used to adjust λ]

If L − x1 − x2 < 0, reduce λ; if L − x1 − x2 > 0, increase λ

Page 34: Economic Dispatch and Introduction to Optimisation


Physical Interpretation

[Figure: cost curve C(x) with an increment ΔC for an increment Δx, and the incremental cost curve dC(x)/dx]

dC/dx = lim(Δx→0) ΔC/Δx

For Δx sufficiently small: ΔC ≈ (dC/dx) × Δx

If Δx = 1 MW: ΔC ≈ dC/dx

The incremental cost is the cost of one additional MW for one hour. This cost depends on the output of the generator.

Physical Interpretation

dC1/dx1: cost of one more MW from unit 1
dC2/dx2: cost of one more MW from unit 2

Suppose that dC1/dx1 > dC2/dx2

Decrease output of unit 1 by 1 MW ⇒ decrease in cost = dC1/dx1
Increase output of unit 2 by 1 MW ⇒ increase in cost = dC2/dx2

Net change in cost = dC2/dx2 − dC1/dx1 < 0

Owner Note: That is, when x_1 is DECREASED by 1 MW, the corresponding DROP in the total cost of production is approximately dC_1/dx_1. When x_2 is INCREASED by 1 MW, the corresponding INCREASE in the total cost of production is approximately dC_2/dx_2. Thus the OVERALL CHANGE in the total cost of production from the DECREASE in x_1 by 1 MW and the INCREASE in x_2 by 1 MW is [− dC_1/dx_1] + dC_2/dx_2 < 0.
Page 35: Economic Dispatch and Introduction to Optimisation


Physical Interpretation

It pays to increase the output of unit 2 and decrease the output of unit 1 until we have:

dC1/dx1 = dC2/dx2 = λ

The Lagrange multiplier λ is thus the cost of one more MW at the optimal solution.

This is a very important result with many applications in economics.
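The interpretation of λ as the cost of one more MW can be illustrated numerically: perturbing the load by 1 MW changes the minimum cost by approximately λ. The coefficients below are illustrative assumptions.

```python
# Numerical illustration that lambda is the marginal cost of load:
# perturb L by 1 MW and compare the change in minimum cost with lambda.

b1, b2 = 0.01, 0.02   # hypothetical quadratic cost coefficients

def min_cost(L):
    """Minimum variable cost of serving load L with the two units."""
    x1 = b2 * L / (b1 + b2)   # optimal split from the substitution result
    x2 = b1 * L / (b1 + b2)
    return b1 * x1**2 + b2 * x2**2

L = 300.0
lam = 2 * L * b1 * b2 / (b1 + b2)       # optimal incremental cost
delta_cost = min_cost(L + 1) - min_cost(L)   # cost of one more MW
```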

Generalisation

Minimise f(x1, x2, …, xn)

subject to:

ω1(x1, x2, …, xn) = 0
⋮
ωm(x1, x2, …, xn) = 0

Lagrangian:

ℓ = f(x1, …, xn) + λ1 ω1(x1, …, xn) + … + λm ωm(x1, …, xn)

• One Lagrange multiplier for each constraint
• n + m variables: x1, …, xn and λ1, …, λm

Owner Note: More formally, as shown in the online "Optimization Basics" notes, the FONC solution (x*, λ*), conditional on a given load L, is a function of L that satisfies λ* = df(x*)/dL where, for the problem at hand, x* = (x*_1, x*_2) and f(x_1, x_2) = C_1(x_1) + C_2(x_2).
Page 36: Economic Dispatch and Introduction to Optimisation


Optimality Conditions

ℓ = f(x1, …, xn) + λ1 ω1(x1, …, xn) + … + λm ωm(x1, …, xn)

n equations:
∂ℓ/∂x1 = ∂f/∂x1 + λ1 ∂ω1/∂x1 + … + λm ∂ωm/∂x1 = 0
⋮
∂ℓ/∂xn = ∂f/∂xn + λ1 ∂ω1/∂xn + … + λm ∂ωm/∂xn = 0

m equations:
∂ℓ/∂λ1 = ω1(x1, …, xn) = 0
⋮
∂ℓ/∂λm = ωm(x1, …, xn) = 0

n + m equations in n + m variables

Optimisation with Inequality Constraints

Minimise f(x1, x2, …, xn)   (objective function)

subject to:

ω1(x1, x2, …, xn) = 0
⋮                           (equality constraints)
ωm(x1, x2, …, xn) = 0

and:

g1(x1, x2, …, xn) ≤ 0
⋮                           (inequality constraints)
gp(x1, x2, …, xn) ≤ 0

Owner Note: In the online "Optimization Basics" notes, inequalities are expressed as z(x) ≥ d. Such inequality constraints can be expressed in Kirschen's form by setting g(x) = [d − z(x)].
Page 37: Economic Dispatch and Introduction to Optimisation


Example: Economic Dispatch

[Figure: generators G1 (output x1) and G2 (output x2) supplying load L]

Minimise C = a1 + b1x1² + a2 + b2x2²

Subject to:

x1 + x2 = L   (equality constraint)

and x1min ≤ x1 ≤ x1max, x2min ≤ x2 ≤ x2max, written as inequality constraints:

x1 − x1max ≤ 0
x1min − x1 ≤ 0
x2 − x2max ≤ 0
x2min − x2 ≤ 0

Example: Economic Dispatch

Minimise C = a1 + b1x1² + a2 + b2x2²   (its contours form a family of ellipses)

[Figure: in the (x1, x2) plane, elliptical contours of C, the line x1 + x2 = L, and the box x1min ≤ x1 ≤ x1max, x2min ≤ x2 ≤ x2max]

At point A an ellipse is tangent to the equality constraint, and the inequality constraints are satisfied

Owner Note: That is, the CONTOURS of the cost function C(x) form a family of ellipses. In particular, for any scalar value cBar, the CONTOUR for C(x) corresponding to cBar is given by {x in R² | C(x) = cBar}. When plotted in the real plane R², this contour takes the form of an ellipse.
Page 38: Economic Dispatch and Introduction to Optimisation


Example: Economic Dispatch

What is the solution for a larger load?

[Figure: the same plane with a second line x1 + x2 = L′; the ellipse tangent to it touches at point B, outside the box of unit limits]

At point B an ellipse is tangent to the equality constraint, but the inequality constraints are NOT satisfied!

Example: Economic Dispatch

[Figure: the same plane; point C is where the line x1 + x2 = L′ meets the boundary of the box of unit limits]

C is the solution because it is the point on the equality constraint that satisfies the inequality constraints at minimum cost

Owner Note: Note that LOWER costs are achieved moving in the SW direction, and HIGHER costs are achieved as one moves in the NE direction. Starting at C, any move to the NW along the constraint line corresponding to the larger load L′ puts you on a contour of C "more to the NE," hence where costs are HIGHER (less desirable).
Page 39: Economic Dispatch and Introduction to Optimisation


Binding Inequality Constraints

• A binding inequality constraint is an inequality constraint that is satisfied exactly
• Example:
  – If we must have x1 ≤ x1max
  – And at the solution we have x1 = x1max
  – Then the constraint x1 ≤ x1max is said to be binding
• ALL of the inequality constraints must be satisfied
• Only a FEW will be binding (or active) at any given time
• But we don’t know ahead of time which inequality constraints will be binding!
• All equality constraints are always binding

Solution using Lagrange Multipliers

Minimise f(x1, x2, …, xn)
subject to:
ωi(x1, x2, …, xn) = 0,   i = 1, …, m
gj(x1, x2, …, xn) ≤ 0,   j = 1, …, p

Lagrangian function:

ℓ(x1, …, xn, λ1, …, λm, μ1, …, μp) = f(x1, …, xn) + Σ(i=1..m) λi ωi(x1, …, xn) + Σ(j=1..p) μj gj(x1, …, xn)

tesfatsi Note: Note this form for the Lagrangean function L(x, λ, μ) is the SAME as the form used in our class notes titled "Optimization Basics", given the equivalent alternative way of expressing Kirschen's constraints: w(x) = [c − h(x)] = 0 and g(x) = [d − z(x)] ≤ 0.
tesfatsi Note: Recall Kirschen represents equality constraints h_i(x) = c_i by writing them as w_i(x) = [c_i − h_i(x)] = 0. Similarly, as noted earlier, we can always re-express inequality constraints of the form z_j(x) ≥ d_j (the form used in other class notes) in Kirschen's form g_j(x) ≤ 0 by writing g_j(x) = [d_j − z_j(x)] ≤ 0. Note that, in some cases, this transformation will require d_j to take on negative values. Given the latter form, we know from class notes that the solution value μ*_j for the Lagrange multiplier μ_j corresponding to the jth inequality constraint g_j(x) ≤ 0 is given by the rate of change of the minimized function f(x*) with respect to the constraint constant d_j for the jth inequality constraint.
Page 40: Economic Dispatch and Introduction to Optimisation


Optimality Conditions
(known as the Karush-Kuhn-Tucker (KKT) conditions)

ℓ(x, λ, μ) = f(x) + Σ(i=1..m) λi ωi(x) + Σ(j=1..p) μj gj(x)

∂ℓ(x, λ, μ)/∂xi ≡ ∂f(x)/∂xi + Σ(k=1..m) λk ∂ωk(x)/∂xi + Σ(j=1..p) μj ∂gj(x)/∂xi = 0,   i = 1, …, n
∂ℓ(x, λ, μ)/∂λk ≡ ωk(x) = 0,   k = 1, …, m
gj(x) ≤ 0,   j = 1, …, p
μj gj(x) = 0,   j = 1, …, p   (complementary slackness conditions)
μj ≥ 0,   j = 1, …, p

Complementary Slackness Conditions

μj gj(x) = 0,   μj ≥ 0

Two possibilities for each constraint j:

• μj = 0: the constraint gj(x) ≤ 0 is non-binding ⇒ gj(x) < 0
OR
• gj(x) = 0: the constraint gj(x) ≤ 0 is binding ⇒ μj > 0

Owner Note: This should be "Karush", not Karam! William Karush wrote his M.S. thesis in 1939, in which these FONC conditions appeared in an appendix. His supervisor (Lawrence Graves) apparently did not encourage publication. Karush's contribution was overlooked until "re-discovered", apparently by Akira Takayama sometime in the 1970s. This was after independent discovery of essentially the same FONC by Fritz John in 1948 and by Harold Kuhn and Albert Tucker in the 1950s.
tesfatsi Note: More precisely, it follows from the complementary slackness conditions that: IF μ_j > 0, THEN g_j(x) = 0; IF g_j(x) < 0, THEN μ_j = 0. That is all the mathematics tells us here, since it is POSSIBLE that μ_j and g_j(x) could BOTH be zero at the same time.
Page 41: Economic Dispatch and Introduction to Optimisation


Warning!

• Difficulty with the complementary slackness conditions:
  – They tell us that an inequality constraint is either binding or non-binding
  – They DON’T tell us which constraints are binding and which are non-binding
  – The binding constraints have to be identified through trial and error

Example

Minimise f(x1, x2) = 0.25x1² + x2²

Subject to:

ω(x1, x2) ≡ 5 − x1 − x2 = 0
g(x1, x2) ≡ x1 + 0.2x2 − 3 ≤ 0

tesfatsi Note: To write this inequality constraint in the form g(x) = [d − z(x)] ≤ 0, one would set d = −3 and z(x) = −[x_1 + 0.2 x_2].
Page 42: Economic Dispatch and Introduction to Optimisation


Example

ω(x1, x2) ≡ 5 − x1 − x2 = 0
g(x1, x2) ≡ x1 + 0.2x2 − 3 ≤ 0

[Figure: contours of f(x1, x2) = 0.25x1² + x2² in the (x1, x2) plane, with the constraint line ω = 0 and the boundary of the region g ≤ 0]

Example

ℓ(x1, x2, λ, μ) = f(x1, x2) + λ ω(x1, x2) + μ g(x1, x2)
              = 0.25x1² + x2² + λ(5 − x1 − x2) + μ(x1 + 0.2x2 − 3)

∂ℓ/∂x1 ≡ 0.5x1 − λ + μ = 0
∂ℓ/∂x2 ≡ 2x2 − λ + 0.2μ = 0
∂ℓ/∂λ ≡ 5 − x1 − x2 = 0
∂ℓ/∂μ ≡ x1 + 0.2x2 − 3 ≤ 0

μ g(x) ≡ μ(x1 + 0.2x2 − 3) = 0 and μ ≥ 0

Owner Note: Here the shading is a bit confusing because it is on the "wrong side" of the indicated boundary line for the set of x satisfying g(x) ≤ 0, i.e., the shading is on the *infeasible* side where g(x) > 0. The set of feasible choice points x for the problem at hand is the intersection of the indicated w(x) = 0 line and the set of points to the LEFT of the indicated g(x) = 0 line that bounds the region of x points satisfying g(x) ≤ 0.
Owner Note: This is consistent with the form of the Lagrangian in our "Optimization Basics" notes. To see this, let d = −3 and z(x) = −[x_1 + 0.2 x_2]. Then the desired inequality constraint takes the form z(x) ≥ d, and this portion of the Lagrangian function takes the form + μ[d − z(x)].
Page 43: Economic Dispatch and Introduction to Optimisation


Example

The KKT conditions do not tell us whether the inequality constraint is binding; we must use a trial-and-error approach.

Trial 1: Assume the inequality constraint is not binding ⇒ µ = 0

From the solution of the example without the inequality constraint, we know that the solution is then:

x1 = 4; x2 = 1

but this means that:

x1 + 0.2 x2 − 3 = 1.2 > 0

This solution violates the inequality constraint and is thus not acceptable.

Example

Trial 2: Assume that the inequality constraint is binding

∂l/∂λ ≡ 5 − x1 − x2 = 0

∂l/∂µ ≡ x1 + 0.2 x2 − 3 = 0

∂l/∂x1 ≡ 0.5 x1 − λ + µ = 0

∂l/∂x2 ≡ 2 x2 − λ + 0.2 µ = 0

Solving this linear system gives:

x1 = 2.5; x2 = 2.5; λ = 5.9375; µ = 4.6875

Since µ ≥ 0, all KKT conditions are satisfied. This solution is acceptable.
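Under the binding assumption of Trial 2, the four KKT conditions are linear equations and can be solved directly. A minimal sketch in plain Python; the substitution steps mirror the algebra implied by the slide:

```python
# Trial 2: assume the inequality constraint x1 + 0.2*x2 - 3 <= 0 is
# binding, so it holds with equality alongside x1 + x2 = 5.

# Binding constraints: subtracting them gives 0.8*x2 = 2
x2 = (5.0 - 3.0) / 0.8
x1 = 5.0 - x2

# Stationarity: 0.5*x1 - lam + mu = 0 and 2*x2 - lam + 0.2*mu = 0.
# Subtracting eliminates lam: 0.5*x1 - 2*x2 + 0.8*mu = 0.
mu = (2.0 * x2 - 0.5 * x1) / 0.8
lam = 0.5 * x1 + mu

print(x1, x2, lam, mu)  # -> 2.5 2.5 5.9375 4.6875
assert mu >= 0          # sign condition of complementary slackness holds
```

Since µ ≥ 0 and both constraints hold, all KKT conditions are satisfied, confirming the slide's solution.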

Page 44: Economic Dispatch and Introduction to Optimisation


Example: Graphical Solution

ω(x1, x2) ≡ 5 − x1 − x2 = 0

g(x1, x2) ≡ x1 + 0.2 x2 − 3 ≤ 0

(Figure: contours of f(x1, x2) = 0.25 x1² + x2² in the (x1, x2) plane, marking the solution of the problem with the inequality constraint, the solution of the problem without the inequality constraint, and the solution of the problem without any constraints.)

Application to Economic Dispatch

(One-line diagram: generators G1 and G2, with outputs x1 and x2, supply a load L.)

minimise f(x1, x2) = C1(x1) + C2(x2)

s.t. ω(x1, x2) ≡ L − x1 − x2 = 0

with the generator limits x1min ≤ x1 ≤ x1max and x2min ≤ x2 ≤ x2max written as inequality constraints:

g1(x1, x2) ≡ x1 − x1max ≤ 0

g2(x1, x2) ≡ x1min − x1 ≤ 0

g3(x1, x2) ≡ x2 − x2max ≤ 0

g4(x1, x2) ≡ x2min − x2 ≤ 0

Page 45: Economic Dispatch and Introduction to Optimisation


Application to Economic Dispatch

l(x1, x2, λ, µ1, µ2, µ3, µ4) = C1(x1) + C2(x2) + λ(L − x1 − x2)
    + µ1(x1 − x1max) + µ2(x1min − x1)
    + µ3(x2 − x2max) + µ4(x2min − x2)

KKT Conditions:

∂l/∂x1 ≡ dC1/dx1 − λ + µ1 − µ2 = 0

∂l/∂x2 ≡ dC2/dx2 − λ + µ3 − µ4 = 0

∂l/∂λ ≡ L − x1 − x2 = 0

Application to Economic Dispatch

KKT Conditions (continued):

∂l/∂µ1 ≡ x1 − x1max ≤ 0;  µ1(x1 − x1max) = 0;  µ1 ≥ 0

∂l/∂µ2 ≡ x1min − x1 ≤ 0;  µ2(x1min − x1) = 0;  µ2 ≥ 0

∂l/∂µ3 ≡ x2 − x2max ≤ 0;  µ3(x2 − x2max) = 0;  µ3 ≥ 0

∂l/∂µ4 ≡ x2min − x2 ≤ 0;  µ4(x2min − x2) = 0;  µ4 ≥ 0

Page 46: Economic Dispatch and Introduction to Optimisation


Solving the KKT Equations

Trial #1: No generator is at a limit

No inequality constraint is binding ⇒ all µ's are equal to zero

∂l/∂x1 ≡ dC1/dx1 − λ = 0

∂l/∂x2 ≡ dC2/dx2 − λ = 0

∂l/∂λ ≡ L − x1 − x2 = 0

⇒ dC1/dx1 = dC2/dx2 = λ

All generators operate at the same incremental cost
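For quadratic cost curves the equal-incremental-cost condition gives a closed-form dispatch. A minimal sketch; the cost coefficients and load below are illustrative assumptions, not data from the slides:

```python
# Assume C_i(P) = a_i*P + 0.5*b_i*P**2, so the incremental cost is
# MC_i(P) = a_i + b_i*P.  Setting MC_i = lambda for every unit and
# enforcing sum(P_i) = L yields lambda in closed form.

def equal_lambda_dispatch(units, load):
    """units: list of (a, b) pairs; generator limits ignored (Trial #1)."""
    # MC_i = lambda  =>  P_i = (lambda - a_i) / b_i
    # sum P_i = load =>  lambda = (load + sum(a_i/b_i)) / sum(1/b_i)
    lam = (load + sum(a / b for a, b in units)) / sum(1.0 / b for a, b in units)
    return lam, [(lam - a) / b for a, b in units]

lam, P = equal_lambda_dispatch([(2.0, 0.05), (3.0, 0.10)], load=100.0)
# Every unit now runs where its incremental cost equals lam.
```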

Solving the KKT Equations

Trial #2: Generator 1 is at its upper limit; other limits not binding

x1 − x1max = 0 ⇒ µ1 ≥ 0; µ2 = µ3 = µ4 = 0

∂l/∂x1 ≡ dC1/dx1 − λ + µ1 = 0  ⇒  dC1/dx1 = λ − µ1 ≤ λ

∂l/∂x2 ≡ dC2/dx2 − λ = 0  ⇒  dC2/dx2 = λ

All generators do NOT operate at the same incremental cost!

The incremental cost of unit 1 is lower.

If that were possible, more power would be produced by unit 1.

Page 47: Economic Dispatch and Introduction to Optimisation


Solving the KKT Equations

Trial #3: Generator 1 is at its lower limit; other limits not binding

x1min − x1 = 0 ⇒ µ2 ≥ 0; µ1 = µ3 = µ4 = 0

∂l/∂x1 ≡ dC1/dx1 − λ − µ2 = 0  ⇒  dC1/dx1 = λ + µ2 ≥ λ

∂l/∂x2 ≡ dC2/dx2 − λ = 0  ⇒  dC2/dx2 = λ

Again, all generators do NOT operate at the same incremental cost!

The incremental cost of unit 1 is higher.

If that were possible, less power would be produced by unit 1.

Physical Interpretation of Lagrange Multipliers

Minimise C(x)

subject to: ω(x) = L ⇔ L − ω(x) = 0

and: g(x) ≥ K ⇔ K − g(x) ≤ 0

l(x, λ, µ) = C(x) + λ(L − ω(x)) + µ(K − g(x))

At the optimum (x*, λ*, µ*), the terms λ*(L − ω(x*)) and µ*(K − g(x*)) are both zero (by feasibility and complementary slackness), so:

l(x*, λ*, µ*) = C(x*)

∂l/∂L |optimum = λ*   → marginal cost of the equality constraint

∂l/∂K |optimum = µ*   → marginal cost of the inequality constraint

[Note] For clarity, it would have been preferable to introduce a DIFFERENT name for the "g(x)" function here, since it is not the same as the previous g(x) function. For example, the "g(x)" function here might instead be named z(x), and the "constraint constant" K could instead be called d, as in our other class notes.

[Note] CAUTION: The brackets below each term INCLUDE the multiplier. That is, by the FONC (in particular, the complementary slackness conditions), the FONC solution values (x*, λ*, µ*) satisfy 0 = λ*[L − ω(x*)] and 0 = µ*[K − g(x*)].
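The shadow-price interpretation can be checked numerically on the earlier worked example, where both constraints bind at the optimum. One caveat on signs: in that example the inequality was written g(x) − K ≤ 0 with K = 3, the reverse of the K − g(x) ≤ 0 form above, so there ∂(minimised cost)/∂K = −µ*. A sketch:

```python
# Minimise 0.25*x1**2 + x2**2 s.t. x1 + x2 = L and x1 + 0.2*x2 <= K.
# At (L, K) = (5, 3) both constraints bind, so the optimum solves two
# linear equations and the minimised cost is an explicit function of
# (L, K).  Central finite differences then recover lambda* and mu*.

def min_cost(L, K):
    x2 = (L - K) / 0.8      # from x1 + x2 = L and x1 + 0.2*x2 = K
    x1 = L - x2
    return 0.25 * x1**2 + x2**2

h = 1e-6
lam_star = (min_cost(5 + h, 3) - min_cost(5 - h, 3)) / (2 * h)   # ~ 5.9375
mu_star = -(min_cost(5, 3 + h) - min_cost(5, 3 - h)) / (2 * h)   # ~ 4.6875
```

The finite-difference values match the multipliers λ* = 5.9375 and µ* = 4.6875 found earlier, illustrating that the multipliers are the marginal costs of the binding constraints.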
Page 48: Economic Dispatch and Introduction to Optimisation


Physical Interpretation of Lagrange Multipliers

• Constraints can never decrease the cost of a solution; they either increase it or leave it unchanged

• The Lagrange multipliers give the incremental cost of the binding constraints at the optimum

• They are sometimes called shadow costs

• Non-binding inequality constraints have a zero incremental cost

Practical Economic Dispatch

[Note] More precisely, when inequality constraints take the form z(x) ≥ d, an INCREASE in the constraint constant d SHRINKS the size of the choice set for x, and conversely, a DECREASE in d EXPANDS the size of the choice set for x. It follows that minimised costs must either go up or stay the same if d increases; they cannot get smaller. Similarly, costs must either decline or stay the same if d decreases; they cannot go up. It follows that the multiplier vector µ corresponding to z(x) ≥ d must be nonnegative, since it gives the CHANGE in minimised costs with respect to a CHANGE in d, and this derivative is non-negative.
Page 49: Economic Dispatch and Introduction to Optimisation


Equal Incremental Cost Dispatch

(Figure: the incremental cost curves dCA/dPA, dCB/dPB and dCC/dPC plotted against PA, PB and PC, and their horizontal sum plotted against PA + PB + PC; a horizontal line at the common λ fixes each unit's output.)

Implementation: Lambda Search Algorithm

1. Choose a starting value for λ

2. Calculate PA, PB, PC such that ∂CA(PA)/∂PA = ∂CB(PB)/∂PB = ∂CC(PC)/∂PC = λ

3. If one of these values exceeds its lower or upper limit, fix it at that limit

4. Calculate PTOTAL = PA + PB + PC

5. If PTOTAL < L, then increase λ; Else If PTOTAL > L, then decrease λ; Else If PTOTAL ≈ L, then exit

6. Go To Step 2
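The steps above can be sketched as a bisection on λ, which is a systematic way to do step 5's increase/decrease. The quadratic cost curves and unit data are illustrative assumptions, not from the slides:

```python
# Each unit has incremental cost MC_i(P) = a_i + b_i*P and limits
# [pmin, pmax].  For a given lambda, each unit is dispatched where
# MC_i = lambda, then clipped to its limits (steps 2-3); lambda is
# then adjusted until total generation matches the load (step 5).

def dispatch_at_lambda(units, lam):
    out = []
    for a, b, pmin, pmax in units:
        p = (lam - a) / b                    # solves a + b*p = lambda
        out.append(min(max(p, pmin), pmax))  # fix at a limit if exceeded
    return out

def lambda_search(units, load, tol=1e-9):
    lo, hi = 0.0, 1000.0                     # assumed bracket for lambda
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if sum(dispatch_at_lambda(units, lam)) < load:
            lo = lam                         # too little generation: raise lambda
        else:
            hi = lam                         # too much generation: lower lambda
    lam = 0.5 * (lo + hi)
    return lam, dispatch_at_lambda(units, lam)

units = [(2.0, 0.05, 20.0, 100.0), (3.0, 0.10, 20.0, 80.0)]  # (a, b, pmin, pmax)
lam, P = lambda_search(units, load=100.0)
```

Clipping the outputs at the limits makes total generation a monotone function of λ, which is what allows bisection to replace the ad hoc increase/decrease loop.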

[Note] CAUTION: In returning to Step 2 for the nth time (with n > 1), you should only include dispatch levels for firms whose dispatch levels have not already been determined by fixing them at their upper or lower capacity limits.

There is a sense in which Kirschen's form of the algorithm is technically correct here, but it requires a careful interpretation of "marginal cost" in the presence of binding capacity constraints on the generators. In effect, capacity constraints introduce "kink points" in cost curves and "jump points" in marginal cost curves. To be able to conclude that all marginal costs are still "equal" at the optimal solution, we have to include vertical segments of marginal cost curves and speak about "left-hand" and "right-hand" marginal costs. (See Kirschen's examples on the next couple of pages.)

As previously noted (pages 46-47), once an inequality constraint is binding on some dispatch level, e.g., an upper bound on P_A, the "left-hand" marginal cost of firm A can be strictly lower than the marginal costs of the firms supplying the remaining dispatch amounts, e.g., P_B and P_C. However, the "right-hand" marginal cost of firm A at its upper operating capacity limit is effectively infinite. Consequently, whatever λ turns out to be at the optimal solution x*, it will fall between the left-hand and right-hand marginal costs of firm A at its optimal dispatch point. It is in this sense that the marginal cost of firm A can be said to still be "equal" to λ at the solution.
Page 50: Economic Dispatch and Introduction to Optimisation


Piecewise Linear Cost Curves

(Figure: piecewise linear cost curves CA(PA) and CB(PB); their incremental cost curves dCA/dPA and dCB/dPB are staircase functions.)

Economic Dispatch with Piecewise Linear Cost Curves

(Figure: the same curves with a horizontal line at λ; each unit's dispatch is read off where λ crosses its incremental cost staircase.)

[Note] REMARK: Piecewise linear cost functions are "piecewise differentiable": they are differentiable everywhere except at their kink points.


Page 53: Economic Dispatch and Introduction to Optimisation


Economic Dispatch with Piecewise Linear Cost Curves

• All generators except one are at breakpoints of their cost curve

• The marginal generator is between breakpoints, to balance the load and generation

• Not well-suited for the lambda-search algorithm

• Very fast table lookup algorithm:

  – Rank all the segments of the piecewise linear cost curves in order of incremental cost

  – First dispatch all the units at their minimum generation

  – Then go down the table until the generation matches the load

Example

(Figure: staircase incremental cost curves dCA/dPA and dCB/dPB. Unit A: minimum 30 MW; segments 30–80 MW at 0.3, 80–120 MW at 0.5, 120–150 MW at 0.7. Unit B: minimum 20 MW; segments 20–70 MW at 0.1, 70–110 MW at 0.6, 110–140 MW at 0.8.)

Unit     PSegment          Ptotal   Lambda
A&B min  20 + 30 = 50      50       –
B        70 − 20 = 50      100      0.1
A        80 − 30 = 50      150      0.3
A        120 − 80 = 40     190      0.5
B        110 − 70 = 40     230      0.6
A        150 − 120 = 30    260      0.7
B        140 − 110 = 30    290      0.8

If Pload = 210 MW, the optimal economic dispatch is:

PA = 120 MW

PB = 90 MW

Lambda = 0.6
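The table-lookup algorithm above is only a few lines of code; the unit data are taken directly from this slide's example:

```python
# Merit-order (table lookup) dispatch for piecewise linear cost curves.
# Each unit: minimum output plus (segment_end_MW, incremental_cost) pairs.

units = {
    "A": {"pmin": 30.0, "segments": [(80.0, 0.3), (120.0, 0.5), (150.0, 0.7)]},
    "B": {"pmin": 20.0, "segments": [(70.0, 0.1), (110.0, 0.6), (140.0, 0.8)]},
}

def merit_order_dispatch(units, load):
    # First dispatch all units at their minimum generation
    dispatch = {name: u["pmin"] for name, u in units.items()}
    remaining = load - sum(dispatch.values())
    # Rank all segments of all units in order of incremental cost
    table = sorted((cost, name, end)
                   for name, u in units.items()
                   for end, cost in u["segments"])
    lam = None
    # Go down the table until the generation matches the load
    for cost, name, end in table:
        if remaining <= 0:
            break
        take = min(end - dispatch[name], remaining)
        dispatch[name] += take
        remaining -= take
        lam = cost          # the marginal (partially loaded) segment sets lambda
    return dispatch, lam

dispatch, lam = merit_order_dispatch(units, load=210.0)
print(dispatch, lam)  # -> {'A': 120.0, 'B': 90.0} 0.6
```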

[Note] What Kirschen is doing here in this table is constructing the total supply schedule from the individual supply schedules of firms A and B. It would have been useful to PLOT this total supply schedule in the price-quantity plane, as is done for the individual firm supply schedules; we learned how to do this in earlier class notes titled "Micro Basics". The "lambda" column gives the minimum acceptable sale price (i.e., the reservation sale price) for each additional block of real power offered along the total supply schedule. In the "Micro Basics" class notes the successive "blocks" offered by sellers were always assumed to consist of 1-unit increments, whereas here the generators are permitted to offer successive blocks of power that are arbitrary in size (e.g., 50 MW, 40 MW, etc.).

[Note] The "optimal dispatch" is simply the intersection of the total supply schedule with the total demand schedule. For the case at hand, the total demand schedule is a vertical line at the level 210 MW of the given fixed demand (load), hence the price-elasticity of demand is 0. More precisely, recall that the price-elasticity of demand is defined to be the percentage change in quantity demanded per percentage change in price. Here there is NO change in quantity demanded regardless of the price, so the price-elasticity of demand is zero.
Page 54: Economic Dispatch and Introduction to Optimisation


Network Considerations

• Assumed so far that all generators and loads are located on the same bus

• Ignored network effects:

  – Losses

  – Transmission constraints

(One-line diagram: generators A and B and load L connected by a transmission network.)

A more realistic situation...

• Generator A is cheaper, but supplying the load from A causes more losses in the transmission system

• Need to take the losses into consideration when doing the optimisation

(One-line diagram: A, the cheap generator, and B, the more expensive generator, at opposite ends of the network.)

[Note] Network considerations (line losses, congestion) are taken up in Kirschen/Strbac Chapter 6. We will come to this chapter in a later section of the course and address these issues more carefully at that time.
Page 55: Economic Dispatch and Introduction to Optimisation


Economic Dispatch with Transmission Losses

• Lagrangian Function:

L = CA(PA) + CB(PB) + λ(L + PL − PA − PB)     (PL = transmission losses)

• Conditions for Optimality:

∂L/∂PA ≡ dCA/dPA − λ(1 − ∂PL/∂PA) = 0

∂L/∂PB ≡ dCB/dPB − λ(1 − ∂PL/∂PB) = 0

∂L/∂λ ≡ L + PL − PA − PB = 0

⇒ [1/(1 − ∂PL/∂PA)]·(dCA/dPA) = [1/(1 − ∂PL/∂PB)]·(dCB/dPB) = λ

Incremental Losses and Penalty Factors

• Incremental generation costs are multiplied by penalty factors to take losses into account

Incremental loss for bus A: ∂PL/∂PA

Penalty factor for bus A: PFA = 1/(1 − ∂PL/∂PA)

PFA·(dCA/dPA) = PFB·(dCB/dPB) = λ
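A small numerical sketch of loss-aware dispatch with penalty factors. All data here are illustrative assumptions: quadratic incremental costs, and a deliberately simple loss model PL = β·PA² in which losses depend only on generator A's output (so PFB = 1):

```python
# With PL = beta*PA**2: dPL/dPA = 2*beta*PA and dPL/dPB = 0, so the
# optimality conditions reduce to PF_A*MC_A(PA) = MC_B(PB) = lambda
# together with the balance PA + PB = L + PL.  That is one nonlinear
# equation in PA, solved here by bisection.

beta, load = 0.001, 100.0
mc_a = lambda p: 2.0 + 0.05 * p     # assumed incremental cost of A
mc_b = lambda p: 3.0 + 0.10 * p     # assumed incremental cost of B

def residual(pa):
    pb = load + beta * pa**2 - pa           # balance: PA + PB = L + PL
    lam = mc_b(pb)                          # PF_B = 1, so lambda = MC_B(PB)
    pf_a = 1.0 / (1.0 - 2.0 * beta * pa)    # penalty factor for bus A
    return pf_a * mc_a(pa) - lam            # zero at the optimum

lo, hi = 0.0, load                          # residual changes sign on [0, load]
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
pa = 0.5 * (lo + hi)
pb = load + beta * pa**2 - pa
```

With these numbers PA comes out below its lossless dispatch: the penalty factor makes the cheap but loss-causing unit A less attractive, as the preceding slide's discussion anticipates.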

Page 56: Economic Dispatch and Introduction to Optimisation


Limitations of this Approach

• To calculate the penalty factors, we need to know the relation between the losses and each generator's output

• This relation is complex and depends on all the injections in the system

• Approximate formulas have been developed but need to be adjusted for each system configuration

• Does not take network constraints into account

⇒ A rigorous solution requires an Optimal Power Flow (OPF)

Local and Global Optima

Page 57: Economic Dispatch and Introduction to Optimisation


Which one is the real maximum?

(Figure: a function f(x) of one variable with two local maxima, at x = A and x = D.)

For x = A and x = D, we have: df/dx = 0 and d²f/dx² < 0

Which one is the real optimum?

(Figure: contours of a function f(x1, x2) with four local minima, at points A, B, C and D.)

A, B, C and D are all minima because, at each of them, ∂f/∂x1 = 0 and ∂f/∂x2 = 0, and the Hessian of f is positive definite: ∂²f/∂x1² > 0, ∂²f/∂x2² > 0 and (∂²f/∂x1²)(∂²f/∂x2²) − (∂²f/∂x1∂x2)² > 0

Page 58: Economic Dispatch and Introduction to Optimisation


Local and Global Optima

• The optimality conditions are local conditions

• They do not compare separate optima

• If I find an optimum, can I be sure that it is the global optimum?

• In general, to find the global optimum, we must find and compare all the optima

• In large problems, this can be very difficult and time consuming

Convexity

• If the feasible set is convex and the objective function is convex, every local minimum is a global minimum; if the objective is strictly convex, that minimum is unique

Page 59: Economic Dispatch and Introduction to Optimisation


Examples of Convex Feasible Sets

(Figure: three convex feasible sets in the (x1, x2) plane, plus a one-dimensional convex set: the interval x1min ≤ x1 ≤ x1max on the x1 axis.)

Examples of Non-Convex Feasible Sets

(Figure: three non-convex feasible sets in the (x1, x2) plane, plus a one-dimensional non-convex set: the union of the intervals [x1a, x1b] and [x1c, x1d] on the x1 axis.)

Page 60: Economic Dispatch and Introduction to Optimisation


A set is convex if, for any two points belonging to the set, all the points on the straight line joining these two points belong to the set.

(Figure: the convex and non-convex feasible sets of the previous slide, redrawn with chords joining pairs of points to illustrate the definition.)

Page 61: Economic Dispatch and Introduction to Optimisation


Example of Convex Function

(Figure: a bowl-shaped function f(x) of one variable, and the elliptical contours of a convex function of x1 and x2.)

Page 62: Economic Dispatch and Introduction to Optimisation


Example of Non-Convex Function

(Figure: a function f(x) of one variable with several local minima and maxima, and the contours of a non-convex function of x1 and x2 with local minima at A, B, C and D.)

Page 63: Economic Dispatch and Introduction to Optimisation


Definition of a Convex Function

(Figure: a convex function f(x); the chord from (xa, f(xa)) to (xb, f(xb)) lies on or above the graph, so its height z at the intermediate point y satisfies z ≥ f(y).)

A convex function is a function such that, for any two points xa and xb belonging to the feasible set and any k such that 0 ≤ k ≤ 1, we have:

z = k f(xa) + (1 − k) f(xb) ≥ f(y) = f[k xa + (1 − k) xb]
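The definition can be checked numerically for the objective used in the earlier example, f(x1, x2) = 0.25 x1² + x2²; a quick randomized sketch:

```python
# For random point pairs (xa, xb) and weights k in [0, 1], the chord
# value k*f(xa) + (1-k)*f(xb) must never fall below f evaluated at the
# convex combination y = k*xa + (1-k)*xb if f is convex.
import random

f = lambda x1, x2: 0.25 * x1**2 + x2**2

random.seed(0)
for _ in range(1000):
    xa = (random.uniform(-10, 10), random.uniform(-10, 10))
    xb = (random.uniform(-10, 10), random.uniform(-10, 10))
    k = random.random()
    y = (k * xa[0] + (1 - k) * xb[0], k * xa[1] + (1 - k) * xb[1])
    chord = k * f(*xa) + (1 - k) * f(*xb)
    assert chord >= f(*y) - 1e-12   # the convexity inequality holds
```

A single failing pair would prove non-convexity; passing random checks is of course only evidence, not a proof.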

Example of Non-Convex Function

(Figure: a non-convex f(x) for which part of the chord between two points lies below the graph, violating the definition.)

Page 64: Economic Dispatch and Introduction to Optimisation


Importance of Convexity

• If we can prove that a minimisation problem is convex:

  – Convex feasible set

  – Convex objective function

  ⇒ then every local minimum is a global minimum (and the minimum is unique if the objective is strictly convex)

• Proving convexity is often difficult

• Power system problems are usually not convex

  ⇒ There may be more than one local optimum in power system optimisation problems