
ELSEVIER
Electrical Power & Energy Systems Vol. 18, No. 6, pp. 339-346, 1996
Copyright © 1996 Published by Elsevier Science Ltd
Printed in Great Britain. All rights reserved
0142-0615(95)00013-1   0142-0615/96/$15.00+0.00

Unit commitment by genetic algorithm with penalty methods and a comparison of Lagrangian search and genetic algorithm - economic dispatch example

G B Sheblé, T T Maifeld, K Brittig, G Fahd and S Fukurozaki-Coppinger
Electrical Engineering Department, Iowa State University, Ames, IA 50011, USA

A genetic algorithm is a random search procedure based on the survival-of-the-fittest theory. This paper presents the genetic algorithm applied to the unit commitment scheduling problem and to the economic dispatch of generating units. The first half of the paper applies the genetic algorithm to the unit commitment scheduling problem, which is the problem of determining the optimal set of generating units within a power system to be used during the next one to seven days. The first half of the paper presents an explanation of the genetic-based unit commitment algorithm, the implementation of this algorithm, and a discussion of the problems encountered when using this algorithm with penalty methods for unit commitment scheduling. The second half of the paper applies a genetic algorithm to solve the economic dispatch problem. Using the economic dispatch problem as a basis for comparison, several approaches to implementing a refined genetic algorithm are explored. The results are verified for a sample problem using a classical Lagrangian search technique. Copyright © 1996 Published by Elsevier Science Ltd.

Keywords: unit commitment, genetic algorithm, scheduling, optimization, Lagrangian relaxation, dispatch

I. Introduction
A need for optimality exists in the highly nonlinear and computationally difficult power systems environment. Genetic algorithms (GAs), unlike strict mathematical methods, have the apparent ability to adapt to the nonlinearities and discontinuities commonly found in power systems [1].

Received 3 February 1994; accepted 26 October 1994

II. Genetic algorithms
GAs are a global optimization technique based on the operations observed in natural selection and genetics. They operate on string structures, typically a concatenated list of binary digits representing a coding of the parameters for a given problem. Many such string structures are considered simultaneously, with the most fit of these structures receiving exponentially increasing opportunities to pass on genetically important material to successive generations of string structures. In this way, GAs search from many points in the search space at once, and yet continually narrow the focus of the search to the areas of the observed best performance. GAs differ from more traditional optimization techniques in four important ways [2].

• GAs use objective function information (evaluation of a given function using the parameters encoded in the string structure) to guide the search, not derivatives or other auxiliary information.

• GAs use a coding of the parameters used to calculate the objective function in guiding the search, not the parameters themselves.

• GAs search through many points in the solution space at one time, not a single point.

• GAs use probabilistic rules, not deterministic rules, in moving from one set of solutions (a population) to the next.

Three basic operators comprise a GA: reproduction, crossover, and mutation. Reproduction effectively selects the fittest of the strings in the current population to be used in generating the next population. In this way, relevant information concerning the fitness of a string is passed along to successive generations. It can be shown that GAs actually allocate exponentially increasing trials to the most fit of these strings. Crossover serves as a mechanism by which strings can exchange information, possibly creating more highly fit strings in the process and allowing the exploration of new regions of the search space. The last of the GA operators is mutation, which is generally considered a secondary operator. Mutation ensures that a string position will never be fixed at a certain value for all time.

The fundamental theorem of GAs states that schemata with above-average fitness values, short defining lengths, and low order are given an exponentially increasing number of trials as the search progresses. Schemata are similarity templates which describe a subset of strings with similarities at certain string positions. The defining length of a schema is the distance between the first and the last specific string position. The order of a schema is defined as the number of fixed positions present in the template. Highly fit, low-order schemata are termed building blocks. Just as a building is constructed from the ground up using many small, strong bricks, strings in a GA are constructed by reproducing short, low-order, highly fit schemata and exchanging this information between strings. Thus, the best strings in a given population are reproduced and allowed to share, with other highly fit strings, the information that has allowed these strings to survive.
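As an illustration of these definitions (not from the original paper), the following sketch computes the order and defining length of a schema written over a binary alphabet with '*' as the don't-care symbol.

```python
# Illustrative sketch: order and defining length of a schema, with '*' as the
# don't-care (wild-card) symbol. Function names are ours, not the paper's.
def schema_order(schema: str) -> int:
    """Order: the number of fixed (non-'*') positions in the template."""
    return sum(1 for c in schema if c != '*')

def defining_length(schema: str) -> int:
    """Defining length: distance between the first and last fixed positions."""
    fixed = [i for i, c in enumerate(schema) if c != '*']
    return fixed[-1] - fixed[0] if fixed else 0

# Example: the schema *1**0* has order 2 and defining length 4 - 1 = 3.
print(schema_order('*1**0*'), defining_length('*1**0*'))   # prints: 2 3
```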

One of the main benefits of using a GA is that it can easily be implemented with concurrent processing, since most of the computation involved in performing a GA is done on individual strings. Much of this individual work can be performed in parallel by splitting the population into disjoint sub-populations and operating on them in parallel [3,4]. Owing to computer constraints, the GA here is performed on a single processor, but it could easily be implemented with concurrent processing. By doing so, the program's run time would decrease linearly with the number of processors used, the maximum number of processors being less than or equal to the GA population size.

III. Unit commitment
Unit commitment (UC) solution techniques use many assumptions to simplify and reduce the computational effort of the unit commitment problem. Research has focused on UC techniques with various degrees of near-optimality, efficiency, and ability to handle difficult constraints. Exhaustive enumeration is the only technique that can find the optimal solution, because it looks at every possible solution combination. Although optimality is important, the computer execution time for this method is far too great for it to be a feasible solution technique. Other techniques include priority list methods, which are simple and fast but highly heuristic. Non-heuristic methods include dynamic programming and branch-and-bound; these methods are general and flexible, but as the size of the problem increases the computation time becomes unrealistic. Between these two extremes lies the Lagrangian relaxation method, which seems to be a desirable compromise [5]. This method is efficient and well suited to large-scale UC problems; however, the non-convexity of the UC problem may lead to an infeasible solution of the relaxed problem [6]. Since most methods require approximations to manage the size of the solution space, accuracy is sacrificed for speed and cost. Unfortunately, these approximations typically reduce the ability of the algorithm to consistently find a good UC schedule.

UC is the problem of determining the optimal set of generating units, within a power system, to be used during the next one to seven days. The general UC problem is to minimize operational costs (mainly fuel cost), transition costs (start-up/shut-down costs), and no-load costs (idle, banking or standby).

The most computationally intensive part of a UC program is economic dispatch. Economic dispatch is the process of allocating the required load demand among the generating units such that the cost of operation is minimum. Approximately seventy percent of the computer time in the UC program is used in the calculation of economic dispatch. The lambda-iteration method was used for the economic dispatch in the genetic-based UC program [7].
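For readers unfamiliar with lambda iteration, the following sketch (our illustration, not the authors' code) shows a bisection-style lambda iteration for units with quadratic cost curves F_i(P) = c_i + b_i P + a_i P^2, using the three-unit data that appears later in Table 1.

```python
# Illustrative lambda-iteration dispatch for quadratic costs F(P) = c + b*P + a*P^2.
# The incremental cost is dF/dP = b + 2aP, so P_i(lambda) = (lambda - b_i)/(2 a_i),
# clamped to the unit limits; lambda is bisected until total generation meets demand.
def lambda_dispatch(units, demand, tol=1e-4):
    """units: list of dicts with keys a, b, pmin, pmax (quadratic cost data)."""
    def outputs(lam):
        return [min(max((lam - u['b']) / (2.0 * u['a']), u['pmin']), u['pmax'])
                for u in units]

    lo = min(u['b'] + 2.0 * u['a'] * u['pmin'] for u in units)   # lambda with all units at minimum
    hi = max(u['b'] + 2.0 * u['a'] * u['pmax'] for u in units)   # lambda with all units at maximum
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sum(outputs(mid)) < demand:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return lam, outputs(lam)

# Three-unit data from Table 1 (quadratic and linear cost coefficients, MW limits).
units = [{'a': 0.001562, 'b': 7.92, 'pmin': 100, 'pmax': 600},
         {'a': 0.00194,  'b': 7.85, 'pmin': 100, 'pmax': 400},
         {'a': 0.00482,  'b': 7.97, 'pmin': 50,  'pmax': 200}]
print(lambda_dispatch(units, 850))   # lambda near 9.148, outputs near the Table 2 solution
```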

The UC problem can be formulated as follows [8]:

\min \sum_{t=1}^{T} \sum_{n=1}^{N} \Big[ C_{nt}(P_{nt})\,U_{nt} + MAINT_{nt}\,U_{nt} + SUP_{nt}\,U_{nt}\,(1 - U_{n,t-1}) + SDOWN_{nt}\,(1 - U_{nt})\,U_{n,t-1} \Big]

with the following constraints.

Unit coupling constraints (for t = 1 to T):

\sum_{n=1}^{N} U_{nt}\,P_{nt} = D_t    (demand constraint)

\sum_{n=1}^{N} U_{nt}\,Pmax_{n} \ge D_t + R_t    (capacity constraint)

\sum_{n=1}^{N} U_{nt}\,Rsmax_{n} \ge R_t    (system reserve constraint)

Individual unit constraints (for n = 1 to N):

Pmin_{n} \le P_{nt} \le Pmax_{n}    (capacity limits of generating units when U_{nt} = 1)

|P_{nt} - P_{n,t-1}| \le Ramp_{n}    (generator ramp rate limits)

Variable definitions:

U_nt = up/down status of unit n in time period t (U_nt = 1 unit on, U_nt = 0 unit off)
P_nt = power generation of unit n in time period t
D_t = load level in time period t
R_t = system reserve requirement in time period t
C_nt = production cost of unit n in time period t
SUP_nt = start-up cost of unit n in time period t
SDOWN_nt = shut-down cost of unit n in time period t
MAINT_nt = maintenance cost of unit n in time period t
N = number of units
T = number of time periods
Pmin_n = generation low limit of unit n
Pmax_n = generation high limit of unit n
Rsmax_n = maximum contribution to reserve of unit n
Ramp_n = ramp rate limit of unit n

IV. Genetic algorithm implementation
The GA implementation consists of initialization, penalty methods for the unmet constraints, calculation of cost, elitism, reproduction, crossover, and mutation of the UC schedules. A flowchart of the algorithm is given in Figure 1.

Each part of the GA implementation can be explained as follows:

To initialize the population, a vector with dimension equal to the number of generators times the number of scheduling periods is randomly filled with zeros and ones. This UC schedule vector represents one member of the population. The vector initialization is then repeated for each member of the population. For this research, 9 generators and a scheduling period of 24 hours were used, so the vector dimension was 216. The algorithm can easily be expanded to include the desired number of generators and hours.

Figure 1. GA flowchart

Penalty methods are an optimization technique that take a constrained problem and represent it as an unconstrained problem with an extra term in the objective function, assigning a cost to solutions that are not within the constrained region [9]. The penalty here consisted of a constant multiplied by the generation number; thus in early generations the cost of violating a constraint was small, but in later generations the cost was quite large. The penalty method was applied to the following constraints: minimum down-time, minimum up-time, system demand, excess generation, and spinning reserve. The calculation of the cost of a UC schedule consists of:

(1) calculating start-up and shut-down costs for each unit;
(2) calculating maintenance cost for each unit; and
(3) calculating fuel cost for each UC schedule by running an economic dispatch algorithm for each hour and then summing over the scheduling period to obtain the total cost.
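The following sketch (our illustration; the helper names, penalty weight, and data layout are assumptions, not the authors' code) shows how such a penalized schedule cost might be assembled, with the penalty constant multiplied by the generation index so that violations become progressively more expensive.

```python
# Illustrative penalized UC schedule cost. Assumptions: schedule is a units-by-hours
# 0/1 matrix, dispatch_cost(schedule, hour) returns the fuel cost of an economic
# dispatch for that hour, and count_violations(schedule) counts unmet constraints
# (minimum up/down-time, demand, excess generation, spinning reserve).
def schedule_cost(schedule, generation_index, units, dispatch_cost,
                  count_violations, base_penalty=10.0):
    cost = 0.0
    hours = len(schedule[0])
    for t in range(hours):
        cost += dispatch_cost(schedule, t)                    # fuel cost via dispatch
        for n, unit in enumerate(units):
            if schedule[n][t]:
                cost += unit['maintenance']                   # maintenance cost when on
            if t > 0 and schedule[n][t] and not schedule[n][t - 1]:
                cost += unit['startup']                       # start-up transition
            if t > 0 and not schedule[n][t] and schedule[n][t - 1]:
                cost += unit['shutdown']                      # shut-down transition
    # Penalty grows with the generation index: small early, large late.
    cost += base_penalty * generation_index * count_violations(schedule)
    return cost                                               # member fitness is 1 / cost
```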

Elitism ensures that the best individuals are never lost in moving from one generation to the next: the best members of each generation are saved and copied into the next generation. Reproduction is the mechanism by which the most highly fit members of a population are selected to pass on information to the next population of members. The fitness of each member is calculated by taking the inverse of the cost of the member's UC schedule, so cheaper UC schedules result in a higher fitness. The strings kept for reproduction are determined by roulette wheel selection. Crossover is the primary genetic operator that promotes the exploration of new regions of the search space. It is a structured, yet randomized, mechanism for exchanging information between strings. Crossover begins by selecting at random two members previously placed in the mating pool during reproduction. A crossover point is then selected at random, and information from one parent, up to the crossover point, is exchanged with the other parent, thus creating two new members for the next generation. Mutation is generally considered a secondary operator. Mutation ensures that no string position will ever be fixed at a certain value for all time. Mutation operates by toggling, in a binary alphabet, any given string position with probability of mutation P_m.
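A compact sketch of these operators on bit-string members follows (our illustration; the paper gives no code).

```python
import random

# Illustrative versions of the operators described above, on lists of 0/1 bits.
def roulette_select(population, fitnesses):
    """Select one member with probability proportional to its fitness."""
    pick = random.uniform(0.0, sum(fitnesses))
    running = 0.0
    for member, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return member
    return population[-1]

def one_point_crossover(parent1, parent2):
    """Exchange everything up to a randomly chosen crossover point."""
    point = random.randrange(1, len(parent1))
    return parent2[:point] + parent1[point:], parent1[:point] + parent2[point:]

def mutate(member, p_m=0.02):
    """Toggle each bit with probability p_m (0.02 is the value listed below)."""
    return [bit ^ 1 if random.random() < p_m else bit for bit in member]
```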

V. GA parameters for unit commitment scheduling

Population size = 100
Maximum generations = 1000
Elitism number = 6
Probability of crossover = 0.65
Probability of mutation = 0.02

VI. Problems encountered for unit commitment scheduling by GA with penalty methods
The main obstacle the GA encountered in solving the UC scheduling problem was the constraints imposed by the UC problem. It was hoped that by penalizing each constraint, the GA would separate and reproduce the schedules that violated fewer constraints. It was thought that the GA would resolve the easy constraints (system demand and ready reserve) in early generations and the more difficult constraints (minimum down-time and minimum up-time) in later generations. By doing so, each generation was expected to contain a UC schedule with fewer violated constraints and a higher fitness. Instead, the GA reproduced UC schedules with violated constraints, which resulted in each generation having a population with fitness similar to the preceding generation. There seemed to be too many constraints in the UC problem for the GA to decipher.

An example with the minimum down-time constraint will be presented to illustrate the problem encountered. A 4-generator, 6-hour UC schedule will be used for this example.

Generator 1 minimum down-time = 6 hours
Generator 2 minimum down-time = 6 hours
Generator 3 minimum down-time = 6 hours
Generator 4 minimum down-time = 6 hours
0 = generator off for the given hour; 1 = generator on for the given hour.

Crossover point = random number * number of hours = 0.5 * 6 = 3

Before crossover:

Member 1               Member 2
Hour    1 2 3 4 5 6    Hour    1 2 3 4 5 6
Gen 1   0 0 0 0 0 0    Gen 1   1 1 1 1 1 1
Gen 2   0 0 0 0 0 0    Gen 2   1 1 1 1 1 1
Gen 3   1 1 1 1 1 1    Gen 3   0 0 0 0 0 0
Gen 4   1 1 1 1 1 1    Gen 4   0 0 0 0 0 0

After crossover at hour 3:

New Member 1           New Member 2
Hour    1 2 3 4 5 6    Hour    1 2 3 4 5 6
Gen 1   1 1 1 0 0 0    Gen 1   0 0 0 1 1 1
Gen 2   1 1 1 0 0 0    Gen 2   0 0 0 1 1 1
Gen 3   0 0 0 1 1 1    Gen 3   1 1 1 0 0 0
Gen 4   0 0 0 1 1 1    Gen 4   1 1 1 0 0 0

Notice that before crossover the minimum down-time constraint is satisfied for both members (UC schedules), but after crossover each member has broken the minimum down-time constraint four times. This is only one constraint; including the other constraints only compounds the problem. This example was chosen to illustrate the problem encountered; usually a UC schedule did not have some units on for every hour and the others off for every hour. A more typical GA population member (UC schedule) looked like Figure 2.

Hour    1 2 3 4 5 6
Gen 1   1 0 1 0 0 0
Gen 2   1 1 0 0 1 0
Gen 3   0 0 1 1 0 1
Gen 4   1 0 0 1 0 1

Figure 2. Typical GA population member
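To make the violation count in the example concrete, here is a minimal sketch of a minimum down-time check (our illustration); the counting convention, namely that every off-period shorter than the minimum counts, including runs cut off by the schedule boundary, is an assumption that matches the count of four above.

```python
# Illustrative check: count minimum down-time violations in one unit's 0/1 row.
# Assumed convention: every maximal off-run shorter than min_down hours,
# including runs cut off by the schedule boundary, is one violation.
def count_min_downtime_violations(row, min_down):
    violations, off_run = 0, 0
    for on in row:
        if not on:
            off_run += 1
        else:
            if 0 < off_run < min_down:
                violations += 1
            off_run = 0
    if 0 < off_run < min_down:
        violations += 1
    return violations

# New Member 1 from the example above: four short off-runs, hence four violations.
member = [[1, 1, 1, 0, 0, 0], [1, 1, 1, 0, 0, 0],
          [0, 0, 0, 1, 1, 1], [0, 0, 0, 1, 1, 1]]
print(sum(count_min_downtime_violations(row, 6) for row in member))   # prints: 4
```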

Another problem encountered was deciding the penalty assigned to each violation of a constraint. If a high penalty was assigned to a constraint, the constraint would probably be met; if a high penalty was assigned to system demand (making sure that each hour had enough generation to meet demand and reserves), the result was that the GA simply turned on all generating units for all hours. If a low penalty was assigned to a violation, the violation would be ignored. Many different penalty values were tried for each constraint, but none ever allowed the GA to find a good, or even an acceptable, unit commitment schedule.

VII. Conclusion for unit commitment scheduling by GA with penalty methods
A GA is a randomized optimization technique that closely emulates the natural process of evolution. The GA has been applied to a 9-generator, 24-hour UC schedule. Penalty methods were used to enforce the following constraints: minimum down-time, minimum up-time, system demand, excess generation and spinning reserve.

The two main problems encountered when using a GA with penalty methods are that the crossover operator can introduce new constraint violations that were not present in either parent, and that selecting penalty values for five interacting constraints is hopeless. These two problems resulted in each generation of population members having a fitness, or unit commitment schedule cost, similar to the preceding generation.

VIII. Economic dispatch
The linearizations and assumptions made in the ED problem present a classical example. The Lagrangian approach to solving this problem uses approximations to limit its complexity, and the loss of accuracy induced by these approximations is not desirable. A genetic algorithm (GA) can alleviate this undesirable trait by reproducing the optimization techniques found in nature. It does not require the strict continuity of classical search techniques; instead it allows nonlinearities and discontinuities to appear in the solution space. The application of this algorithm to the ED problem uses the payoff information of an objective function to determine optimality. As a result, any type of unit characteristic cost curve may be used to test for optimality [6].

Besides reliability, cost is one of the most important aspects of the utility industry. Since loss of accuracy increases cost, especially with respect to the ED problem, it is beneficial to find new techniques which solve the ED problem efficiently and accurately. At present, many search techniques find an accurate solution quickly; however, there are limitations on the types of problem these methods are able to solve, and approximations need to be made to achieve the speed and accuracy of classical techniques. Since genetic algorithms search the entire solution space, they do not have the same limitations. Three new techniques are considered as possible approaches to incorporating a GA into solving the ED problem. The results of each technique are compared against each other and, more importantly, against the classical Lagrangian search approach.

The standard economic dispatch problem can be described mathematically as follows:

\sum_{i} P_i = P_L + P_{TL}

where P_i = output generation of unit i, P_L = total current system load, and P_{TL} = total system transmission losses.

The objective function to be minimized is

OBJ = \sum_{i} F_i(P_i)

where OBJ = objective cost function and F_i = cost function of unit i.

The economic dispatch problem is further constrained by

Pmin_i \le P_i \le Pmax_i

where Pmin_i = minimum operating output of unit i and Pmax_i = maximum operating output of unit i.

The solution is restricted to the currently on-line generating units only [9,10].

IX. Example application of genetic algorithm for economic dispatch
A genetic-based program has been designed to solve economic dispatch problems. A three-unit test system was obtained and used in the development of the program. There are two unique approaches to presenting the variables that are directly used to calculate the solution to this problem. Basically, the two techniques differ in the number and type of variables represented in the chromosome string. In the first technique, the amount of generation provided by each of the three generators is represented in the chromosome string. The second approach represents the value of system lambda in the chromosome string. This directly leads to another significant difference between the two techniques: they have chromosome strings with a different number of bits per string. In the first method, the amount of generation from each independent generator was represented in the chromosome string with 11 bits. With three generators, the total length of the string was 33 bits. For the second case, the value of system lambda is represented with 11 bits. Since one third the number of variables are being represented, one third the length of the chromosome string is needed. The techniques are actually reciprocal processes, because both sets of values need to be determined to calculate the fitness of the solution. A thorough explanation of the first technique follows.
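A sketch of the decoding step for both representations follows (our illustration; the 11-bit field width is from the text, while the linear bit-to-value mapping is an assumption).

```python
# Illustrative decoding of binary chromosome fields (11 bits per variable, as in
# the text); the linear mapping onto [lo, hi] is an assumed choice.
def decode_bits(bits, lo, hi):
    """bits: list of 0/1, most significant bit first, mapped linearly onto [lo, hi]."""
    value = int(''.join(str(b) for b in bits), 2)
    return lo + (hi - lo) * value / (2 ** len(bits) - 1)

# First technique: a 33-bit string holds three 11-bit generation fields.
def decode_generators(chromosome, limits):
    return [decode_bits(chromosome[i * 11:(i + 1) * 11], lo, hi)
            for i, (lo, hi) in enumerate(limits)]

# Second technique: a single 11-bit string holds the system lambda.
def decode_lambda(chromosome, lambda_min, lambda_max):
    return decode_bits(chromosome, lambda_min, lambda_max)
```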

IX.1 Unit data
The example system contains a set of three generators in a simple six-bus power system [7]. For the values of unit generation P_i, quadratic input-output curve data were obtained. Incremental cost curve data were obtained by taking the derivative of the unit input-output equation. The constraints for these equations, as well as the unit operating ranges for this example, can be found in Table 1.

Table 1. Example unit data

Parameter                        Unit 1     Unit 2     Unit 3
Maximum                          600 MW     400 MW     200 MW
Minimum                          100 MW     100 MW     50 MW
Input-output curve: quadratic    0.001562   0.00194    0.00482
Input-output curve: linear       7.92       7.85       7.97
Input-output curve: constant     561        310        78
Incremental cost curve: linear   0.003124   0.00388    0.00964

Cost per unit of fuel is assumed to be the same for all units; therefore thermal heat content may be equated to actual generation cost for comparison purposes.

IX.2 Objective and fitness functions The economic dispatch problem can be solved by the genetic algorithm using the incremental cost curves. The curve solution uses the standard objective function and a penalty term for the conservation of energy constraint. Unit limits are automatically satisfied by a normalization process, which only allows solutions in the operating range of the units.

The constraint equation can be rewritten as an error term,

\varepsilon_{con} = \left| \sum_{i} P_i - P_L - P_{TL} \right|

which, when reduced to zero, satisfies the original constraint. The conditions for an optimum require

\frac{dF_1(P_1)}{dP_1} = \frac{dF_2(P_2)}{dP_2} = \cdots = \frac{dF_N(P_N)}{dP_N} = \lambda

This can be handled in the same fashion as the conservation of energy constraint. An error term is introduced and a measure of this error is calculated. The constraint is met if each term in the equality is equal to the average lambda value for the string. Consequently, the error term is

\varepsilon_{\lambda} = \sum_{i} \left| \lambda_{avg} - \lambda_i \right|

where \varepsilon_{\lambda} = error term, \lambda_i = incremental cost of unit i, and \lambda_{avg} = average incremental cost value of the string.

Since a chromosome string’s fitness will be compared with the fitness value of all other strings within the same population, an absolute measure of optimal&y is not required. Likewise, the constraint equation error can be calculated as a comparison within the same population. The result is a fitness function based on a percentage rating. The following form was used:

%lerr = strin&, - ?qrr max,, - mm,,

where %l,, = percentage of string’s constraint error, s&in&, = string’s error in meeting constraint equation, n&r, and max,,, = minimum and maximum constraint error within population.

The fitness function objective then becomes:

OBJ = MIN[%X, - %lJ



Since the fitness function is a raw-value measure of the string's fitness, we wish to give the best solutions the highest fitness values while minimizing the function above. The minimization is changed into a maximization by folding the fitness about the value one. If algorithm control, or emphasis on certain problem objectives or constraints, is desired, scaling factors are added. The final form of the fitness function becomes

Fit = sf_1 \left[ (1 - \%\lambda_{err})^{sp_1} \right] + sf_2 \left[ (1 - \%l_{err})^{sp_2} \right]

where sf = scaling factor for emphasis within the function and sp = scaling power factor for emphasis over the entire population.
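The population-relative fitness calculation can be sketched as follows (our illustration; the normalization is the percentage-error formula above, and the scaling factors sf and powers sp are left as parameters; values of 25, 15 and 3 are listed for the dispatch runs in Section XI).

```python
# Illustrative sketch of the first technique's fitness: each string's lambda error
# and load-constraint error are normalized within the population, then combined
# with scaling factors sf and scaling powers sp.
def percent_errors(raw_errors):
    lo, hi = min(raw_errors), max(raw_errors)
    span = (hi - lo) or 1.0                      # guard against a uniform population
    return [(e - lo) / span for e in raw_errors]

def population_fitness(lambda_errors, load_errors, sf1, sf2, sp1, sp2):
    pct_lambda = percent_errors(lambda_errors)   # %lambda_err per string
    pct_load = percent_errors(load_errors)       # %l_err per string
    return [sf1 * (1.0 - a) ** sp1 + sf2 * (1.0 - b) ** sp2
            for a, b in zip(pct_lambda, pct_load)]
```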

Since the principles of the second technique are equivalent to those of the first, only a brief explanation of the intricacies of the second technique follows.

The second technique borrows its representation from the Lagrangian search technique. The GA searches for a value of system lambda between calculated minimum and maximum values, rather than searching for the output of each generator. The value of system lambda is then substituted into

\frac{dF_1(P_1)}{dP_1} = \frac{dF_2(P_2)}{dP_2} = \cdots = \frac{dF_N(P_N)}{dP_N} = \lambda

After solving for the value of P_i for all three generators, the total power output can be calculated:

P_{TL} = \sum_{i} P_i

where P_i = output generation of unit i and P_{TL} = total power output.

In the first technique, two constraints simultaneously dictated the objective function of each string. The first constraint was that the total load should be 850 MW. The second constraint included either the average system lambda or the cost of the system: the greater the violation from the average system lambda, or the greater the cost of the system, the less fitness given to that string. The second method eliminates the need for the second set of constraints. Since the system lambda dictates the generation levels of the three generators, it will always equal the average. Also, since the optimal system lambda always finds the most cost-effective solution, the second version of the second constraint is also unnecessary. The only constraint that needs to be met in this case is that the total output equal the 850 MW load:

P_{TL} = LOAD

As in the first technique, the constraint can be calculated as a comparison within the same population. Therefore, the violation of the above constraint serves as the objective function:

OBJ = \left| \sum_{i} P_i - LOAD \right|

Since the fitness function is a raw-value measure of the string's fitness, we wish to give the best solutions the highest fitness values when minimizing the function above. The minimization is changed into a maximization by inverting the objective function. In order to distinguish clearly between the best and worst solutions, a penalty multiplier is included in the fitness function. The penalty is simply multiplied by the reciprocal of the objective function to give the final form of the fitness function:

Fit = \frac{1}{OBJ} \cdot (Penalty)
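A sketch of the second technique's evaluation of one string follows (our illustration, using the quadratic curves of Table 1; the small guard against division by zero is an added safeguard, not part of the paper).

```python
# Illustrative evaluation of a lambda-encoded string: invert the incremental cost
# curves (clamped to unit limits), then OBJ = |sum(P_i) - LOAD| and Fit = Penalty/OBJ.
def evaluate_lambda_string(lam, units, load=850.0, penalty=1000.0, eps=1e-9):
    outputs = [min(max((lam - u['b']) / (2.0 * u['a']), u['pmin']), u['pmax'])
               for u in units]
    obj = abs(sum(outputs) - load)       # violation of the load constraint
    fit = penalty / (obj + eps)          # inverted objective times the penalty multiplier
    return outputs, obj, fit
```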

X. Refined genetic algorithm
The genetic-based program chosen for analysis is RGA. Most of the refined genetic algorithm subroutines mimic the subroutines in David Goldberg's simple genetic algorithm (SGA) program [2]; however, the reproduction operators, crossover and mutation, differ between the programs. Three other differences between the programs also exist: the use of elitism, along with the changing probabilities of mutation and crossover occurrence.

The first difference occurs in the crossover technique. When crossover is deemed necessary, a binary string that is the same length as the population strings is created. This string cues the two parent strings as to whether the child string gets its bit values from the first or the second parent. For example, if the random string's bit value in the first string position is a '1', the first parent gives its bit value to the child; if it is a '0', the second parent donates its bit value in the first string position. This process is continued for every bit in the string. To create the second child, the complement of the pattern is followed. To illustrate this procedure:

Pattern string: 1 0 1 0 0 1
Parent 1:       0 0 0 0 1 0
Parent 2:       1 1 1 0 0 1
Child 1:        0 1 0 0 0 0
Child 2:        1 0 1 0 1 1

After crossover is completed, mutation is performed. The random number generator is called to determine if mutation is necessary for the first bit in the string. If it is necessary, the present bit value is complemented at that location. For example, if the first bit in the string was originally a '1', the bit would change to a '0' during mutation. The process continues for each remaining bit in the string and for each string in the population.
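The RGA crossover described above corresponds to what is usually called uniform crossover; the following sketch (our illustration) reproduces the worked example.

```python
import random

# Illustrative RGA pattern-string ('uniform') crossover: a random binary pattern
# decides, bit by bit, which parent donates to child 1; child 2 follows the
# complementary pattern.
def pattern_crossover(parent1, parent2, pattern=None):
    if pattern is None:
        pattern = [random.randint(0, 1) for _ in parent1]
    child1 = [p1 if g else p2 for g, p1, p2 in zip(pattern, parent1, parent2)]
    child2 = [p2 if g else p1 for g, p1, p2 in zip(pattern, parent1, parent2)]
    return child1, child2

# Reproduces the worked example above.
print(pattern_crossover([0, 0, 0, 0, 1, 0], [1, 1, 1, 0, 0, 1],
                        pattern=[1, 0, 1, 0, 0, 1]))
# -> ([0, 1, 0, 0, 0, 0], [1, 0, 1, 0, 1, 1])
```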

RGA uses elitism. Elitism is a technique used to promote early convergence by ensuring the survival of the most fit strings in each population. Elitism compares the results of the most recent population with the best previous population. It then combines the two populations and ranks the best results from both in order of decreasing fitness value. If a duplication is found, elitism eliminates it. This combination of the most fit strings becomes the 'best previous' population. The process continues for each generation so that accuracy and convergence capability are maintained in the algorithm [2].

The final difference between RGA and SGA is the changing probabilities of mutation and crossover occurrence. In SGA, a probability of mutation and crossover is entered at the beginning of the program, and this percentage remains the same throughout the entire run. RGA, on the other hand, changes these probabilities each generation. Initially a probability for each is entered; for every generation thereafter, the probability of crossover is exponentially decreased while the probability of mutation is exponentially increased. Limits are set so that the probabilities do not exceed specified bounds.
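The decay and growth rates and their limits are not given in the paper; the schedule below is therefore only an assumed illustration of the idea.

```python
# Assumed illustration of RGA's changing operator probabilities: crossover decays
# and mutation grows exponentially each generation, both clipped to limits.
def updated_probabilities(p_cross, p_mut, decay=0.98, growth=1.02,
                          p_cross_min=0.40, p_mut_max=0.10):
    p_cross = max(p_cross * decay, p_cross_min)
    p_mut = min(p_mut * growth, p_mut_max)
    return p_cross, p_mut

# Example: starting from the initial values of 0.75 and 0.01 listed in Section XI.
pc, pm = 0.75, 0.01
for generation in range(100):
    pc, pm = updated_probabilities(pc, pm)
```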

Both solution techniques discussed previously use the RGA program. Several modifications were needed to compensate for the different string lengths and objective functions being used.

XI. Solutions and performance of genetic algorithm for economic dispatch
For each data set case, five separate runs were made to test the efficiency and accuracy of the program using different initial random seed numbers. The cases considered were:

(a) generator approximation for RGA;
(b) general lambda approximation for RGA;
(c) modified lambda approximation for RGA.

The initial parameters for all cases were:

100  - generations
100  - strings per generation
11   - number of bits for string representation
0.75 - percentage occurrence of crossover
0.01 - percentage occurrence of mutation
25   - load constraint scaling factor
15   - cost scaling factor
3    - scaling power factor

For each case, a table was created showing the absolute value of the difference between the best string's solution in the final generation and the optimal solution, for all five runs. For example, if the total generation from the system was 849.2 MW and the required load was 850 MW, the entry in the table would be |850 - 849.2| = 0.8 MW. The optimal solution, obtained from a verification program using classical techniques, is shown in Table 2. The requirement for a successful solution is to have each generator produce power within 1 MW of the optimal solution and to have the total load met within 1 MW as well. By showing the actual generation difference from the required load, it can easily be determined whether a successful solution has been found.

Table 2. Classical solution

Load (MW)   Unit 1 (MW)   Unit 2 (MW)   Unit 3 (MW)   Lambda
850         393.2         334.6         122.2         9.148248

XI.1 Comparison requirements
Besides accuracy, speed is very important. The quicker an optimal solution is found, the less money is expended on inaccurate solutions. At present the Lagrangian search technique takes between 10 and 20 seconds to solve the ED problem. However, this technique can sacrifice some accuracy because it can only solve for functions that are monotonically increasing. Since nature does not always react in such a structured way, approximations need to be made. It would be beneficial to find a technique that solves the problem just as quickly and accurately as the Lagrangian method while using fewer approximations. This is a difficult task. If accuracy is greater than the results found using the classical model, an increase in time might be justifiable. Furthermore, with the use of multiprocessors, the time it takes a genetic-based program to solve the ED problem can be drastically reduced and can become comparable to the time it takes to solve the problem using classical approaches.

XI.2 Generator approximation for RGA
The solution results using the RGA program and generator approximation are shown in Table 3.

Table 3. Generator approximation for RGA

Unit 1   Unit 2   Unit 3   Total   Average lambda
7.66     1.64     3.35     2.67    0.0048
2.35     1.29     0.39     0.68    0.0006
3.27     0.47     2.83     0.04    0.0061
1.13     2.75     2.98     1.36    0.0071
0.4      2.07     1.15     0.47    0.0014

This technique does not appear to be a good substitute for the Lagrangian method. Although it does not have the same nonlinearity and discontinuity limitations, it is still unable to find an accurate solution consistently. Besides this, it took nearly a minute and a half to converge on an ideal solution, which does not compare well with the 10-20 second convergence rate of the Lagrangian technique.

XI.3 General lambda approximation for RGA
The solution results using the RGA program and general lambda approximation are shown in Table 4.

Table 4. General lambda approximation for RGA

Unit 1   Unit 2   Unit 3   Total   Lambda
0.088    0.099    0.065    0.252   0.00023
0.09     0.146    0.011    0.247   0.00015
0.088    0.099    0.065    0.252   0.00023
0.09     0.146    0.011    0.247   0.00015
0.088    0.099    0.065    0.252   0.00023

Each time the program was run, a successful yet unique solution was found. This method seems to find a successful solution 100% of the time, a great improvement over the success rate of the first RGA program.

Along with the accuracy advantage, several other advantages of the second technique exist. There are fewer constraints in the new technique: two objective function constraints are used in the first program, whereas only one is needed in the second. The fitness function is also much simpler in the second technique. Since the second method only needs the amount of deviation from the load constraint for its objective function, it can easily be minimized by taking the reciprocal of the error and calling that the string's fitness; the original program's fitness value is far more difficult to calculate. These advantages make the new method much easier to understand and duplicate. Another advantage is that the string length is smaller, so the random number generator is called far fewer times in the new method. This leads directly to the greatest advantage of the new method: the reduction in run time. This method takes about one third of the time of the first technique.

It still needs about 30 seconds to calculate a solution. Since the entire solution space can be searched and accuracy should improve, the added time seems a feasible trade-off for solving this problem compared with the classical approach.

XI.4 Modified lambda approximation for RGA
One other approach, which combines both Lagrangian search and lambda approximation in the GA, was attempted. This method follows the same procedure as the general lambda approximation method, with one difference. In the general method, the values of the minimum and maximum system lambda are held constant; the range of allowable lambda values is quite large, so it takes a considerable amount of time for the strings to settle on the optimal system lambda value. In the Lagrangian method, the value of lambda would either increase or decrease depending on the violation of the load constraint. The modified method borrows this idea and changes the values of the minimum and maximum system lambda depending on the constraint violation. The solution results using the RGA program and the modified lambda approximation are shown in Table 5.

Table 5. Modified lambda approximation for RGA

Unit 1   Unit 2   Unit 3   Total   Lambda
0.03     0.004    0.026    0       0.00001
0.032    0.002    0.026    0.004   0.00001
0.028    0.006    0.027    0.005   0.00002
0.139    0.005    0.027    0.002   0.00002
0.139    0.005    0.027    0.004   0.00002

This approach also finds the optimal solution 100% of the time, and it does so within 2-10 generations. Since the optimal solution is found in so few generations, the actual time it takes to converge on the optimal solution is 12 seconds on an IBM Personal Computer (Model 80, 386). This is equivalent to the response time of the Lagrangian method. One advantage, however, is that the program continues to search the space to make sure that it has found the true minimum cost solution, not a local minimum. This modified method appears to be a good substitute for the general Lagrangian search method.
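The paper does not give the exact update rule for the lambda window; the sketch below is an assumed illustration of the idea of shrinking and shifting [lambda_min, lambda_max] according to the load-constraint violation.

```python
# Assumed sketch of the modified lambda approximation: shift and shrink the
# [lambda_min, lambda_max] search window each generation according to the sign
# of the best string's load mismatch, mimicking the Lagrangian update direction.
def update_lambda_bounds(lambda_min, lambda_max, best_lambda, load_mismatch,
                         shrink=0.5):
    half_width = shrink * (lambda_max - lambda_min) / 2.0
    if load_mismatch < 0:      # too little generation: move the window upward
        lambda_min = best_lambda
        lambda_max = best_lambda + 2.0 * half_width
    else:                      # too much generation: move the window downward
        lambda_max = best_lambda
        lambda_min = best_lambda - 2.0 * half_width
    return lambda_min, lambda_max
```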

XII. Conclusions of genetic algorithm for economic dispatch
Simply comparing the two RGA techniques, the lambda approximation technique is more accurate and quicker than the generator approximation technique for solving the ED problem.

Since the Lagrangian search method is the classical technique considered, the comparison of the remaining genetic-based programs should be made against it. As far as accuracy is concerned, all three techniques are comparable. Since the general lambda approximation technique can search the entire solution space without making approximations, its solution can be considered more accurate than the Lagrangian approach in highly nonlinear power systems. Because the minimum and maximum possible system lambda values collapse onto the lambda value, the modified lambda approximation approach, like the strict Lagrangian search, might miss the minimum optimal solution because the search space has been too greatly restricted. One approach might be to allow the system lambda values to collapse into a still relatively large range (limited, for example, by a ±50 MW constraint from the load). This range would still be smaller than the range used in the above method. This modification should allow for faster convergence while still producing accurate solutions.

If time is the greatest concern, the modified lambda approximation technique would be the best option. If the solution space is highly nonlinear and incongruent, the general lambda technique can be considered the best option. As previously discussed, a modified lambda approximation program, with a large range of values for system lambda, might be the best solution overall. Time will increase, but so will accuracy.

Regardless, genetic algorithms appear to be a good substitute for the Lagrangian search method because of their speed, accuracy, and ability to search solution spaces that are highly nonlinear and discontinuous, like those observed in nature.

XIII. References

1  Walters, D C and Sheblé, G B 'Genetic algorithm solution of economic dispatch with valve point loading' IEEE Trans. Power Syst. (1993) pp 1325-1332
2  Goldberg, D E Genetic algorithms in search, optimization, and machine learning Addison-Wesley Publishing Company, Reading, Massachusetts (1989)
3  Soucek, B Dynamic, genetic, and chaotic programming John Wiley and Sons, New York (1992)
4  Koza, J Genetic programming MIT Press, Cambridge, Massachusetts (1992)
5  Zhuang, F and Galiana, F D 'Towards a more rigorous and practical unit commitment by Lagrangian relaxation' IEEE Trans. Power Syst. Vol PWRS-3 No 2 (1988) pp 763-770
6  Fahd, G and Sheblé, G B 'Unit commitment literature synopsis' IEEE Trans. Power Syst. (1994) pp 128-135
7  Wood, A and Wollenberg, B Power generation, operation, and control John Wiley and Sons, New York (1984)
8  Sheblé, G B 'Unit commitment for operations' PhD Dissertation, Virginia Polytechnic Institute and State University (1985)
9  Luenberger, D G Introduction to linear and nonlinear programming Addison-Wesley Publishing Company, Reading, Massachusetts (1984)
10 Gross, C A Power system analysis John Wiley & Sons, New York (1986)
11 Noyola, A H, Grady, W M and Viviani, G L 'An optimized procedure for determining incremental heat rate characteristics' IEEE Trans. Power Syst. Vol 5 (1990) pp 376-383