


Computers Chem. Engng Vol. 22, No. 7–8, pp. 1007–1015, 1998
© 1998 Elsevier Science Ltd. All rights reserved. Printed in Great Britain
PII: S0098-1354(97)00266-4 0098–1354/98 $19.00+0.00

Optimization problem of complex system under uncertainty

G.M. Ostrovsky,* Yu.M. Volin and D.V. Golovashkin

Karpov Institute of Physical Chemistry, Vorontsovo Pole 10, Moscow 103064, Russia

(Received 27 June 1996; received in revised form 8 October 1997)

Abstract

Design of chemical processes (CP) is usually performed under some uncertainty in the original physical and chemical information. In connection with this, the important problem arises of evaluating the flexibility of a CP (the ability of the CP to preserve its capacity for work). Halemane and Grossmann (1983) (Optimal process design under uncertainty. A.I.Ch.E. J., 29(3), 425–433) introduced the flexibility (feasibility) function, which permits evaluation of CP flexibility. But direct calculation of the flexibility function reduces to solving very hard nondifferentiable and multiextremal optimization problems. In connection with this, we give an effective method for calculating the flexibility function. It reduces to an iterative procedure, at each iteration of which ordinary nonlinear programming problems are solved. On the basis of this method we consider two algorithms for solving the two-stage optimization problem. © 1998 Elsevier Science Ltd. All rights reserved

Keywords: optimization; chemical processes; flexibility of chemical processes

1. Introduction

Quite commonly there is some uncertainty in the original physical, chemical, technological and economical information used in the design of chemical processes. On the other hand, the design of a chemical process must guarantee preservation of the capacity of the chemical process during its operation (Grossmann and Sargent, 1978; Halemane and Grossmann, 1983; Pistikopoulos and Grossmann, 1989; Varvarezos et al., 1995). We consider a new approach to the problem. Usually, the optimization problem of a complex system design is of the following form:

min_{d, z∈Z} f(d, z, θ),   (1)

ψ_j(d, z, θ) ≤ 0,  j = 1, …, m,   (2)

where d is a vector of design variables, z (z ∈ Z) is a vector of control variables, Z is an admissible region of the variables z, and θ is a vector of uncertain parameters. The latter vector includes coefficients used in mathematical models and parameters which characterise external streams.

*Corresponding author.

Let T be the region of the uncertain parameters: T = {θ: θ^L ≤ θ ≤ θ^U}. Constraints (2) are design specifications or physical operating limits. A violation of the constraints during process operation is inadmissible. Therefore, the constraints characterise the capacity for work of a chemical process.

We cannot solve problem (1), (2) directly, since we do not know the exact values of the parameters θ. In connection with this, one usually assigns some approximate (nominal) values to the uncertain parameters and solves the problem. The drawback of this approach is that it does not ensure satisfaction of all the constraints under possible variations of internal and external factors. This requires taking the uncertainty of the parameters into account in the optimization problem formulation. Halemane and Grossmann (1983) gave the mathematical formulation of the optimization problem under uncertainty. It is based on the conception of the following two stages in the 'life' of a CP: a design stage and an operation stage. Here the following assumptions are used:

1. At the operation stage the control variables can be used for satisfaction of constraints (2).

2. Constraints (2) must be satisfied for any values of the parameters θ.

The mathematical formulation of the optimization problem under uncertainty (Two-Stage Optimization Problem, TSOP) has the following form (Halemane and Grossmann, 1983):

min_d E_θ{ f*(d, θ) },   (3)

F_1(d) ≤ 0,   (4)

where E_θ{·} is the mathematical expectation with respect to the parameters θ and f*(d, θ) is obtained by solving the problem

f*(d, θ) = min_{z∈Z} f(d, z, θ),

ψ_j(d, z, θ) ≤ 0,  j = 1, …, m.

F_1(d) is the feasibility function:

F_1(d) = max_{θ∈T} min_{z∈Z} max_{j∈J} ψ_j(d, z, θ),  J = {1, …, m}.   (5)

Inequality (4) is the necessary and sufficient condition for the feasibility of design d. By using the discretization technique (Halemane and Grossmann, 1983) one can transform problem (3), (4) to the form

f_1 = min_{d, z^i} Σ_{i∈I_1} w_i f(d, z^i, θ^i),   (6)

ψ_j(d, z^i, θ^i) ≤ 0,  i ∈ I_1,   (7)

F_1(d) ≤ 0,   (8)

where I_1 is the set of indices of the approximation points θ^i, S_1 = {θ^i: i ∈ I_1} is the set of approximation points, z^i is the vector of control variables corresponding to the point θ^i, and w_i are weight coefficients. The feasibility function can be represented in the following form:

F_1(d) = max_{θ∈T} χ(d, θ),

where

χ(d, θ) = min_{z∈Z} max_{j∈J} ψ_j(d, z, θ).   (9)

Condition (8) is equivalent to the inequality

χ(d, θ) ≤ 0  ∀θ ∈ T.   (10)
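To make definitions (5), (9) and (10) concrete, the following sketch (ours, not from the paper) evaluates χ(d, θ) = min_{z∈Z} max_j ψ_j(d, z, θ) and F_1(d) = max_{θ∈T} χ(d, θ) by brute force on grids, for a hypothetical toy design with scalar control z ∈ Z = [0, 1], scalar parameter θ ∈ T = [0, 2] and two invented constraint functions; grids stand in for the inner and outer optimizers, so everything here is illustrative only.

```python
def chi(d, theta, z_grid, psis):
    # chi(d, theta) = min over z in Z of max over j of psi_j(d, z, theta):
    # the best the control z can do against the worst constraint.
    return min(max(psi(d, z, theta) for psi in psis) for z in z_grid)

def feasibility(d, theta_grid, z_grid, psis):
    # F_1(d) = max over theta in T of chi(d, theta); F_1(d) <= 0 is
    # condition (10): the design d remains operable for every theta in T.
    return max(chi(d, theta, z_grid, psis) for theta in theta_grid)

# Hypothetical constraints: psi_1 = theta - z - d, psi_2 = z - 1.
psis = [lambda d, z, th: th - z - d, lambda d, z, th: z - 1.0]
z_grid = [i / 200 for i in range(201)]      # Z = [0, 1]
theta_grid = [i / 100 for i in range(201)]  # T = [0, 2]

F_flexible = feasibility(1.5, theta_grid, z_grid, psis)    # -0.25: flexible
F_infeasible = feasibility(0.5, theta_grid, z_grid, psis)  # 0.5: not flexible
```

For d = 1.5 the worst parameter value θ = 2 can still be countered by z = 0.75, all constraints staying at −0.25, so F_1(d) < 0; for d = 0.5 no admissible z saves θ = 2 and F_1(d) = 0.5 > 0.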

First, we consider the method of calculating the feasibility function.

2. Calculation of feasibility function

Halemane and Grossmann (1983) showed that, generally speaking, χ(d, θ) is a nondifferentiable and multiextremal function with respect to the variables θ. It is known that the optimization of such functions is a very hard problem. Let the functions ψ_j(d, z, θ) satisfy the following condition:

Condition 1. The functions ψ_j(d, z, θ) (j = 1, …, m) are jointly convex in z and θ.

For this case Halemane and Grossmann (1983) proved that the global solution of the problem must lie at one of the vertices of the parameter set T. From here, F_1(d) can be determined by calculating χ(d, θ) at every vertex and selecting the largest value. This approach has a drawback: its computational effort is in general proportional to the number of vertices, 2^r. In connection with this, Grossmann and Floudas (1987) proposed the active constraint set (ACS) approach. This approach uses the fact that under certain conditions the number of active constraints in problem (9) is equal to q+1, where q is the number of control variables. This permits reducing problem (5) to the identification of all possible active constraint sets and the solution of the following nonlinear programming problem for each active set AS(k) (k = 1, …, n_AS):

max_{θ∈T, z∈Z} u,

ψ_j(d, z, θ) − u = 0,  j ∈ AS(k),   (11)

where n_AS is the number of potential active sets. Let the functions ψ_j(d, z, θ) satisfy the following condition:

Condition 2. The functions ψ_j(d, z, θ) are quasi-concave in z and θ and strictly quasi-convex in z for fixed θ.

In this case problem (11) has a unique local maximum, which is the global solution of the problem (Grossmann and Floudas, 1987). The approach can require a great deal of computational effort if the number of active sets is large. Here we consider new approaches to the calculation of F_1(d) and to solving the two-stage optimization problem. The following statement will be necessary later.
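Before turning to that statement, the vertex result under Condition 1 can be sketched in a few lines (our code; hypothetical jointly convex constraints, with a grid standing in for the inner minimization over Z): F_1(d) is evaluated by checking only the 2^r corners of the box T = {θ: θ^L ≤ θ ≤ θ^U}.

```python
from itertools import product

def feasibility_by_vertices(d, theta_lo, theta_hi, z_grid, psis):
    # Under Condition 1 the maximizing theta lies at a vertex of T, so
    # F_1(d) = max over the 2^r corners of chi(d, theta).
    best = float("-inf")
    for corner in product(*zip(theta_lo, theta_hi)):
        chi = min(max(psi(d, z, corner) for psi in psis) for z in z_grid)
        best = max(best, chi)
    return best

# Hypothetical linear (hence jointly convex) constraints, 2-dimensional theta:
psis = [lambda d, z, th: th[0] + th[1] - z - d, lambda d, z, th: z - 1.0]
z_grid = [i / 200 for i in range(201)]  # Z = [0, 1]
F1 = feasibility_by_vertices(1.5, (0.0, 0.0), (1.0, 1.0), z_grid, psis)
```

Here the worst corner is θ = (1, 1), giving F_1 = −0.25. The 2^r growth is exactly the drawback noted above: for r = 20 parameters there are already about a million corners, each requiring a minimization over Z.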

Statement 1. Consider the problem:

f* = max_x f(x),

max_{y∈Y} g_i(x, y) ≥ 0,  i = 1, …, m.   (12)

Let x*, y^{i*} be the solution of the problem (y^{i*} = arg max_{y∈Y} g_i(x*, y)). Problem (12) is equivalent to the following one:

f = max_{y^1,…,y^m} max_x f(x),

g_i(x, y^i) ≥ 0,  i = 1, …, m.   (13)

Now we consider the auxiliary problem

f̄(y^1, …, y^m) = max_x f(x),

g_i(x, y^i) ≥ 0,  i = 1, …, m,   (14)

where the y^i (i = 1, …, m) are parameters. Since

max_{y∈Y} g_i(x, y) ≥ g_i(x, ȳ)  ∀ȳ ∈ Y,

the feasible region of problem (14) is contained in that of problem (12), and the following inequality holds:

f* ≥ f̄(y^1, …, y^m)  ∀y^i ∈ Y.   (15)

It is clear that

f̄(y^{1*}, …, y^{m*}) = f*.   (16)

It follows from Eqs. (15) and (16) that

max_{y^1,…,y^m} f̄(y^1, …, y^m) = f*.   (17)

From this follows the correctness of Statement 1. Problem (5) can be represented in the form

F_1 = max_{θ∈T, k} k,

min_{z∈Z} max_{j∈J} ψ_j(d, z, θ) ≥ k.

This problem is equivalent to the following one:

F_1 = max_{θ∈T, k} k,

max_{j∈J} ψ_j(d, z, θ) ≥ k  ∀z ∈ Z.   (18)

This is a problem with an infinite number of constraints. It will be solved as follows. We introduce a finite set S_3^p of z-critical points z^i:

S_3^p = {z^i; i = 1, …, p},   (19)

where p is the number of points z^i in the set S_3^p.

Consider the problem

F_1^p = max_{θ∈T, k} k,

max_{j∈J} ψ_j(d, z^i, θ) ≥ k  ∀z^i ∈ S_3^p.   (20)

We can think of the index j as a discrete variable. Then, by using the above-mentioned statement, we can reduce problem (20) to the problem

F_1^p = k_p = max_{j_1∈J,…,j_p∈J} max_{θ∈T, k} k,

ψ_{j_1}(d, z^1, θ) ≥ k,
…
ψ_{j_p}(d, z^p, θ) ≥ k.   (21)

Since the region of admissible values of the variables θ, k in problem (20) is larger than in problem (18), the following relation holds:

F_1^p ≥ F_1,   (22)

i.e. F_1^p is an upper estimate of the value F_1. Let k_p, θ^p be the solution of problem (21). Problem (21) is a mixed-integer nonlinear programming problem in which the variables j_1, …, j_p are discrete and the variables k, θ are continuous. Two approaches will be considered for the calculation of F_1^p. In the first approach we use a two-level optimization method. Optimization with respect to the discrete variables is performed on the upper level, and optimization with respect to the continuous variables is performed on the lower one. If we use the enumeration method on the upper level, then solving problem (21) reduces to solving m^p problems with continuous variables k, θ:

max_{θ, k} k,

ψ_{j_1}(d, z^1, θ) ≥ k,
…
ψ_{j_p}(d, z^p, θ) ≥ k.   (23)

If the numbers m and p are large, then this procedure can become very laborious. In this case it is reasonable to use on the upper level the one-at-a-time, or sectioning, method (S method) (Beveridge and Shechter, 1970). In this method, keeping p−1 of the p variables j_1, …, j_p fixed at some level, the remaining one is varied from 1 to m. An operation consisting of solving the following p optimization problems

max_{j_k∈J} max_{θ, k} k,

ψ_{j_1}(d, z^1, θ) ≥ k,
…
ψ_{j_p}(d, z^p, θ) ≥ k,   (24)

in turn for k = 1, …, p will be called a cycle of the S method. In problem (24) the variables j_1, …, j_{k−1}, j_{k+1}, …, j_p are fixed and the variable j_k is varied. We shall designate by F_1^{p,(q)}, k_p^{(q)} the values of F_1^p and k after the performance of q cycles (k_p^{(q)} = F_1^{p,(q)}). By using the enumeration method we reduce solving problem (24) to solving m problems (23) for j_k = 1, 2, …, m. Hence it is necessary to solve mp problems (23) to perform one cycle of the S method. It is clear that

F_1^{p,(q)} ≤ F_1^p.   (25)

If the S method finds the global maximum, the following equality will hold:

lim_{q→∞} F_1^{p,(q)} = F_1^p.
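One cycle of the S method can be sketched as follows (our toy code, not the paper's: a θ-grid stands in for the NLP solver of problem (23), and the constraint functions are invented for illustration):

```python
def solve_23(d, z_pts, psis, theta_grid, js):
    # Problem (23) on a grid: the constraints psi_{j_i}(d, z^i, theta) >= k
    # force k = min_i psi_{j_i}(d, z^i, theta); maximize that over theta.
    return max(min(psis[js[i]](d, z, th) for i, z in enumerate(z_pts))
               for th in theta_grid)

def s_method_cycle(d, z_pts, psis, theta_grid, js):
    # Vary each j_k in turn over 1..m with the others fixed, keeping the
    # best value found: one cycle costs m*p solutions of problem (23).
    best = solve_23(d, z_pts, psis, theta_grid, js)
    for k in range(len(js)):
        for jk in range(len(psis)):
            trial = js[:k] + [jk] + js[k + 1:]
            val = solve_23(d, z_pts, psis, theta_grid, trial)
            if val > best:
                best, js = val, trial
    return best, js

psis = [lambda d, z, th: th - z - d, lambda d, z, th: z - 1.0]  # hypothetical
theta_grid = [i / 100 for i in range(201)]                      # T = [0, 2]
k_cycle, js = s_method_cycle(1.0, [0.0, 1.0], psis, theta_grid, [0, 0])
```

For this toy data one cycle already reaches F_1^{p,(1)} = F_1^p = 0; in general several cycles are needed and the limit may be only a local maximum.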

Now we consider the second (one-level) approach to solving problem (21). For solving problem (21) we will use the group sectioning method (GS method), which is an extension of the one-at-a-time, or sectioning, method (Beveridge and Shechter, 1970). Here the search variables are divided into two groups. The first group contains the discrete variables j_1, …, j_p and the variable k, and the second group contains the variables θ and k. It should be noted that the variable k belongs to both groups. The GS method reduces to an iterative (cyclic) procedure, at each iteration of which two operations are performed. First, the variables j_1, …, j_p are fixed at some level and maximization with respect to the variables θ and k is performed. Then maximization of the objective function with respect to the discrete variables j_1, …, j_p and the variable k is performed, while the variables θ take the values obtained as a result of the first operation. Thus, the first operation at the q-th iteration is of the form

k̄_p^{(q+1)} = max_{θ, k} k,

ψ_{j_1^{(q)}}(d, z^1, θ) ≥ k,
…
ψ_{j_p^{(q)}}(d, z^p, θ) ≥ k,   (26)

where the variables j_1^{(q)}, …, j_p^{(q)} were found at the (q−1)-th iteration. Let θ^{(q+1)} be the solution of the problem. At the second step we solve the problem

max_{j_1∈J,…,j_p∈J, k} k,

ψ_{j_1}(d, z^1, θ^{(q+1)}) ≥ k,
…
ψ_{j_p}(d, z^p, θ^{(q+1)}) ≥ k.   (27)

It should be noted that each constraint ψ_{j_i}(d, z^i, θ^{(q+1)}) ≥ k depends only on the single variable j_i. Therefore, solving problem (27) reduces to the following procedure. First we find the optimal values j_1^{(q+1)}, …, j_p^{(q+1)} of the discrete variables j_1, …, j_p by solving the following p problems:

ψ̄_i = max_{j∈J} ψ_j(d, z^i, θ^{(q+1)})   (28)

for i = 1, …, p. Then we find the optimal value k_p^{(q+1)} of the variable k:

k_p^{(q+1)} = min_i ψ̄_i.   (29)

We use the vector θ^{(q)} obtained at the (q−1)-th iteration and the value k_p^{(q+1)} (q > 1) as the initial values for the vector θ and the variable k in problem (26). At the first iteration an initial value of the vector θ is given by the user, and an initial value of k is calculated using formula (29). It is easy to see that such a choice of the initial values ensures satisfaction of the constraints at the initial point of problem (26) at every iteration (q ≥ 1). At the (q+1)-th iteration we solve problem (26) again, using the values of the discrete variables j_1^{(q+1)}, …, j_p^{(q+1)} obtained at the q-th iteration, and so on. It should be noted that inequalities (2) are usually constraints on output variables of the CP. Therefore, one calculation of a mathematical model of the CP gives the values of all the left-hand-side functions of constraints (2). Consequently, after solving problem (26), all the values ψ_j(d, z^i, θ^{(q+1)}) (i = 1, …, p; j = 1, …, m) at the optimal point θ^{(q+1)} are known. Hence, problem (28) reduces to the simple problem of determining the maximal number in a known set of m numbers. Thus, with the help of the GS method, we have reduced the complex mixed-integer optimization problem to an iterative procedure of the type

j^{(q+1)} = f(j^{(q)}),  q = 1, 2, …,   (30)

at each iteration of which we have to solve the usual nonlinear programming problem (26) and p simple problems of determining a maximal number in a set of m numbers. The iterative procedure (30) comes to an end when the conditions

j_i^{(q+1)} = j_i^{(q)},  i = 1, …, p,

hold. We shall show that the iterative procedure always converges. Indeed, let us consider the sequence of numbers k̄_p^{(q)} (q = 1, 2, …) (see (26)). Comparing problems (21) and (26), we easily obtain

k_p ≥ k̄_p^{(q)}  ∀q.

Since at the initial point of problem (26) all the constraints are satisfied, we have

k̄_p^{(q+1)} ≥ k_p^{(q)}.

On the other hand, from operations (28), (29) follows the inequality

k_p^{(q)} ≥ k̄_p^{(q)}.

Hence we have

k̄_p^{(q+1)} ≥ k̄_p^{(q)}  (q = 1, 2, …).

Thus the monotone sequence of numbers k̄_p^{(q)} (q = 1, 2, …) is bounded above. Such a sequence always has a limit. Consequently, the iterative procedure always converges.

Each of the approaches has its advantages. The second method obtains, as a rule, a quick solution; however, it often determines a local maximum of problem (21). The first approach requires essentially larger computational expenditures, but it is more reliable for determining the global maximum. From here we suggest an approach which uses elements of both approaches. Each iteration of the algorithm consists of two steps. The first step corresponds to the execution of one cycle of the first approach, while the second corresponds to the execution of the complete procedure of the second approach.

Let us consider the problem

l(d, θ) = min_{z∈Z} max_{j∈J} ψ_j(d, z, θ).


The value l(d, θ) can be determined by solving the problem

l(d, θ) = min_{z∈Z, u} u,

ψ_j(d, z, θ) ≤ u,  j = 1, …, m.   (31)

The following relation holds:

l_p = l(d, θ^p) ≤ max_{θ∈T} l(d, θ) = F_1.

Consequently, we have

l_p ≤ F_1 ≤ k_p.   (32)

It is clear that if the inequality

k_p − l_p ≤ ε   (33)

holds, where ε is a sufficiently small value, then the value

F̃_1 = (k_p + l_p)/2   (34)

is a good approximation of the value F_1. The more densely the set S_3^{(p)} covers the region Z, the closer the solution of problem (21) is to the solution of problem (18) (or (5)). From here we suggest that the algorithm for solving problem (18) should be based on a permanent extension of the set of z-critical points and on the use of the solution of problem (21).

Algorithm 1.
Step 1. Set p = 1. Choose the set S_3^r of z-critical points, where r is the number of critical points z in the set S_3^r.
Step 2. Solve problem (21).
Step 3. Solve problem (31) for θ = θ^p. Let z^p be the solution of the problem.
Step 4. Check condition (33). If the condition is satisfied, the solution of problem (5) is obtained; otherwise go to Step 5.
Step 5. Construct the set

S_3^{r+p} = S_3^{r+p−1} ∪ {z^p}.   (35)

Set p = p+1 and go to Step 2.
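Algorithm 1 can be sketched end to end on a hypothetical toy problem (our code; grids replace the NLP solvers for problems (21) and (31), the constraint functions are invented, and ε = 10⁻³):

```python
def algorithm1(d, psis, theta_grid, z_grid, z0, eps=1e-3, max_iter=50):
    S3 = [z0]                          # Step 1: initial z-critical set
    for _ in range(max_iter):
        # Step 2, problem (21) by enumeration:
        # k_p = max over theta of min over critical z of max_j psi_j.
        theta_p = max(theta_grid,
                      key=lambda th: min(max(p(d, z, th) for p in psis)
                                         for z in S3))
        k_p = min(max(p(d, z, theta_p) for p in psis) for z in S3)
        # Step 3, problem (31) at theta_p: full minimization over Z.
        z_p = min(z_grid, key=lambda z: max(p(d, z, theta_p) for p in psis))
        l_p = max(p(d, z_p, theta_p) for p in psis)
        if k_p - l_p <= eps:           # Step 4, condition (33)
            break
        S3.append(z_p)                 # Step 5, extension (35)
    return 0.5 * (k_p + l_p)           # estimate (34) of F_1(d)

psis = [lambda d, z, th: th - z - d, lambda d, z, th: z - 1.0]  # hypothetical
theta_grid = [i / 100 for i in range(201)]  # T = [0, 2]
z_grid = [i / 200 for i in range(201)]      # Z = [0, 1]
F1_est = algorithm1(1.5, psis, theta_grid, z_grid, 0.0)
```

Each pass adds the minimizer of problem (31) to the critical set, which drives the upper estimate k_p and the lower estimate l_p together; for d = 1.5 the loop stops with F̃_1 = −0.25, and for d = 0.5 it stops after two passes with F̃_1 = 0.5.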

Now we shall show that Algorithm 1 always converges. Indeed, set ε = 0 in Eq. (33). In this case, generally speaking, the number of iterations will be infinite. Let S_3* be the limiting set of z-critical points:

S_3* = ∪_{p=p_0}^∞ S_3^p.   (36)

Thus, for sufficiently large p (p ≥ p̄), the sets S_3^p and S_3* will be sufficiently close, i.e. the following condition will hold:

∀z^i ∈ S_3*  ∃z^j ∈ S_3^p:  ρ(z^i, z^j) ≤ ε,   (37)

where ε is a sufficiently small value and ρ(z^i, z^j) is the distance between the points z^i and z^j. It is clear that ρ(z^i, z^j) → 0 as p → ∞. Let the condition p ≥ p̄ be satisfied for some p. Consider the following three problems:

g_1 = min_{z∈S_3^p} max_{j∈J} ψ_j(d, z, θ^p),   (38)

g_2 = min_{z∈S_3*} max_{j∈J} ψ_j(d, z, θ^p),   (39)

l_p = min_{z∈Z} max_{j∈J} ψ_j(d, z, θ^p).   (40)

Since at least one constraint must be active in problem (20), we have

g_1 = k_p.   (41)

Let z^p be the solution of problem (40). In accordance with the way of construction of the set S_3* (see Eqs. (35) and (36)), the point z^p belongs to the set S_3*. Since S_3* ⊂ Z, the minimum in problem (39) coincides with the minimum in problem (40); from here g_2 = l_p. Since the sets S_3^p and S_3* are sufficiently close (see Eq. (37)), the minima of problems (38) and (39) are sufficiently close. Consequently, we have

k_p − l_p ≤ ε_1,

where ε_1 is a sufficiently small value. It is clear that if ρ(z^i, z^j) → 0 then (k_p − l_p) → 0. Thus the statement is proved.

3. Method of solving the two-stage optimization problem

For the case when the functions ψ_j(d, z, θ) are jointly convex in z and θ, Halemane and Grossmann (1983) proposed a method of solving the TSOP which requires calculation of χ(d, θ) at many vertices of T. The drawback of this algorithm is that the number of vertices increases exponentially with the number of parameters θ_i. Of course, if the functions ψ_j(d, z, θ) are not jointly convex, then the solution of problem (5) need not correspond to a vertex.

Using the ACS approach, Pistikopoulos and Grossmann (1989) proposed a method of solving a retrofit design problem which is close to problem (3), (4). It reduces the original problem to a single nonlinear programming problem. The drawback of this method is that the number of possible active constraint sets may be large. In this case the problem may become a very large nonlinear programming problem. Varvarezos et al. (1995) presented a novel approach for flexibility evaluation and design of linear processes. It is based on sensitivity analysis and linear programming.

Later we shall use the solution of the following auxiliary problem:

f_1^{(k)} = min_{d, z^i} Σ_{i∈I_1} w_i f(d, z^i, θ^i),   (42)

ψ(d, z^i, θ^i) ≤ 0,  i ∈ I_1,   (43)

ψ(d, z^j, θ^j) ≤ 0,  j ∈ I_2^{(k)},   (44)


where S_2^{(k)} = {θ^j: j ∈ I_2^{(k)}} is a set of points which we call θ-critical points, I_2^{(k)} is the set of indices of the θ-critical points, and k is the number of an iteration in an iterative procedure which will be described later. Now we give some auxiliary relations. We showed (Ostrovsky et al., 1997) that the following inequality holds:

f_1^{(k)} ≤ f_1.   (45)

We also showed that if the inequality

F_1(d^{(k)}) ≤ 0

holds, then the solution of problem (42)–(44) is the solution of problem (6)–(8). Now we shall describe an algorithm for solving the TSOP. Each iteration of the algorithm consists of two steps. In the first step, a lower estimate of the optimal value of the objective function of the TSOP is determined by solving problem (42)–(44). In the second step, some 'suspicious' θ-critical points are added to the set S_2^{(k)}. The algorithm for solving the TSOP is presented below.

Algorithm 2.
Step 1. Set k = 1. Choose a set S_1 of approximation points and an initial set S_2^{(0)} of θ-critical points; assign initial values d, z^i (i ∈ I_1), z^j (j ∈ I_2^{(0)}).
Step 2. Solve problem (42)–(44). Let d^{(k)}, z^{i,(k)} (i ∈ I_1), z^{j,(k)} (j ∈ I_2^{(k−1)}) be the optimal values of the search variables.
Step 3. Solve problem (5). Let θ^{(k)} be the solution of problem (5).
Step 4. Check the condition

F_1 ≤ 0.   (46)

If this condition is satisfied, the solution of the problem is obtained; otherwise go to Step 5.
Step 5. Construct the new set S_2^{(k+1)} of θ-critical points:

S_2^{(k+1)} = S_2^{(k)} ∪ {θ^{(k)}}.

Set k = k+1 and go to Step 2.
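Algorithm 2 can be sketched end to end on a hypothetical toy TSOP (our code; a grid of candidate designs replaces the NLP solver of (42)–(44), a θ-grid replaces the flexibility computation of Step 3, and the cost function f(d) = d together with the constraint functions is invented for illustration):

```python
def algorithm2(cost, psis, d_grid, z_grid, theta_grid, theta_nom,
               eps=1e-6, max_iter=20):
    S2 = [theta_nom]                   # Step 1: initial theta-critical set
    for _ in range(max_iter):
        # Step 2, problem (42)-(44) on a grid: cheapest design d that is
        # two-stage feasible at every current theta-critical point.
        feasible = [d for d in d_grid
                    if all(min(max(p(d, z, th) for p in psis)
                               for z in z_grid) <= 0 for th in S2)]
        d_k = min(feasible, key=cost)
        # Step 3, problem (5): worst-case theta for the current design.
        theta_k = max(theta_grid,
                      key=lambda th: min(max(p(d_k, z, th) for p in psis)
                                         for z in z_grid))
        F1 = min(max(p(d_k, z, theta_k) for p in psis) for z in z_grid)
        if F1 <= eps:                  # Step 4, condition (46)
            break
        S2.append(theta_k)             # Step 5: extend S_2
    return d_k

psis = [lambda d, z, th: th - z - d, lambda d, z, th: z - 1.0]  # hypothetical
d_opt = algorithm2(lambda d: d,                  # invented cost: f(d) = d
                   psis,
                   [i / 10 for i in range(21)],  # candidate designs in [0, 2]
                   [i / 200 for i in range(201)],
                   [i / 100 for i in range(201)],
                   theta_nom=0.0)
```

The nominal point θ = 0 admits the cheapest design d = 0; the flexibility test then returns the worst case θ = 2 as a new critical point, and the re-solved design problem yields d = 1, for which F_1(d) = 0: the cheapest flexible design. This mirrors the remark in the Introduction that nominal-point design alone does not ensure feasibility under parameter variations.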

The drawback of Algorithm 2 is that at each iteration we must carry out the very laborious procedure of calculating the value F_1. In connection with this we suggest a different algorithm, which has the following two peculiarities:

1. For checking the stopping criterion of the iterative procedure and for the extension of the set S_2^{(k)} of θ-critical points, Algorithm 2 uses the same value F_1. In the new algorithm we shall use the lower estimate F_1^{p,(q)} of the upper estimate F_1^p in the stopping criterion of the iterative procedure. It is seen from (25) that if F_1^{p,(q)} ≥ 0 then F_1^p ≥ 0. From here, an extension of the set of θ-critical points will happen if the upper estimate of the value F_1 is greater than zero. Since F_1^p is an upper estimate of F_1, if F_1^p ≤ 0 then F_1 ≤ 0. So F_1^p can be used in a stopping criterion.

2. The extension of the sets of z-critical and θ-critical points will happen simultaneously. This requires calculation of the value F̃_1^p:

F̃_1^p = { F_1^p         if F_1^p ≤ 0,
         { F_1^{p,(q̄)}   if F_1^{p,(q̄)} > 0,   (47)

where q̄ is the number of the first cycle in the algorithm of the upper-estimate calculation at which the value F_1^{p,(q)} becomes greater than zero:

q̄ = min{q: F_1^{p,(q)} > 0}.

So, the algorithm for calculating F_1^p must interrupt the calculation at the cycle at which F_1^{p,(q)} becomes positive for the first time, and continue the calculation to the end if all the F_1^{p,(q)} are negative.

Algorithm 3.
Step 1. Set k = 1, t = 0. Choose some set S_1 of approximation points and initial sets S_2^{(0)} of θ-critical points and S_3^r of z-critical points (r is the number of points in S_3^r); assign initial values of the variables d, z^i (i ∈ I_1), z^j (j ∈ I_2^{(0)}).
Step 2. Solve problem (42)–(44). Let d^{(k)}, z^{i,(k)} (i ∈ I_1), z^{j,(k)} (j ∈ I_2^{(k)}) be the solution of the problem.
Step 3. Find the value F̃_1^{r+t}. Let θ^{(k)} be the solution of the problem.
Step 4. Check the condition

F̃_1^{r+t} ≤ ε.   (48)

If it is satisfied, the iterative procedure is stopped; otherwise go to Step 5.
Step 5. Solve the problem

l_k = min_{z∈Z} max_{j∈J} ψ_j(d, z, θ^{(k)}).

Let z^{(k)} be the solution of the problem.
Step 6. Check the condition

l_k ≤ 0.   (49)

If the condition is satisfied, go to Step 7; otherwise go to Step 9.
Step 7. Calculate F_1 by means of Algorithm 1. Let S_3* be the limiting set of z-critical points and θ* the solution of problem (5).
Step 8. Check the condition

F_1 ≤ ε.   (50)

If the condition is satisfied, the solution of the problem is obtained; otherwise go to Step 9.
Step 9. Check the condition

|f_1^{(k)} − f_1^{(k−1)}| ≤ ε,   (51)

where ε is a sufficiently small value. If it is satisfied, go to Step 10; otherwise go to Step 12.


Step 10. Set

t̄ = { t + 1         if l_k ≥ 0,
   { |S_3^{(k)}|   if l_k < 0,

where |S_3^{(k)}| is the number of z-critical points in S_3^{(k)}.
Step 11. Construct the new set of z-critical points for the next iteration:

S_3^{(r+t)} = S_3^{(r+t)} ∪ ΔS_3,

where

ΔS_3 = { {z^{(k)}}   if l_k ≥ 0,
      { S_3*        if l_k < 0.

Step 12. Set t = t̄.
Step 13. Construct the new set of θ-critical points:

S_2^{(k+1)} = S_2^{(k)} ∪ {θ^{(k)}}.

Step 14. Set k = k+1 and go to Step 2.

Now we shall make a number of remarks about this algorithm.

1. The value t is equal to the number of points which have been added to the set of z-critical points.

2. Here we explain condition (51). With condition (48) used as the criterion for the enlargement of the set of θ-critical points S_2^{(k)} (see Steps 4 and 13), a situation can arise in which on two successive iterations the following conditions are satisfied:

S_2^{(k+1)} = S_2^{(k)},   (52)

∀j ∈ I_2^{(k+1)}  ∃p ∈ J:  ψ_p(d^{(k)}, z^{j,(k)}, θ^{j,(k)}) > 0.   (53)

It is easy to see that if relations (52), (53) hold, then the values d^{(q)}, z^{i,(q)} (i ∈ I_1), z^{j,(q)} (j ∈ I_2^{(q)}) will remain constant, but the design d^{(q)} will be infeasible. A close situation arises if condition (52) is replaced by the condition of sufficient nearness of the sets S_2^{(k+1)} and S_2^{(k)}. It is clear that in both cases condition (51) will be satisfied. An enlargement of the set of z-critical points will therefore happen under fulfilment of conditions (52), (53) or conditions close to them.

3. Here we explain Steps 6–8. At first, assume that these steps are absent from the algorithm. In this case, if F̃_1^{r+t} > ε and condition (51) is not satisfied, then at Step 13 the suspicious point θ^{(k)} will be added to the set S_2^{(k)}. It follows from the inequalities

F_1^{r+t} ≥ F̃_1^{r+t} > 0,  F_1^{r+t} ≥ F_1

that at the point θ^{(k)} the upper estimate of the flexibility function is greater than zero.

Now let Steps 6–8 be included in the algorithm. In this case, if inequality (49) is satisfied, it means that at the point θ^{(k)} condition (10) is not violated, and therefore one must not add the point θ^{(k)} to the set S_2^{(k)}. In this case it is advisable to complete the calculation of F_1. If condition (50) is satisfied, then the solution of the problem is obtained. Otherwise, the point θ* obtained at Step 7 will be added to the set S_2^{(k)} at Step 13.

4. Comparative analysis of methods of flexibility function calculation and the two-stage optimization problem

By using the developed method for the calculation of the flexibility function, we must solve the nonlinear programming problems (23) and (31). A local maximum of problem (23) coincides with the global maximum of the problem if the functions ψ_j(d, z, θ) are quasi-concave in θ for fixed z (Bazaraa and Shetty, 1979). A local minimum of problem (31) coincides with the global minimum of the problem if the functions ψ_j(d, z, θ) are quasi-convex in z for fixed θ. So, by using the usual (local) methods of nonlinear programming, we shall find the global solution if the following condition is satisfied.

Condition 3. The functions ψ_j(d, z, θ) are quasi-concave in θ for fixed z and quasi-convex in z for fixed θ.

For solving the TSOP one must solve problem (42)–(44) besides problems (23) and (31). A local minimum of problem (42)–(44) coincides with the global minimum of the problem if, in addition to Condition 3, the following condition is satisfied.

Condition 4. The functions ψ_j(d, z, θ) and f(d, z, θ) are jointly quasi-convex in d and z.

Now we compare the developed approach withHalemane and Grossmann’s method. Halemane andGrossmann’s method is based on the assumption thatthe solution of problem (5) is in one of the vertices of¹. It can be used only in this case. The developedapproach does not use this assumption; therefore, itcan be applied in those cases, when Halemane andGrossmann’s method cannot be applied. Now wecompare the developed approach with the ACSmethod. The ACS method gives the global solution ofproblem (11) if Condition 2 is satisfied. This conditionconstricts the region of application of the method. Forexample, if any function t

j(d, z, h) has the minimum

strictly in an interior of the feasible region Z thenCondition 2 is not satisfied. On the other hand, Con-dition 3 is essentially weaker and therefore the de-veloped method can be applied in a larger number ofcases. Thus if Condition 3 is satisfied and Conditions1 and 2 are not satisfied then the developed approachhas an essential advantage in comparison with theHalemane and Grossmann method and the ACSmethod. Of course, there exist some cases where theHalemane and Grossmann method or the ACSmethod will have advantages. For example, if Condi-tion 1 is satisfied and the dimensionality of the vectorh is small then the Halemane and Grossmann methodwill have an advantage. If Condition 2 is satisfied andthe number of possible active sets is not large then theACS method will have an advantage. Thus the


Table 1. Number of main iterations / number of solutions of problem (23)

Variant   Algorithm 2   Algorithm 3
1         2 / 96        2 / 84
2         2 / 96        2 / 60
3         1 / 8         1 / 8

Table 2. CPU time (s)

Variant   Algorithm 2   Algorithm 3
1         17            15
2         17            12
3         3             3

Table 3. Optimal design variables and criterion

Variant   V̄       A      f
1         6.62    7.45   10140
2         6.63    7.28   9824
3         6.63    8.96   10760

developed method must not be opposed to the Halemane and Grossmann method and the ACS method. On the contrary, our consideration shows that it supplements them. Besides, for many real problems it is very difficult to establish whether the functions ψ_j(d, z, θ) belong to the class of concave or convex functions. In this case, after solving the TSOP by any one of the considered methods, we cannot guarantee that the obtained solution is global; in particular, we cannot guarantee the flexibility of a CP. Therefore it is desirable to have a collection of methods that rest on different concavity or convexity requirements on the functions ψ_j(d, z, θ).
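The role of the convexity assumptions can be seen on a one-dimensional toy example (invented for illustration, not taken from the paper): a function convex in θ attains its largest value over an interval at an endpoint, which is exactly what vertex-based methods exploit, while a concave function can peak strictly inside the interval, where a vertex search misses the worst case.

```python
# Illustration with invented functions: the worst-case theta of a
# constraint convex in theta sits at an interval endpoint ("vertex"),
# while a concave constraint can peak strictly inside the interval.
def argmax_on_grid(g, lo, hi, n=1001):
    """Crude grid search for a maximizer of g on [lo, hi]."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return max(xs, key=g)

convex_psi = lambda th: th * th - th       # convex in theta on [0, 1]
concave_psi = lambda th: -(th - 0.3) ** 2  # concave, peak at theta = 0.3

print(argmax_on_grid(convex_psi, 0.0, 1.0))   # 0.0, an endpoint
print(argmax_on_grid(concave_psi, 0.0, 1.0))  # 0.3, strictly interior
```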

5. Computational experiment

Algorithms 2 and 3 were used for the optimization of a flowsheet consisting of a reactor and a heat-exchanger (Halemane and Grossmann, 1983). The formulation of the problem is described in Halemane and Grossmann (1983) (see Ostrovsky et al. (1994) as well). There are two design variables, the volume V̄ of the reactor and the heat transfer area A of the heat-exchanger, and three control variables, the reactor temperature T_1, the reaction volume V and the outlet temperature T_w2 of the heat-exchanger. We consider three variants which differ in the number of approximation points: the first variant uses the first point, the second variant the first two points, and the third variant five points from the initial set of parameter points used by Halemane and Grossmann (1983). The initial set S_3^(0) of z-critical points contained one point (T_1 = 384, T_w2 = 350, V = 10). For solving the nonlinear programming problems we used the successive quadratic programming method (Schittkowski, 1983). The values of ε in Eqs. (48), (50) and (51) are equal to 5×10^-3.

Tables 1 and 2 give the computational expenditures needed to solve variants 1, 2 and 3 of the optimization problem by means of Algorithms 2 and 3. Each cell of Table 1 contains two numbers: the first is the number of main iterations of the algorithm (the number of solutions of problem (42)-(44)) and the second is the number of solutions of problem (23). Table 2 gives the CPU times (s) for solving variants 1, 2 and 3 on an IBM-compatible PC AT with an Intel i486-DX50 processor. It is interesting to note that Halemane and Grossmann's algorithm required 162 s (DEC-20). Optimal values of the design variables and the criterion for all the variants are given in Table 3.

The result coincides with that of Halemane and Grossmann (1983). But Halemane and Grossmann solved the problem under the assumption that the solution of the max-min-max problem lies at one of the vertices of T; they did not prove that the functions ψ_j are jointly convex in z and θ, so there is no guarantee that the obtained solution is correct. We obtained the same solution with our method, which rests on different requirements on the functions ψ_j (the Halemane and Grossmann method requires convexity of the functions ψ_j in θ, whereas our approach requires quasi-concavity of the functions ψ_j in θ). So, having solved the problem by both methods, we can speak with greater certainty about the correctness of the obtained solution.
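The max-min-max structure mentioned above can be sketched by brute force on a toy problem. The constraint functions, grids and design values below are invented for illustration; they are not the reactor/heat-exchanger model.

```python
# Toy brute-force sketch of the max-min-max flexibility test
#   chi(d) = max over theta of min over z of max over j of psi_j(d, z, theta);
# chi(d) <= 0 means design d stays feasible for every uncertain theta.
# All constraint functions and grids here are invented for illustration.
def flexibility(d, psis, thetas, zs):
    worst = float("-inf")
    for th in thetas:                        # outer max over uncertainty
        best = min(max(psi(d, z, th) for psi in psis) for z in zs)
        worst = max(worst, best)             # inner min over control z
    return worst

grid = [i / 50.0 for i in range(51)]         # crude grids on [0, 1]
psis = [lambda d, z, th: th - z - d,         # invented constraint 1
        lambda d, z, th: z - d - 0.8]        # invented constraint 2

print(flexibility(0.5, psis, grid, grid))    # negative: d = 0.5 is flexible
print(flexibility(0.0, psis, grid, grid))    # positive: d = 0.0 is not
```

Grid enumeration replaces the nonlinear programming subproblems of the paper; it only serves to make the nesting of the three optimizations concrete.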

6. Conclusion

In this paper new algorithms for calculating the flexibility function and for solving the two-stage optimization problem are presented. The algorithm for the flexibility function computes upper and lower bounds on it. Calculation of the upper bound reduces to solving a mixed-integer nonlinear programming problem; by exploiting the special structure of this problem we reduced its solution to the multiple solution of ordinary nonlinear programming problems (23). The formulation of problem (23) uses a set of z-critical points. An important part of the algorithm is the accumulation of z-critical points in the set S_3 so as to reduce the difference between the upper and lower bounds.


On the basis of this algorithm we considered two algorithms for solving the TSOP. At each iteration of both algorithms, a lower estimate of the optimal value of the objective function of the TSOP is calculated; the method of calculating this estimate uses the set S_2^(k) of θ-critical points. In the first algorithm, a complete calculation of the flexibility function is performed at each iteration, and if the design d is infeasible (F_1 > 0) the point θ^(k) is added to the set S_2^(k). The drawback of this algorithm is that the time-consuming calculation of F_1 is performed at each iteration. In the second algorithm, in the majority of cases only the value F̃_1^(r+1) (a lower estimate of the upper estimate of the flexibility function) is calculated, and the set S_2^(k) is enlarged if F̃_1^(r+1) > 0. Computational experiments showed that the second algorithm can be more efficient.

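The accumulation of critical points follows a cut-generation pattern that can be sketched as follows. The toy constraint, cost function and grids below are invented; the inner minimization over the control variables z is collapsed into ψ for brevity, and grid search stands in for the nonlinear programming subproblems of the paper.

```python
# Minimal sketch (invented toy functions) of the critical-point
# accumulation scheme: optimize the design against a finite set S of
# accumulated theta points, then search for a worst-case theta; if it
# violates feasibility, add it to S and repeat.
def solve_tsop_sketch(psi, cost, d_grid, theta_grid, tol=1e-9):
    S = [theta_grid[0]]                     # initial critical-point set
    for _ in range(20):                     # outer iterations
        # "design" step: cheapest d feasible for all accumulated thetas
        d = min((d for d in d_grid if all(psi(d, th) <= tol for th in S)),
                key=cost)
        # "flexibility" step: worst uncertain parameter for this design
        th_worst = max(theta_grid, key=lambda th: psi(d, th))
        if psi(d, th_worst) <= tol:         # feasible for every theta: done
            return d, S
        S.append(th_worst)                  # accumulate the critical point
    raise RuntimeError("no convergence in 20 iterations")

psi = lambda d, th: th - d                  # toy constraint: need d >= theta
cost = lambda d: d                          # prefer small designs
d_grid = [i / 10.0 for i in range(11)]
theta_grid = [0.2, 0.5, 0.9]
print(solve_tsop_sketch(psi, cost, d_grid, theta_grid))  # (0.9, [0.2, 0.9])
```

Note that only two of the three theta points ever enter S: the design step converges as soon as the accumulated set contains the binding worst case, which is the economy the second algorithm exploits.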
The developed algorithms are based on different requirements on the functions ψ_j than the Halemane and Grossmann method and the ACS method. We showed that the Halemane and Grossmann method, the ACS method and our methods supplement one another.

References

Bazaraa, M.S. and Shetty, C.M. (1979) Nonlinear Programming: Theory and Algorithms. Wiley, New York.

Beveridge, G.S. and Schechter, R.S. (1970) Optimization: Theory and Practice. McGraw-Hill, New York.

Grossmann, I.E. and Sargent, R.W.H. (1978) Optimum design of chemical plants with uncertain parameters. A.I.Ch.E. J. 24(6), 1021-1028.

Grossmann, I.E. and Floudas, C.A. (1987) Active constraint strategy for flexibility analysis in chemical processes. Comput. Chem. Engng 11(6), 675-693.

Halemane, K.P. and Grossmann, I.E. (1983) Optimal process design under uncertainty. A.I.Ch.E. J. 29(3), 425-433.

Ostrovsky, G.M., Volin, Yu.M. and Senyavin, M.M. (1997) An approach to solving two-stage optimization problem. Comput. Chem. Engng 21(3), 317.

Pistikopoulos, E.N. and Grossmann, I.E. (1989) Optimal retrofit design for improving process flexibility in nonlinear systems. Comput. Chem. Engng 13(9), 1003.

Varvarezos, D.K., Grossmann, I.E. and Biegler, L.T. (1995) A sensitivity based approach for flexibility analysis and design of linear process systems. Comput. Chem. Engng 19(12), 1301-1316.
