
Sensitivity Analysis in Stochastic Flow Networks Using the Monte Carlo Method

Christos Alexopoulos and George S. Fishman

School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332, and Department of Operations Research, University of North Carolina, Chapel Hill, North Carolina 27599

Consider a flow network whose nodes do not restrict flow transmission and whose arcs have random, discrete, and independent capacities. Let s and t be a pair of selected nodes, let Λ denote the value of a maximum s-t flow, and let Γ denote a set of s-t cuts. Also, let 𝒬 denote a set of independent joint capacity distributions with common state space. For fixed l < u, this paper develops methods for approximating the probability that l ≤ Λ < u and the probability that a cut in Γ is minimum given that l ≤ Λ < u for each distribution in 𝒬. Since these evaluations are NP-hard problems, it shows how information obtained during an iterative procedure for computing the probability that l ≤ Λ < u can be used for designing an efficient Monte Carlo sampling plan that performs sampling at few capacity distributions and uses the sampling data to estimate the probabilities of interest at each distribution in 𝒬. The set of sampling distributions is chosen by solving an uncapacitated facility location problem. The paper also describes techniques for computing confidence intervals and includes an algorithm for implementing the sampling experiment. An example illustrates the efficiency of the proposed method. The method is applicable to the computation of performance measures for networks whose elements have discrete random weights (lengths, gains, etc.) for a set of joint weight distributions with common state space. © 1993 by John Wiley & Sons, Inc.

1. INTRODUCTION

Designing a flow network to serve a specific purpose calls for the enumeration and evaluation of many alternative network configurations before arriving at the preferred arrangement. If the purpose is to establish a flow between two points s and t that exceeds a specified value, then one minimally needs to compute an s-t flow for each design and to compare it with the corresponding s-t flow values for the remaining alternatives and, in particular, with the specified minimum. If the prototypical network has components whose capacities degrade in random fashion, a single flow value per alternative does not suffice for making an informed decision. In this case, the probability that a feasible s-t flow exists serves as a more relevant decision variable.

This research was supported by the Air Force Office of Scientific Research under grant AFOSR-84-0140. Reproduction in whole or part is permitted for any purpose of the United States Government. Technical Report No. UNC/OR/TR-88/5.

An example illustrates the importance of the problems addressed in this paper. Consider a power supply network whose transmission lines are numbered 1, . . . , e and have random independent capacities. Assume that the capacity of line i takes on values b_i2 and b_i1 with probabilities q_i and 1 − q_i, respectively, where 0 ≤ b_i1 < b_i2. Let q = (q_1, . . . , q_e) and let Λ denote the value of a maximum s-t flow. Suppose that flow of value at least u has to be supplied from s to t, where flow cannot be observed and only an alarm indicates that Λ < u. Then, the system reliability g(u,∞,q) = Pr(Λ ≥ u) gives the probability that sufficient flow exists from s to t. In addition, assume that one can measure s-t flows at a specific node j ≠ t, where each flow has value less than or equal to its value at t. Suppose that the system fails and the value of the maximum flow at j equals l. To repair the system, one must locate the minimum cut(s) and upgrade the capacities of some of their lines. Knowing cuts 𝒞 with large conditional probabilities h(𝒞,l,u,q) of being minimum when l ≤ Λ < u provides valuable information about potential bottlenecks restricting flow in cases of system failure.

NETWORKS, Vol. 23 (1993) 605-621 © 1993 by John Wiley & Sons, Inc. CCC 0028-3045/93/070605-17

Suppose it is desirable to build the system with lines whose reliabilities q_i make g(u,∞,q) large and with a cut 𝒞 consisting of easily repairable lines as a highly probable bottleneck when l ≤ Λ < u. The ability to compute g(u,∞,q) and h(𝒞,l,u,q) for a set of reliability vectors q by using information obtained during the computation for a specific vector p makes a significant contribution to the solution of the latter design problem.

Since the exact evaluation of g(u,∞,q) and h(𝒞,l,u,q) belongs to the class of NP-hard problems [4, 5], no polynomial algorithm is known for computing them. This property limits one's ability to evaluate an alternative for a moderate-to-large network, and the need to perform this evaluation for many alternative designs merely compounds the seriousness of the problem. Algorithms dealing with the exact computation of these probabilities include Bukowski [6], Doulliez and Jamoulle [8], and Evans [9], but all encounter the computational intractability of g(l,∞,q), making their use limited as the size of the network grows. However, a method based on Monte Carlo sampling does exist [12] for estimating the feasible flow probability g(u,∞,q) for many reliability vectors q simultaneously. The technique draws samples according to a single vector p and then converts the resulting data into suitable form to provide estimates of critical quantities of interest for each reliability vector under consideration. Collectively, we refer to this methodology as a sensitivity analysis.

This paper has three objectives: First, for fixed l < u, it proposes an iterative approach for constructing a decreasing sequence of upper bounds on the probability that l ≤ Λ < u for the general case of multistate capacities with joint probability mass function (p.m.f.). Second, it shows how the information at each stage of this approach can be used for efficiently estimating the latter probability and the probability that a cut 𝒞 is minimum when l ≤ Λ < u for a set of p.m.f.'s with common state space. Third, it gives an account of how a sensitivity analysis works in practice. The proposed methodology is applicable to the computation of performance measures for networks whose elements have discrete random weights (lengths, gains, etc.) for a set of joint weight distributions with common state space. We begin with several formalisms that set the stage for the analysis.

Let G = (𝒱, ℰ, s, t) denote the flow network, where 𝒱 denotes the set of nodes, ℰ = {1, . . . , e} denotes the set of arcs, s is the source node, and t is the sink node. Nodes have infinite capacities, but every arc i ∈ ℰ has a random, discrete (nonnegative) capacity B_i taking values in the set {b_i1 < · · · < b_in_i < ∞} with probabilities q_i1, . . . , q_in_i, respectively. Let Ω denote the state space of the random vector B = (B_1, . . . , B_e). A state x of Ω can be written as an e-tuple x = (b_1v_1, . . . , b_ev_e), where v_i takes values in the set {1, . . . , n_i}. To simplify the notation, the index v_i will be used to denote b_iv_i itself, so that a state point x will also be denoted by v = (v_1, . . . , v_e). Assume that the capacities are statistically independent. Then, the joint p.m.f. of B is given by

q(v) = Pr(B_1 = b_1v_1, . . . , B_e = b_ev_e) = ∏_{i=1}^{e} q_iv_i,  v ∈ Ω.

Hereafter, the independent p.m.f.'s q with q(v) > 0 for all v ∈ Ω are called points.

For any v ∈ Ω, let Λ(v) denote the value of a maximum s-t flow in G when the capacities are v. The random variable Λ = Λ(B) will denote the maximum flow value as a function of the capacities B. An s-t cut 𝒞 is a partition (W, W̄) of 𝒱 such that s ∈ W and t ∈ W̄. Hereafter, the cut 𝒞 will be denoted by the set of arcs {(i,j): i ∈ W, j ∈ W̄}. Also, the terms maximum flow and maximum s-t flow will be used interchangeably, and the word cut will denote an s-t cut. For v ∈ Ω, the capacity of the cut 𝒞, denoted by Z(𝒞,v), is defined as Σ_{i∈𝒞} b_iv_i. The cut 𝒞 is minimum if Z(𝒞,v) = Λ(v). Let Γ*(v) denote the set of all minimum cuts when the arc capacities are v.

Consider a set Γ of cuts and fix l < u. For all points q in a set 𝒬, this paper studies the computation of

g(q) = g(l,u,q) = Pr(l ≤ Λ < u; q),   (1)

f(Γ,q) = f(Γ,l,u,q)
       = Pr(Γ ∩ Γ*(B) ≠ ∅ and l ≤ Λ < u; q)   (2)
       = probability that a cut in Γ is minimum and l ≤ Λ < u

and, in particular,

h(Γ,l,u,q) = Pr(Γ ∩ Γ*(B) ≠ ∅ | l ≤ Λ < u; q)   (3)
           = conditional probability that a cut in Γ is minimum when l ≤ Λ < u
           = f(Γ,q)/g(q) if g(q) > 0 and 0 otherwise.


When Γ is a singleton set, we write 𝒞 for Γ = {𝒞}. The proposed approach can be readily applied to the computation of the conditional probability that arc i belongs to a minimum cut given that l ≤ Λ < u for each i ∈ ℰ and q ∈ 𝒬.

For v ∈ Ω, define

φ(v) = I(l ≤ Λ(v) < u) and ψ(v,Γ) = I(Γ ∩ Γ*(v) ≠ ∅),

where I(·) is the indicator function. Then, one can write

g(q) = Σ_{v∈Ω} φ(v) q(v) and f(Γ,q) = Σ_{v∈Ω} φ(v) ψ(v,Γ) q(v).

The degree of difficulty associated with the computation of these probabilities at a single point, even for networks of moderate size, has motivated the search for approximation methods. One approach relies on bounds, as in Frank and Frisch [14]. Another uses Monte Carlo sampling. For the present network setting, Alexopoulos and Fishman [4] described an efficient plan that uses importance sampling based on an upper bound on g(q) to estimate the probabilities in (2) at a single point q; it improves considerably on the accuracy of the estimates produced by a crude Monte Carlo sampling plan.

The present paper makes the following four contributions: (1) The bound for g(q) in [4] is based on information from arc-disjoint cuts and s-t paths. Since this bound is computed via convolutions, its time requirements depend on the sizes of the capacity levels b_ik, and its evaluation must be repeated for each q ∈ 𝒬. On the contrary, the proposed sequence of bounds is generated by a sequential decomposition of the state space Ω that is independent of the probabilities q_ik. The procedure borrows ideas from the method in Doulliez and Jamoulle [8], proposed for the evaluation of the probability g(d,∞,q) for fixed d ≥ 0, and leads to arbitrarily tight bounds, at the cost of computing effort. (2) The sampling plan in [4] is quite complex, whereas the proposed sampling plan is very simple and requires no additional mean time per replication over a crude Monte Carlo sampling plan. (3) The method in [12] is extended to the multistate network setting and the problem of ratio estimation. (4) It studies the selection of multiple sampling points when the set 𝒬 is large. This issue is resolved by solving a facility location problem. Overall, the paper shows that the proposed method produces estimates that are more accurate than the estimates the sampling plan in [4] produces at these points for the same amount of work.

Section 2 discusses briefly the estimation of g(q), f(Γ,q), and h(Γ,q) at a point q using crude Monte Carlo sampling, develops the method for evaluating g(q), and describes an importance sampling plan for estimating these probabilities. Section 3 extends the latter sampling plan to the estimation of the functions {g(q), q ∈ 𝒬}, {f(Γ,q), q ∈ 𝒬}, and {h(Γ,q), q ∈ 𝒬}, and Section 4 describes measures for judging the performance of the sampling plan of Section 3 vs. the sampling plans in Section 2. Section 5 proposes a procedure for choosing sampling points, and Section 6 describes individual and simultaneous confidence intervals. Section 7 provides an example.

2. ESTIMATION AT A POINT

The simplest simulation technique consists of independent replications with samples drawn directly from the underlying capacity distribution and is called crude Monte Carlo. Let B^(1), . . . , B^(K) denote K independent samples from the p.m.f. q(v). Then,

ḡ_K(q) = (1/K) Σ_{k=1}^{K} φ(B^(k))   (4)

and

f̄_K(Γ,q) = (1/K) Σ_{k=1}^{K} φ(B^(k)) ψ(B^(k),Γ)   (5)

are unbiased estimates of g(q) and f(Γ,q) with variances

var ḡ_K(q) = g(q)[1 − g(q)]/K   (6)

and

var f̄_K(Γ,q) = f(Γ,q)[1 − f(Γ,q)]/K.   (7)

As a crude estimate of h(Γ,q), one has

h̄_K(Γ,q) = f̄_K(Γ,q)/ḡ_K(q) if ḡ_K(q) > 0 and 0 otherwise.

On each replication, sampling from q takes O(|ℰ|) time on average using the cutpoint method in Fishman and Moore [13], computing a maximum flow takes O(|𝒱|³) time using the algorithm in Papadimitriou and Steiglitz ([19], pp. 202-211), and determining ψ(v,Γ) takes O(|Γ||𝒱|) time. For planar networks with s and t on the exterior face, the algorithm in Itai and Shiloach [16] computes a maximum flow in O(|𝒱| log²|𝒱|) time. See Goldfarb and Grigoriadis [15] for a review of network flow algorithms.
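As a concrete illustration of the crude Monte Carlo plan, the sketch below estimates g(q) = Pr(l ≤ Λ < u) on a hypothetical three-arc network with two-state capacities (not an example from the paper) and compares the estimate with the value obtained by enumerating the 2³ capacity states:

```python
import itertools
import random

# Hypothetical 3-arc network: s->m (arc 0), m->t (arc 1), s->t (arc 2),
# so the maximum s-t flow value is min(b0, b1) + b2.
levels = [(0, 2), (0, 3), (0, 1)]               # capacity levels b_i1 < b_i2
probs = [(0.2, 0.8), (0.3, 0.7), (0.4, 0.6)]    # (q_i1, q_i2), independent arcs
l, u = 2, 4                                     # estimate g(q) = Pr(l <= Lambda < u)

def flow(v):
    # Lambda(v) for this particular topology
    b = [levels[i][v[i]] for i in range(3)]
    return min(b[0], b[1]) + b[2]

def phi(v):
    # indicator I(l <= Lambda(v) < u)
    return 1.0 if l <= flow(v) < u else 0.0

# Exact value of g(q) by enumerating all 2^3 capacity states
exact = sum(phi(v) * probs[0][v[0]] * probs[1][v[1]] * probs[2][v[2]]
            for v in itertools.product((0, 1), repeat=3))

# Crude Monte Carlo: g_K = (1/K) sum_k phi(B^(k)), variance g(1-g)/K
rng = random.Random(1)
K = 200_000
hits = sum(phi(tuple(0 if rng.random() < probs[i][0] else 1 for i in range(3)))
           for _ in range(K))
g_K = hits / K
print(exact, g_K)   # the estimate should lie within a few standard errors of exact
```

The enumeration replaces the max-flow computation that a realistic network would require; only the estimator itself mirrors (4).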


2.1. Sampling with Bounds

Consider a superset U of the event {v: l ≤ Λ(v) < u} with known probability Pr(U). Writing g_U(q) = Pr(U), one has f(Γ,q) ≤ g(q) ≤ g_U(q). Now suppose that one draws independent samples B^(1), . . . , B^(K) from the conditional p.m.f.

Q(v,q) = q(v)/g_U(q) if v ∈ U and 0 otherwise.   (8)

Then,

ĝ_K(q) = g_U(q) (1/K) Σ_{k=1}^{K} φ(B^(k))   (9)

and

f̂_K(Γ,q) = g_U(q) (1/K) Σ_{k=1}^{K} φ(B^(k)) ψ(B^(k),Γ)   (10)

are unbiased estimates of g(q) and f(Γ,q), respectively, with

var ĝ_K(q) = g(q)[g_U(q) − g(q)]/K and var f̂_K(Γ,q) = f(Γ,q)[g_U(q) − f(Γ,q)]/K.

An estimate of h(Γ,q) is

ĥ_K(Γ,q) = f̂_K(Γ,q)/ĝ_K(q) if ĝ_K(q) > 0 and 0 otherwise,   (11)

with expectation

E[ĥ_K(Γ,q)] = h(Γ,q) + o(1/K)

and   (12)

var ĥ_K(Γ,q) = h(Γ,q)[1 − h(Γ,q)] g_U(q)/[K g(q)] + o(1/K),

where o(x) is a function of x such that o(x)/x → 0 as x → 0. The variance reduction ratios

var ḡ_K(q)/var ĝ_K(q) = [1 − g(q)]/[g_U(q) − g(q)]   (13)

and

var f̄_K(Γ,q)/var f̂_K(Γ,q) = [1 − f(Γ,q)]/[g_U(q) − f(Γ,q)]   (14)

demonstrate the effects of an upper bound on reducing the variances. These ratios multiplied by T_c(q)/T(q), where T_c(q) and T(q) denote the mean times required per replication by the crude and the importance sampling plans, respectively, give variance reduction time ratios; each represents the amount of time that the crude Monte Carlo experiment requires to produce an estimate with the same variance as the proposed experiment produces if it were allowed to run for one time unit.
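The effect of the bound can be checked numerically. In the sketch below, g and g_U are hypothetical values; the bounded-plan estimator is simulated as g_U times a Bernoulli(g/g_U) variable, whose per-replication variance g(g_U − g) is compared with the crude value g(1 − g):

```python
import random

# Hypothetical values: g = Pr(l <= Lambda < u), g_U = Pr(U) >= g.
g, g_U = 0.03, 0.05
var_crude = g * (1 - g)          # per-replication variance, crude MC
var_bound = g * (g_U - g)        # per-replication variance, sampling on U
ratio = var_crude / var_bound    # equals (1 - g)/(g_U - g), here 48.5

# Empirical check: simulate the estimator g_U * phi(B), phi ~ Bernoulli(g/g_U)
rng = random.Random(11)
xs = [g_U * (rng.random() < g / g_U) for _ in range(400_000)]
m = sum(xs) / len(xs)
var_emp = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
print(ratio, var_emp)            # var_emp should be close to g * (g_U - g) = 0.0006
```

For rare events (small g) with a tight bound g_U, the ratio (1 − g)/(g_U − g) can be very large, which is the motivation for the bound of Section 2.2.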

2.2. An Upper Bound

Reference [4] describes an upper bound on g(q) based on flows through arc-disjoint paths and arc-disjoint cuts. The method proposed here takes advantage of the Doulliez-Jamoulle (D-J) state space decomposition method in [8]. This method is based on iteration and was proposed for the evaluation of the feasible flow probability g(d,∞,q) that Λ ≥ d. While the time to compute the bound g_U(q) is O(γ|𝒱|³ + Mn|ℰ|), where γ and M are integers ≥ 1 and n = max_i n_i, no special tables are needed for the sampling distribution, and the cost of sampling capacities is no greater than in the case of crude Monte Carlo sampling. These properties make the proposed approach more appealing than the alternative method in [4], where the time required for computing the distribution Q(v,q) depends on the capacity levels b_ik and the time to draw a sample from Q(v,q) is about twice as great as the sampling time from q(v).

For fixed demand d, a state v can be classified as either an operating state [Λ(v) ≥ d] or a failed state [Λ(v) < d]. A set S ⊆ Ω is called operating or failed if all states in S are operating or failed, respectively.

In short, the D-J method works as follows: At each iteration it determines an operating set, failed sets, and (mutually) disjoint undetermined sets. An undetermined set is one whose states cannot be classified at that iteration as operating or failed. These undetermined sets are used as input to the next iteration to determine additional operating and failed sets and, again, any remaining undetermined sets are used in the next iteration. The procedure ends with a total decomposition into operating and failed sets and an exact evaluation of g(d,∞,q). The operating and undetermined sets that are produced in each iteration are discrete rectangles (or lattices) in the sense that, for every such set S, there is a lower limiting point α[S] = (α_1[S], . . . , α_e[S]) and an upper limiting point β[S] = (β_1[S], . . . , β_e[S]) such that each integer vector v with α[S] ≤ v ≤ β[S] belongs to S. This set can then be denoted by S = {(α_1[S],β_1[S]), . . . , (α_e[S],β_e[S])}


and its probability can be computed by

Pr(S) = ∏_{i=1}^{e} Σ_{z=α_i[S]}^{β_i[S]} q_iz.
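Because the capacities are independent, the probability of a discrete rectangle factors by arc. A minimal sketch (hypothetical two-arc p.m.f.s, with zero-based level indices):

```python
import math

# Probability of a discrete rectangle S = {(alpha_i, beta_i)}: product over arcs
# of the total mass of the allowed capacity-level indices (independent capacities).
def rect_prob(alpha, beta, q):
    # q[i][z] = Pr(capacity of arc i is at level z), z = 0, ..., n_i - 1
    return math.prod(sum(q[i][z] for z in range(alpha[i], beta[i] + 1))
                     for i in range(len(q)))

q = [[0.2, 0.5, 0.3], [0.1, 0.9]]    # hypothetical 2-arc multistate p.m.f.s
print(rect_prob([0, 0], [2, 1], q))  # whole state space -> 1.0
print(rect_prob([1, 1], [2, 1], q))  # (0.5 + 0.3) * 0.9 = 0.72
```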

An undetermined rectangle S with lower and upper limits α and β is decomposed as follows: Create a fictitious demand node t′, add the arc e′ = (t,t′) with capacity d, and determine a maximum s-t′ flow f = (f_1, . . . , f_e, f_e′) with capacities β for the arcs in ℰ. If the value of this flow, Λ′(β), is less than d, none of the states in S can satisfy the demand d at node t, and then S is a failed subset of Ω. Otherwise (Λ′(β) = d), for each arc i define

v_i^o = min{z: α_i ≤ z ≤ β_i and b_iz ≥ f_i}.   (15a)

Obviously, the set

W = {v: v ∈ S and v_i^o ≤ v_i ≤ β_i ∀ i}   (15b)

is an operating rectangle. Now, using the flow f, we compute a maximum s-t flow f^o with value Λ(β). For each arc k with v_k^o = α_k, we set v_k^* = α_k. Then, for each arc i with v_i^o > α_i, we compute a maximum s-t flow with the capacity of arc i equal to zero. We denote the value of this flow by Λ(0_i,β). If Λ(0_i,β) = Λ(β), we set v_i^* = α_i. Otherwise, i belongs to a minimum cut when its capacity is ≤ b_iβ_i − Δb_i, where Δb_i = b_iβ_i − Λ(β) + Λ(0_i,β) ≥ 0. In this case, we define v_i^* as the smallest index z in {α_i, α_i + 1, . . . , v_i^o} for which b_iz ≤ Λ(β) − Λ(0_i,β) and Λ(0_i,β) + b_iz ≥ d. Then, the sets

are failed subsets of Ω and the rectangles

are disjoint, undetermined subsets of Ω. Furthermore, W, the L_i, and ∪_i F_i partition S. Note that the computation of v_i^* has been modified to correct an apparent error in [8].

To compute g(q) = g(l,u,q), one can apply the D-J method twice, the first time with d = l and the second with d = u, and use the equation g(q) = g(l,∞,q) − g(u,∞,q). However, the evaluation of g(q) can be accomplished with potential time savings if one observes that every state that is failed when d = l is also failed when d > l. Suppose then that the D-J method was applied for evaluating g(l,∞,q) and resulted in the operating rectangles W_1, . . . , W_J. Consider W_j = {(α_1,β_1), . . . , (α_e,β_e)}

for some 1 ≤ j ≤ J and set the capacity of arc e′ = (t,t′) to u. If Λ′(β) < u, then all the states v ∈ W_j satisfy l ≤ Λ(v) < u and Pr(W_j) is part of g(q). If Λ′(β) = u, then a D-J iteration produces a point v^o with Λ′(v^o) = u and an operating rectangle W (Λ′(v) = u ∀ v ∈ W) such that Pr(W) is not part of g(q). In addition, the rectangles

are disjoint subsets of Ω, and W_j is partitioned into W and the L_i. Note that the contribution of q(v) for v ∈ L_i to g(q) cannot be determined without further decomposing L_i.

The following steps use the ideas above to describe a procedure for evaluating g(q). The set 𝒲 contains operating rectangles when d = l, and the set 𝒰 contains rectangles to be decomposed in subsequent iterations. Steps 2-5 compute g(l,∞,q), and steps 7-11 decompose the operating rectangles that were obtained with d = l to eliminate states v with Λ(v) ≥ u.

Procedure DECOMPOSITION

1. Start with the rectangle S = Ω, α = (1, . . . , 1), and β = (n_1, . . . , n_e). Set 𝒲 = ∅, 𝒰 = ∅, d = l, and g(q) = 0.
2. If Λ′(β) = d, decompose S into an operating rectangle W given by (15b), failed sets F_i given by (16a), and undetermined rectangles L_i given by (16b).
3. Set g(q) = g(q) + Pr(W) and 𝒲 = 𝒲 ∪ {W}.
4. For i = 1, . . . , e: set 𝒰 = 𝒰 ∪ {L_i}.
5. If 𝒰 ≠ ∅, pick a set S = {(α_1,β_1), . . . , (α_e,β_e)} from 𝒰, set 𝒰 = 𝒰 − {S}, and go to step 2.
6. Set d = u.
7. If 𝒲 = ∅, go to step 12. Otherwise, pick a set S = {(α_1,β_1), . . . , (α_e,β_e)} from 𝒲 and set 𝒲 = 𝒲 − {S}.
8. If Λ′(β) < d, go to step 7. Otherwise, decompose S into an operating rectangle W and rectangles L_i given by (17).
9. Set g(q) = g(q) − Pr(W).
10. For i = 1, . . . , e: set 𝒲 = 𝒲 ∪ {L_i}.
11. Go to step 7.
12. End with g(q).

It should be noted here that each pass through step 2 requires at most |ℰ| + 1 maximum-flow evaluations. Therefore, the above procedure can compute the entire function {g(q), q ∈ 𝒬} in time O(A|𝒱|³|ℰ| + An|ℰ||𝒬|), where A is the total number of operating rectangles obtained in steps 2 and 8. Since A can be very large and the evaluation of f(Γ,q) requires further decomposition of the aforementioned rectangles, any decomposition approach for computing this probability can be time-consuming. Furthermore, the space required for storing the rectangles in the set 𝒰 for use in a Monte Carlo experiment for estimating f(Γ,q) can be prohibitively large.

Fortunately, after every iteration, procedure DECOMPOSITION leads to an upper bound on g(q) as follows: Suppose that at most γ_1 rectangles are decomposed in step 2 with d = l and at most γ_2 rectangles are decomposed in step 8 with d = u. If the bound is used for estimating f(Γ,q), then the failed rectangles in step 8 (Λ′(β) < u) must also be added to the set 𝒰. This is necessary because the minimum cuts corresponding to the states of such rectangles have not been determined. Assume that at the end the rectangles in 𝒲 ∪ 𝒰 are numbered as U_1, . . . , U_M, and observe that these sets do not contain the failed states v obtained in step 2 (Λ(v) < l) and the operating states v obtained in step 8 (Λ(v) ≥ u). Letting

U = ∪_{m=1}^{M} U_m,

one has

g(q) ≤ g_U(q) = Σ_{m=1}^{M} Pr(U_m).

Note that the sets U_1, . . . , U_M and the bound g_U(q) can be computed in O(γ|𝒱|³ + Mn|ℰ|) time, where γ = γ_1 + γ_2.

We now discuss methods for maintaining the sets 𝒲 and 𝒰 and stopping criteria for procedure DECOMPOSITION. If this procedure is executed in its entirety, then both sets can be maintained as singly linked lists where operations are performed at the top (or bottom). Experience has shown that this approach succeeds in keeping the number of rectangles in either list small. Otherwise, these sets can be implemented as heaps with nodes corresponding to rectangles. If the objective is a quick determination of an upper bound g_U(q) and the network is reliable (q_iz is increasing with z for each i), then using node weights equal to the negatives of the rectangle probabilities and removing the root of a heap seems to work well in practice. The procedure can stop when either the bound is less than a predetermined value or M exceeds a given number. See Alexopoulos [3] for an illustration of these strategies on related problems.

2.3. Sampling

Let

H_im = Σ_{z=α_i[U_m]}^{β_i[U_m]} q_iz,  i = 1, . . . , e,  m = 1, . . . , M,

so that Pr(U_m) = ∏_{i=1}^{e} H_im. The form of the sampling distribution

Q(v,q) = q(v)/g_U(q),  v ∈ U,

allows one to draw a sample in O(|ℰ|) mean time, when tables are precomputed, by using the cutpoint method in Fishman and Moore [13] as follows:

(a) Sample an index m from {1, . . . , M} with probabilities {Pr(U_m)/g_U(q), m = 1, . . . , M}.

(b) For i = 1, . . . , e: sample the capacity level v_i from the p.m.f. {q_iz/H_im, α_i[U_m] ≤ z ≤ β_i[U_m]}.

Note that the mean time required for sampling is not larger than that with crude Monte Carlo and compares favorably with the mean time for sampling with the importance sampling plan in [4].
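Steps (a) and (b) can be sketched as follows. The rectangles, levels, and p.m.f.s below are hypothetical (and the inner loops use linear search rather than precomputed cutpoint tables); the empirical frequency of a state v in U is compared against Q(v,q) = q(v)/g_U(q):

```python
import random

# Two-stage sampling from Q(v,q) = q(v)/g_U(q) on U = U_1 | ... | U_M,
# with hypothetical disjoint rectangles over a 2-arc, 3-level state space.
q = [[0.2, 0.5, 0.3], [0.1, 0.6, 0.3]]
rects = [([0, 0], [0, 2]), ([1, 2], [2, 2])]    # (alpha, beta) per rectangle

def rect_prob(alpha, beta):
    p = 1.0
    for i in range(2):
        p *= sum(q[i][alpha[i]:beta[i] + 1])    # H_im for each arc
    return p

pr = [rect_prob(a, b) for a, b in rects]        # Pr(U_m)
g_U = sum(pr)                                   # g_U(q)

def sample(rng):
    # (a) pick rectangle m with probability Pr(U_m)/g_U(q)
    r, m = rng.random() * g_U, 0
    while r > pr[m]:
        r -= pr[m]; m += 1
    a, b = rects[m]
    # (b) sample each arc level from its p.m.f. truncated to [a_i, b_i]
    v = []
    for i in range(2):
        r = rng.random() * sum(q[i][a[i]:b[i] + 1])
        z = a[i]
        while r > q[i][z]:
            r -= q[i][z]; z += 1
        v.append(z)
    return tuple(v)

rng = random.Random(7)
K = 100_000
counts = {}
for _ in range(K):
    v = sample(rng)
    counts[v] = counts.get(v, 0) + 1
# empirical frequency of v should approach q(v)/g_U(q) for v in U
print(counts[(0, 1)] / K, q[0][0] * q[1][1] / g_U)
```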

3. ESTIMATION AT A SET OF POINTS

Since the sampling plan of Section 2.3 provides estimates at the single point q, applying it to estimate the functions {g(q), q ∈ 𝒬}, {f(Γ,q), q ∈ 𝒬}, and {h(Γ,q), q ∈ 𝒬} requires, in principle, |𝒬| experiments. We now describe an approach that allows us to estimate all these functions with substantially less effort.

Let p be a point, not necessarily in 𝒬. Define the importance function

R(v,q,p) = q(v)/p(v) = ∏_{i=1}^{e} q_iv_i/p_iv_i,  v ∈ Ω.

The following lemma generalizes Lemma 1 in [12] to the case of multistate capacities and is essential to the estimation of g, f, and h using samples from the distribution Q(v,p). The Appendix contains the proof.

Lemma 1. Suppose that B has the distribution Q(v,p) in (8) with q replaced by p. Then,


where

and the p.m.f. q* is defined by

and (28)

and (29)

Observe that the importance function R(B,q,p) makes θ_a and θ_b unbiased estimates of g(q) and η_a and η_b unbiased estimates of f(Γ,q). However, the variances of these estimates appear to be more complicated than the variances of the estimates ĝ_1(q) = g_U(q)φ(B^(1)) and f̂_1(Γ,q) = g_U(q)φ(B^(1))ψ(B^(1),Γ) that one would obtain if a single sample B^(1) were drawn from the distribution Q(v,q). Indeed,

and

indicating that var θ_j(B,q,p) < var ĝ_1(q) for some j = a,b, provided that the corresponding expression in braces is negative.
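The role of the likelihood ratio can be illustrated with a stripped-down sketch: samples are drawn at one point p and reweighted by ∏_i q_iv_i/p_iv_i to produce unbiased estimates of g(q) at several other points simultaneously. The two-arc parallel network and all p.m.f.s below are hypothetical, and for clarity the sketch samples from p(v) directly rather than from the conditional distribution Q(v,p) used in the paper:

```python
import itertools
import random

# Reweighting samples drawn at one point p to estimate g(q) = Pr(l <= Lambda < u)
# at several points q via the likelihood ratio prod_i q_i,v_i / p_i,v_i.
# Hypothetical 2-arc parallel network: Lambda(v) = b_0 + b_1.
levels = [[0, 1, 2], [0, 2, 4]]
l, u = 2, 5

def phi(v):
    return 1.0 if l <= levels[0][v[0]] + levels[1][v[1]] < u else 0.0

def exact_g(q):
    # exact value by enumerating the 3^2 capacity states
    return sum(phi(v) * q[0][v[0]] * q[1][v[1]]
               for v in itertools.product(range(3), repeat=2))

p = [[0.3, 0.4, 0.3], [0.3, 0.4, 0.3]]           # sampling point
qs = [[[0.5, 0.3, 0.2], [0.2, 0.5, 0.3]],        # points of interest
      [[0.1, 0.3, 0.6], [0.4, 0.4, 0.2]]]

def draw(pmf, rng):
    r, z = rng.random(), 0
    while r > pmf[z]:
        r -= pmf[z]; z += 1
    return z

rng = random.Random(3)
K = 200_000
acc = [0.0] * len(qs)
for _ in range(K):
    v = (draw(p[0], rng), draw(p[1], rng))
    if phi(v):                                   # accumulate phi(v) * R(v,q,p)
        for j, qq in enumerate(qs):
            acc[j] += qq[0][v[0]] * qq[1][v[1]] / (p[0][v[0]] * p[1][v[1]])
est = [a / K for a in acc]
print(est, [exact_g(qq) for qq in qs])
```

One run of K replications thus yields estimates at every point in 𝒬, which is exactly the economy the multiple-point plan exploits.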

Now let B^(1), . . . , B^(K) be K independent samples from Q(v,p). Then,

ḡ_jK(q,p) = (1/K) Σ_{k=1}^{K} θ_j(B^(k),q,p)

and   (30)

f̄_jK(Γ,q,p) = (1/K) Σ_{k=1}^{K} η_j(B^(k),Γ,q,p),  j = a,b,


are pairs of unbiased estimates of g(q) and f(Γ,q), respectively, with variances

Estimating the function {h(Γ,q), q ∈ 𝒬} is a more complex problem. Observe that, if the estimate ĥ_K(Γ,q) given in (11) is positive, then it is the ratio of a binomial random variable to a truncated binomial random variable, and then its expectation and variance have the relatively simple forms in (12). Since θ_j(B,q,p) and η_j(B,Γ,q,p) for j = a,b are not Bernoulli random variables when q ≠ p, these results do not apply. Combining the estimates in (30), one has four potential estimates of h(Γ,q) for each q ∈ 𝒬, namely,

h_ijK(Γ,q,p) = f̄_iK(Γ,q,p)/ḡ_jK(q,p) if ḡ_jK(q,p) > 0 and 0 otherwise,   (32)

for i, j = a,b. The next theorem states results related to the expectations and the variances of these ratio estimates.

Theorem 2. Suppose that h(Γ,q) > 0. For j = a,b, define the event A_K^(j) = {|ḡ_jK(q,p) − g(q)| ≤ g(q)}. Then,

where

and

Equations (33) follow from ([10], pp. 55-59). The proofs of (34) follow by taking expectations. Since g(q) and σ(θ_j,q,p) are finite, Chebyshev's inequality gives Pr[A_K^(j)] ≥ 1 − σ(θ_j,q,p)/[g²(q)K], so that lim_{K→∞} Pr[A_K^(j)] = 1 for j = a,b, and then the first two moments of h_ijK(Γ,q,p) are virtually unconditional for K sufficiently large. Hereafter, these moments will be denoted unconditionally.

Procedure SENSITIVITY in the Appendix describes the accumulation of data while sampling from Q(v,p) and the computation of the estimates ḡ_jK(q,p), f̄_jK(Γ,q,p), and h_ijK(Γ,q,p) for i,j = a,b, as well as estimates of their variances. For convenience, this procedure, along with Section 4, assumes that a single sampling point is used. The case of multiple sampling points is discussed in Section 5. Note that estimates for h(𝒞,l,u,q), 𝒞 ∈ Γ, can be derived very easily.

Little can be said a priori about the dominance of one class of estimates for a single function over another. For example, consider the function g. The presence of the unknown probabilities g(q) and g(q*) in the expressions for var ḡ_aK(q,p) and var ḡ_bK(q,p) limits the determination of the smallest of these variances. Plots of V[ḡ_aK(q,p)] and V[ḡ_bK(q,p)] vs. q on the same graph serve for visual comparison. Another criterion relies on the average performance of these variance estimates. According to this criterion, the class j with the smallest Σ_{q∈𝒬} V[ḡ_jK(q,p)] prevails. Similar analyses can be performed for the estimates of f and h.

Measuring the effects that the newly proposed plan has in reducing the variances of the estimates ḡ_jK(q,p), f̄_jK(Γ,q,p), and h_ijK(Γ,q,p) is the subject of the next section. In the expressions above, σ(η_i,θ_j,q,p) = cov[η_i(B,Γ,q,p), θ_j(B,q,p)].


4. MEASURES OF PERFORMANCE

Hereafter, every experiment using a sampling plan from Section 2 will be called a single-point experiment (SPE). Similarly, every experiment using the sampling plan in Section 3 will be called a multiple-point experiment (MPE). For q ∈ 𝒬, let T(q) denote the mean time an SPE requires per replication when samples are drawn from Q(v,q). If the method for sampling in [13] is used,

T(q) ≤ a_1(q) + a_2(q)|ℰ|,

where the a_i(q) are machine-dependent constants. Also, let T(𝒬,p) denote the mean time an MPE requires per replication when samples are drawn from Q(v,p). Once the sets X_i(z,q,p) in procedure SENSITIVITY are computed,

where the Δ_i(q,p) are machine-dependent constants. Let {μ(q), q ∈ 𝒬} denote a function to be estimated.

For each q, an MPE produces several estimates of μ(q) with incidental cost additional to the cost of sampling and computing φ(v), ψ(v,Γ), and R(v,q,p). To judge the performance of the MPE, one can then concentrate on the estimate μ̂_K(q,p) of μ(q) with the smallest variance. The quantity

where μ̂_K(q) is the estimator produced by the SPE with sampling distribution Q(v,q), denotes the number of SPEs that the MPE replaces to produce estimates of μ(q) with the same variances on the average when all experiments run for the same time.

An alternative approach for judging the performance of the new sampling plan compares it with |𝒬| crude SPEs. Let μ̄_K(q) denote the estimate produced by the crude SPE with sampling distribution q(v), and let T_c(q) denote the mean time this experiment requires per replication. The quantity

denotes the number of crude SPEs the MPE replaces to produce estimates of μ(q) with the same variances on the average when all experiments run for the same time.

5. CHOOSING SAMPLING POINTS

The forms of the variances in Theorem 1, (31), and (33) clearly indicate that the choice of the sampling point p affects the efficiency of the estimates ḡ_jK(q,p), f̄_jK(Γ,q,p), and h_ijK(Γ,q,p). Alexopoulos [1] described three alternative procedures for choosing p, based on a priori computed upper bounds on var θ_j(B,q,p), on the coefficients of variation [var θ_j(B,q,p)]^{1/2}/[1 − g(q)], and on lower bounds on the variance reduction ratios var ĥ_K(Γ,q)/var h_aaK(Γ,q,p). These procedures assume that sampling from a single point suffices to estimate the measures of interest for each q ∈ 𝒬. However, choosing two or more sampling points can be an attractive alternative when either 𝒬 contains many points or there are many arcs with large max_k[max_{q∈𝒬} q_ik − min_{q∈𝒬} q_ik]. The proposed approach differs from the procedures in [1] in that it uses the outputs of pilot MPEs to select sampling points that optimize performance measures of the estimates of g, f, and h.

For practical purposes, we focus on a finite set 𝒴 of candidate sampling points such that 𝒬 ⊆ 𝒴. For the remainder of this section, assume that the points in 𝒴 are numbered as p_1, . . . , p_I and the points in 𝒬 are numbered as q_1, . . . , q_J. Let

y_i = 1 if p_i is used as a sampling point and 0 otherwise,

x_ij = 1 if g(q_j) is estimated by ḡ_aK(q_j,p_i) or ḡ_bK(q_j,p_i) and 0 otherwise,   (37)

t_ij = mean time for computing R(B,q_j,p_i), once the sets X_k(z,q_j,p_i) in procedure SENSITIVITY are known, when q_j ≠ p_i, and t_ij = 0 when q_j = p_i.

Note that x_ij ≤ y_i for all i and j. For given y_i and x_ij, the quantity


estimates the number of MPEs that replace J SPEs to estimate the g(q_j) with the same overall accuracy when all experiments run for the same time.

Let

Then, a practical choice of sampling points is given by an optimal solution to the uncapacitated facility location problem (UFLP):

min Σ_{i=1}^{I} Σ_{j=1}^{J} c_ij x_ij   (38a)

subject to

Σ_{i=1}^{I} x_ij = 1  ∀ j

x_ij ≤ y_i  ∀ i and j   (38b)

y_i ∈ {0,1},  x_ij ∈ {0,1}  ∀ i, j.

The unknown coefficients c_ij can be estimated as follows: For every point p_i ∈ 𝒴, run a sensitivity experiment which draws K_0 samples from Q(v,p_i) and produces estimates T̂(p_i) and T̂(𝒬,p_i) of the times T(p_i) and T(𝒬,p_i), respectively, and of the variances in the expression for c_ij for j = 1, . . . , J. The times T(q_j,p_i) are estimated by [T̂(𝒬,p_i) − T̂(p_i)]/J. All experiments have the same initial conditions (seeds), and the sample size is chosen so that the total number of samples IK_0 is relatively small compared to K. A reasonable choice is K_0 = 0.1K/I.

UFLP is an NP-hard problem (see Cornuejols et al. [7], p. 127). However, it can be solved within a reasonable time frame using a primal-dual procedure embedded in a branch-and-bound algorithm (see Korkel [17]). Such algorithms can also quickly produce a good approximate solution. For large J, one then has two practical alternatives: consider a smaller set of candidate sampling points or use an approximate solution. Recall that one solves a single integer linear program to estimate efficiently a large number of measures g(q_j) and f(Γ,q_j), j = 1, ..., J, whose evaluations are hard counting problems.
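For intuition, a tiny instance of this selection problem can be solved exactly by brute force over the sets of open sampling points. The sketch below is illustrative only: the opening costs and the assignment-cost matrix are made-up numbers, not quantities estimated from pilot runs as prescribed above, and the enumeration is practical only for small I.

```python
from itertools import combinations

def solve_uflp(open_cost, c):
    """Brute-force UFLP: choose a nonempty set of open points (rows of c)
    minimizing total opening cost plus the cheapest assignment per column."""
    I, J = len(c), len(c[0])
    best = (float("inf"), None)
    for r in range(1, I + 1):
        for S in combinations(range(I), r):
            cost = sum(open_cost[i] for i in S)                 # y_i terms
            cost += sum(min(c[i][j] for i in S) for j in range(J))  # x_ij terms
            if cost < best[0]:
                best = (cost, S)
    return best

# Hypothetical data: 3 candidate sampling points, 5 target points q_j.
open_cost = [4.0, 3.0, 6.0]
c = [[1, 2, 6, 7, 8],
     [5, 4, 1, 2, 6],
     [8, 7, 5, 1, 1]]
cost, opened = solve_uflp(open_cost, c)
```

For this made-up instance, opening the first two candidate points is optimal; a branch-and-bound code such as the one in [17] would be needed once I and J grow.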

Similar analyses can be conducted for choosing sampling points that optimize the performance of the estimates of f(Γ,q_j) and h(Γ,q_j). If the objective is the estimation of h(𝒞,q) for all cuts 𝒞 ∈ Γ, performing such an analysis for each 𝒞 should be avoided.

Note that the choice of sampling points that mini- mizes the objective

where O,,(B,qj ,p , ) are defined in Theorem 1, optimizes the variance reduction measure 6 in (35) for p = g when it applies to the case of multiple sampling points. The complicated quadratic form of this object is the reason for adopting the simpler objective in (38a).

6. CONFIDENCE INTERVALS

This section applies results in [11] to describe confidence intervals for g(q) and f(Γ,q), and results in [2] to describe confidence intervals for h(Γ,q). The resulting intervals hold for every sample size K. For the remainder of the section, p denotes a sampling point and q denotes a fixed point in 𝒬. Define the function

w(y,z) = z log(y/z) + (1 − z) log[(1 − y)/(1 − z)],   0 < y, z < 1.   (39)

The next two propositions describe confidence intervals for g(q).

Proposition 1. Let R_a = max{R(v,q,p): v ∈ U and φ(v) = 1} and Y_aK = ĝ_aK(q,p)/[g_u(p)R_a]. Let g_1(q,α/2) and g_2(q,α/2) be defined by (40), where 0 < y_1 < Y_aK < y_2 < 1 are the solutions to w(y, Y_aK) = log(α/2)/K. Then, the interval (g_1(q,α/2), g_2(q,α/2)) covers g(q) with probability greater than 1 − α. ∎

Proposition 2. Let R_b and Y_bK be defined analogously from ĝ_bK(q,p), and let g_3(q,α/2) and g_4(q,α/2) be defined by (41), where 0 < y_3 < Y_bK < y_4 < 1 are the solutions to w(y, Y_bK) = log(α/2)/K. Then, the interval (g_3(q,α/2), g_4(q,α/2)) covers g(q) with probability greater than 1 − α. ∎

The proofs of these propositions follow from Theorem 1 in [11] and from the fact that Pr[0 ≤ Y_jK ≤ 1] = 1 for j = a, b.

One can now compute confidence intervals for f(Γ,q) by replacing ĝ_jK(q,p) with f̂_jK(Γ,q,p) for j = a, b, R_a by R'_a = max{R(v,q,p): v ∈ U and φ(v)ψ(v,Γ) = 1}, and R_b by R'_b = max{R(v,q,p): v ∈ U and φ(v)ψ(v,Γ) = 0}.
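Computationally, the intervals in Propositions 1 and 2 only require the two roots of w(y, ·) = log(α/2)/K on either side of the sample statistic. Since w(·, z) vanishes at y = z, increases on (0, z), and decreases on (z, 1), each root can be found by bisection. A sketch (the function names are ours, not the paper's):

```python
import math

def w(y, z):
    # The function in (39); defined for 0 < y, z < 1.
    return z * math.log(y / z) + (1 - z) * math.log((1 - y) / (1 - z))

def fishman_interval(zbar, K, alpha):
    """Roots y1 < zbar < y2 of w(y, zbar) = log(alpha/2)/K, by bisection.
    w(., zbar) rises from -inf to 0 on (0, zbar) and falls back on (zbar, 1)."""
    target = math.log(alpha / 2) / K  # negative

    def bisect(lo, hi, increasing):
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if (w(mid, zbar) < target) == increasing:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    y1 = bisect(1e-12, zbar, True)        # left root: w rises through target
    y2 = bisect(zbar, 1.0 - 1e-12, False) # right root: w falls through target
    return y1, y2
```

For example, with sample mean 0.5, K = 100, and α = .05, the two roots are symmetric about 0.5 and satisfy w(y, 0.5) = log(.025)/100 exactly.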

Proposition 3 applies Theorem 3 in [2] and uses samples from Q(v,p) to describe a confidence interval for h(Γ,q). This reference describes an algorithm for computing the interval.

Proposition 3. Define the function F(y,λ,r,ε) as in [2] for 0 < y ≤ 1, λ ≥ 0, 0 < r < 1, and ε > 0, and let F(y,λ',r,ε) = min_{λ≥0} F(y,λ,r,ε). Write the statistics S_1 and S_2 in terms of B^(1), ..., B^(K), independent samples from Q(v,p). Let g_0(q,α/2) = min{g_2(q,α/2), g_4(q,α/2)}, where g_2(q,α/2) and g_4(q,α/2) are defined in (40) and (41). Let r_1 be the solution to

    F(g_0(q,α/2)/g_u(q), λ', r, (S_1 − rS_2)/K) = (α/4)^{1/K},   0 < r < S_1/S_2,

for the case in which S_2 > 0, and let r_2 be the solution to the analogous equation for the case in which S_1 < S_2. Then, the random variables defined in (43) cover h(Γ,q) with probability at least 1 − α. ∎

The computations of the quantities R_a, R'_a, R_b, and R'_b are important issues. Alexopoulos [1] proved that these problems are NP-hard when q ≠ p and suggested mixed integer programs for solving them. The decomposition method proposed in Section 2.2 yields an easily computed upper bound R_u on these quantities. Replacement of R_i, R'_i, i = a, b, by R_u will result in wider confidence intervals, but the time O(|𝒬| Σ_{i=1}^{e} log₂ n_i) to compute this bound, once the rectangles U_1, ..., U_M are computed, is polynomial.

Although each of the confidence intervals in this section covers the corresponding measure with probability at least 1 − α, the joint confidence intervals for the functions {g(q), q ∈ 𝒬}, {f(Γ,q), q ∈ 𝒬}, and {h(Γ,q), q ∈ 𝒬} cover them only with probability at least 1 − |𝒬|α. This result follows from a Bonferroni inequality (see Miller [18], p. 8). To obtain a joint confidence level > 1 − α, one can replace the individual confidence level α by α/|𝒬|. However, this replacement increases the widths of the resulting confidence intervals. Using asymptotic properties in [2], one can show that the approximate width of the confidence interval (g_1(q,α/2), g_2(q,α/2)) for g(q) is multiplied by {log[2(|𝒬| − 1)/α]}^{1/2}, which equals 2.87 for α = .01 and |𝒬| = 20, 3.15 for α = .01 and |𝒬| = 100, and 3.49 for α = .01 and |𝒬| = 1000.
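The quoted width multipliers can be checked directly from the formula (natural logarithm assumed, which reproduces the figures above):

```python
import math

def joint_width_multiplier(n_points, alpha):
    # {log[2(|Q| - 1)/alpha]}^(1/2): the factor by which an individual
    # interval's width grows when alpha is replaced by alpha/|Q| to reach
    # a joint confidence level exceeding 1 - alpha.
    return math.sqrt(math.log(2 * (n_points - 1) / alpha))

for n in (20, 100, 1000):
    print(n, round(joint_width_multiplier(n, 0.01), 2))
```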

7. AN EXAMPLE

We now illustrate the practical case in the introductory section with a numerical example. Consider the flow network in Figure 1 with two-state capacities, b_{i1} = 0 for all i ∈ ℰ, s = 1, and t = 10. The notation i/k on an arc indicates arc i with upper capacity level b_{i2} = k. Let q_i = Pr(B_i = b_{i2}) denote the reliability of arc i for i = 1, ..., 25. Note that the maximum flow value with all arc capacities at their upper levels is 52. We consider this network for the following two reasons: (1) We illustrate the efficiency of the SPE in Section 2 by comparing it with the experiment in [4]. (2) The examples in [3] demonstrate the efficiency of state-space decomposition methods in networks with multistate capacities. We then focus on networks with two-state capacities, where the issue of sensitivity analysis is conceptually easy.

Fig. 1.

Alexopoulos [1] studied the behavior of g, f, and h in response to varying common arc reliability q_i ∈ {.50 + .10j, j = 0, 1, ..., 4} for several maximum flow intervals and cuts. The results from the SPEs are also listed in [4]. Upper bounds g_u were computed using properties of flows through arc-disjoint paths and capacities of arc-disjoint cuts. For each interval and each arc reliability, K = 2^16 = 65,536 samples were drawn from a conditional distribution.

The results showed that, when 43 ≤ Λ < 47 (l = 43, u = 47) and q = .70, from the cuts under consideration either cut 𝒞_1 = {5,6,9,10,12} or cut 𝒞_2 = {1,8,9,10,12} is minimum, with ĥ_K(𝒞_1,.70) = .5243 and ĥ_K(𝒞_2,.70) = .1938. Suppose that it is required to find a reliability vector q for which h(𝒞_2,q) < .10 and h(𝒞_1,q) > h(𝒞_2,.70). We attempt to solve this problem by varying the common reliability q for arcs 1, 8, and 12 within the set 𝒬 = {.50 + .02j, j = 0, ..., 20} while leaving the reliabilities of the remaining arcs fixed at .70. This setting allows us to replace the vector q by the scalar q.

Table I lists the upper bounds g_u(q) in column 2 and the estimates ĝ_K(q) in column 3 resulting from the 21 single-point experiments. The bounds were computed as in (19), where the rectangles U_m resulted from the procedure in Section 2.2 with three rounds of iterations through steps 2-5 and three rounds of iterations through steps 7-11. In each round, all the existing rectangles in Q and W were decomposed. The procedure was designed to stop when either all upper bounds g_u(q) ≤ .05 or the total number of rectangles in Q and W exceeded 30. These iterations resulted in 33 disjoint rectangles and took less than 1 s. The programs were written in FORTRAN 77 and the experiments were run on a SUN 386i/250 workstation. For this special case of two-state capacities, we can store each rectangle U_m by recording only the set of "on" arcs U_m^+ = {i ∈ ℰ: α_i[U_m] = 2} and the set of "off" arcs U_m^− = {i ∈ ℰ: β_i[U_m] = 1}. This arrangement reduces the space required for storing the rectangles. Also, note that only the capacities of those arcs not in the sets U_m^− and U_m^+ need to be sampled, thus reducing the computation time for each experiment.

The decomposition method proposed in Section 2.2 resulted in tighter bounds than did the approach in [4], and the sampling plan in Section 2.3 produced estimates of g(q) and h(𝒞,q) with considerably smaller variances. On average, the bounds were tighter by a factor of 3.27, whereas the estimates of var ĝ_K(q) and var ĥ_K(𝒞,q) were smaller by factors of 16.43 and 3.4, respectively.

Column 5 lists estimates of the variance ratios var ḡ_K(q)/var ĝ_K(q), where V[ḡ_K(q)] were computed from crude Monte Carlo experiments. Since the network is planar, the algorithm in [16] was used for the computation of a maximum flow in each replication. On average, the experiment in Section 2.3 took 369.9 s, whereas the crude Monte Carlo experiment took 425.6 s, indicating the time savings that the former sampling plan implies. For comparison with previous work, the sampling plan in [4] took twice as much time per replication as did crude Monte Carlo sampling. As a result, the SPE in Section 2.3 is roughly 2 × (425.6/369.9) = 2.3 times faster than the experiment in [4] for the same number of replications. The quantity {V[ḡ_K(q)]/V[ĝ_K(q)]} × (425.6/369.9) measures the overall benefit that derives from the sampling plan based on bounds. For example, for q = .70, crude Monte Carlo sampling would have required 298 observations for each observation using importance sampling.

TABLE I. Estimates of g(q) for varying reliability q of arcs 1, 8, and 12; reliability of each remaining arc = .70 and K = 65,536; T_c(q)/T(q) = 1.15

  q     g_u(q)   ĝ_K(q)^a   V[ĝ_K(q)]^b   var ḡ_K(q)/var ĝ_K(q)^c
 .50    .0144    .0124      .372D-9       486.56
 .52    .0154    .0133      .426D-9       453.05
 .54    .0164    .0142      .482D-9       429.46
 .56    .0174    .0150      .549D-9       395.26
 .58    .0185    .0159      .619D-9       366.72
 .60    .0195    .0168      .695D-9       345.32
 .62    .0206    .0177      .778D-9       320.05
 .64    .0217    .0187      .860D-9       302.33
 .66    .0228    .0196      .948D-9       292.19
 .68    .0239    .0205      .105D-8       274.29
 .70    .0250    .0214      .116D-8       258.62
 .72    .0261    .0224      .126D-8       249.21
 .74    .0272    .0233      .139D-8       235.97
 .76    .0283    .0242      .152D-8       224.34
 .78    .0294    .0251      .163D-8       216.56
 .80    .0305    .0260      .178D-8       208.43
 .82    .0316    .0269      .193D-8       196.89
 .84    .0327    .0277      .210D-8       187.14
 .86    .0338    .0287      .224D-8       179.91
 .88    .0349    .0296      .238D-8       173.95
 .90    .0359    .0304      .256D-8       167.58

a Estimates are computed from single-point experiments.
b Estimate of the variance in (10).
c V[ḡ_K(q)] are computed from crude Monte Carlo experiments.

Fig. 2. Estimates of var ĝ_K(q)/var ĝ_jK(q,p), j = a, b; K = 65,536; sampling with reliability .74 for arcs 1, 8, and 12 and .70 for the remaining arcs.

To test the performance of the sensitivity analysis method, the procedure in Section 5 with set of sampling point candidates 𝒫 = 𝒬 and sample size K_0 = 512 for every pilot run suggested a single MPE in which the capacities of arcs 1, 8, and 12 are each sampled with reliability p = .74 and each of the remaining capacities is sampled with reliability .70. The sets X_i(z,q,p) in procedure SENSITIVITY are equal to 𝒬 for i = 1, 8, and 12 and z = 1, 2, while they are empty for the remaining arcs. The experiment with K = 65,536 replications took 561.7 s to estimate the functions g, f, and h for all points q ∈ 𝒬.
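The core of the MPE is the likelihood-ratio reweighting performed in procedure SENSITIVITY: each replication is drawn once at the sampling point p, and its contribution to the estimate at any q ∈ 𝒬 is weighted by a product of q/p ratios over the arcs whose marginals differ. A minimal two-state sketch (the network indicator and all numbers here are hypothetical, and this omits the paper's rectangles, bounds, and conditional sampling):

```python
import random

def mpe_estimates(p, q_list, indicator, K, seed=1):
    """One sampling point p (per-arc 'up' probabilities); for each q in
    q_list, estimate E_q[indicator(B)] from a single sample stream drawn at p."""
    rng = random.Random(seed)
    sums = [0.0] * len(q_list)
    for _ in range(K):
        up = [rng.random() < pi for pi in p]  # arc states sampled at p
        if not indicator(up):
            continue
        for j, q in enumerate(q_list):
            w = 1.0
            for qi, pi, u in zip(q, p, up):
                if qi != pi:  # only arcs whose marginals differ contribute
                    w *= (qi / pi) if u else ((1 - qi) / (1 - pi))
            sums[j] += w      # likelihood-ratio weight, as in T(q) updates
    return [s / K for s in sums]

# Toy 2-arc series "network": the event occurs only if both arcs are up,
# so E_q[indicator] = q1 * q2 exactly.
p = [0.74, 0.74]
qs = [[0.5, 0.5], [0.74, 0.74], [0.9, 0.9]]
est = mpe_estimates(p, qs, indicator=lambda up: all(up), K=200_000)
```

Each entry of `est` is an unbiased estimate of the corresponding q1·q2 (0.25, 0.5476, 0.81), obtained from one experiment at p = .74.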

Figure 2 graphs the estimates of var ĝ_K(q)/var ĝ_jK(q,p) for j = a, b and suggests that the estimates ĝ_bK(q,p) clearly dominate ĝ_aK(q,p) and are about as accurate as the estimates ĝ_K(q) produced by the 21 SPEs. Table II lists the estimates ĥ_K(𝒞,q) computed from the SPEs in column 2, the most accurate of the estimates ĥ_ijK(𝒞,q,p) in column 3 [the suffix ij after each number indicates estimate ĥ_ijK(𝒞,q,p)], and estimates of var ĥ_K(𝒞,q) in column 4, along with individual 99% confidence intervals in columns 5 and 6 described by Proposition 3. The data suggest that h(𝒞_1,q) is increasing in q ∈ [.50,.90], that h(𝒞_2,q) is decreasing in q ∈ [.50,.90], and that, with high probability, h(𝒞_2,q) < .10 for q ≥ .88 and h(𝒞_1,q) > h(𝒞_2,.70). As a result, our goal can be achieved by setting the reliabilities of arcs 1, 8, and 12 to .88.

Figure 3 graphs the estimates of var ĥ_K(𝒞,q)/var ĥ_ijK(𝒞,q,p). For either cut, no estimate clearly dominates at each q. For each 𝒞 ∈ Γ, the estimates with the smallest sum of variance estimates ρ_ij(𝒞) = Σ_{q∈𝒬} V[ĥ_ijK(𝒞,q,p)] give the best average performance. Based on this criterion, the aa-estimates dominate for 𝒞 = 𝒞_1, since ρ_aa(𝒞_1) = 1.0 × 10⁻⁴, ρ_ab(𝒞_1) = 1.07 × 10⁻⁴, ρ_ba(𝒞_1) = 1.51 × 10⁻⁴, and ρ_bb(𝒞_1) = 1.16 × 10⁻⁴. Similarly, the aa-estimates dominate for cut 𝒞_2. Note that the estimates ĥ_aaK(𝒞_2,q,p) and ĥ_abK(𝒞_2,q,p) are considerably more accurate than are the estimates ĥ_K(𝒞_2,q) when q > p, demonstrating the potential of an MPE for producing more accurate estimates than SPEs at several points. The measure δ(μ,p) in (35) for μ(q) = h(𝒞,q) is estimated by 13.8 for 𝒞 = 𝒞_1 and by 13.3 for 𝒞 = 𝒞_2, indicating that the MPE replaces 14 SPEs to produce estimates of h(𝒞_1,q) and h(𝒞_2,q) with the same variances on the average when all experiments are run for the same time.

Fig. 3. Estimates of var ĥ_K(𝒞,q)/var ĥ_ijK(𝒞,q,p) for 𝒞_1 = {5,6,9,10,12} and 𝒞_2 = {1,8,9,10,12}; K = 65,536; sampling with reliability .74 for arcs 1, 8, and 12 and .70 for the remaining arcs.

In addition to the aforementioned experiments, we ran another MPE with sampling point p = .52, suggested by the procedure in Section 5 with optimization of var ĥ_K(𝒞_2,q,p) as the objective. The experiment produced estimates of h(𝒞_2,q) with smaller variances on the average, but the estimates of g(q) and h(𝒞_1,q) had larger variances on the average than did the former experiment with p = .74, demonstrating the value of the approach in Section 5.

We also ran several other sensitivity analysis experiments for the same capacity state space and different configurations for flow intervals, cuts, and sets 𝒬. As expected, multiple sampling points were needed for some cases in which the number of arcs i for which max_{q∈𝒬} q_i − min_{q∈𝒬} q_i ≥ .2 was large. For example, the estimation of g, f, and h with respect to common reliability q ∈ {.50 + .01j, j = 0, 1, ..., 40} for all arcs required only two sampling points. In all cases, the sensitivity analysis method produced accurate estimates, demonstrating the effect of combining the information from the state-space decomposition in Section 2.2 with importance sampling.

TABLE II. Estimates of h(𝒞,q) for varying reliability q of arcs 1, 8, and 12; reliability of each remaining arc = .70 and K = 65,536; T(𝒬,p)/T(q) = 1.31

𝒞 = 𝒞_1 = {5,6,9,10,12}:

  q     ĥ_K(𝒞,q)^a   ĥ_ijK(𝒞,q,p)^b   V[ĥ_K(𝒞,q)]   Lower^c   Upper^c
 .50    .4761        .4793aa          .441D-5       .4670     .4886
 .52    .4822        .4848aa          .441D-5       .4755     .4940
 .54    .4900        .4925ab          .441D-5       .4813     .4994
 .56    .4942        .4981ab          .442D-5       .4871     .5050
 .58    .5023        .5038ab          .443D-5       .4932     .5108
 .60    .5077        .5097ab          .443D-5       .4994     .5167
 .62    .5150        .5157ab          .443D-5       .5058     .5228
 .64    .5223        .5219ab          .443D-5       .5124     .5290
 .66    .5313        .5283ab          .441D-5       .5191     .5354
 .68    .5347        .5348ab          .442D-5       .5261     .5420
 .70    .5408        .5415ab          .441D-5       .5333     .5487
 .72    .5505        .5484ab          .440D-5       .5406     .5557
 .74    .5555        .5555aa          .440D-5       .5482     .5628
 .76    .5688        .5637ba          .438D-5       .5557     .5706
 .78    .5742        .5720ba          .436D-5       .5634     .5785
 .80    .5801        .5797bb          .436D-5       .5713     .5868
 .82    .5895        .5883bb          .434D-5       .5795     .5952
 .84    .5982        .5971bb          .432D-5       .5880     .6040
 .86    .6076        .6062bb          .429D-5       .5968     .6130
 .88    .6185        .6156bb          .424D-5       .6058     .6223
 .90    .6295        .6253bb          .420D-5       .6152     .6319

𝒞 = 𝒞_2 = {1,8,9,10,12}:

  q     ĥ_K(𝒞,q)^a   ĥ_ijK(𝒞,q,p)^b   V[ĥ_K(𝒞,q)]   Lower^c   Upper^c
 .50    .2725        .2721aa          .350D-5       .2638     .2805
 .52    .2649        .2646aa          .344D-5       .2565     .2727
 .54    .2562        .2568ab          .336D-5       .2490     .2648
 .56    .2494        .2459bb          .331D-5       .2412     .2567
 .58    .2406        .2380bb          .323D-5       .2333     .2483
 .60    .2326        .2299bb          .316D-5       .2251     .2397
 .62    .2247        .2216bb          .309D-5       .2168     .2309
 .64    .2162        .2131bb          .300D-5       .2081     .2218
 .66    .2046        .2039ba          .288D-5       .1993     .2125
 .68    .1963        .1951ba          .280D-5       .1902     .2029
 .70    .1871        .1859ba          .270D-5       .1809     .1930
 .72    .1753        .1765ba          .257D-5       .1713     .1828
 .74    .1668        .1668aa          .248D-5       .1614     .1723
 .76    .1540        .1562ab          .233D-5       .1509     .1618
 .78    .1439        .1453ab          .220D-5       .1401     .1509
 .80    .1344        .1341ab          .208D-5       .1291     .1397
 .82    .1218        .1225ab          .192D-5       .1176     .1281
 .84    .1098        .1106ab          .176D-5       .1059     .1161
 .86    .0997        .0983ab          .161D-5       .0938     .1037
 .88    .0861        .0856ab          .141D-5       .0813     .0908
 .90    .0730        .0725ab          .122D-5       .0684     .0774

a Estimates are computed from single-point experiments.
b Estimate with smallest estimated variance among ĥ_ijK(𝒞,q,p) produced from a multipoint experiment with p = .74.
c 99% confidence intervals computed as in Proposition 3.

We would like to thank two anonymous referees for suggesting several stylistic changes.

APPENDIX

Proof of Lemma 1

One has the first equality in (22) since q*(v) = ∏_{i=1}^{e} q*_{iv_i} is a p.m.f. on Ω; the proofs of the remaining two equations in (22) and (23) proceed similarly. ∎


Procedure SENSITIVITY

Purpose: To estimate the functions {g(q), q ∈ 𝒬}, {f(Γ,q), q ∈ 𝒬}, and {h(Γ,q), q ∈ 𝒬} for specified flow levels l < u.

Input: Network G = (𝒱, ℰ) with |ℰ| = e; nodes s and t in 𝒱; independent arc capacities B_i; flow values l < u; family of cuts Γ of interest; set 𝒬 of points q; sampling point p; points q*; upper bounds g_u(p) on g(p), and g_u(q), g_u(q*) on g(q), g(q*), respectively, for q ∈ 𝒬; disjoint rectangles U_1, ..., U_M with probabilities π_m, m = 1, ..., M, computed with the p.m.f. p; c(q,p) for q ∈ 𝒬; and number of independent replications K.

Output: {ĝ_jK(q,p), V[ĝ_jK(q,p)]; q ∈ 𝒬} as unbiased estimates of {g(q), var ĝ_jK(q,p); q ∈ 𝒬}; {f̂_jK(Γ,q,p), V[f̂_jK(Γ,q,p)]; q ∈ 𝒬} as unbiased estimates of {f(Γ,q), var f̂_jK(Γ,q,p); q ∈ 𝒬}; and {ĥ_ijK(Γ,q,p), V[ĥ_ijK(Γ,q,p)]; q ∈ 𝒬} as estimates of {h(Γ,q), var ĥ_ijK(Γ,q,p); q ∈ 𝒬} for i, j = a, b; as well as S_i(q), W_i(q) for i = 1, 2, and S(q), W(q) for q ∈ 𝒬 as an available input to continue sampling.

Method:

Initialization:
    For each i ∈ ℰ:
        For each q ∈ 𝒬: for z = 1, ..., n_i: X_i(z,q,p) ← {q ∈ 𝒬: q_iz ≠ p_iz}.
    For each q ∈ 𝒬: S(q) = W(q) ← 0; for r = 1, 2: S_r(q) = W_r(q) ← 0.

On each of K independent replications do:
    Sample B = (B_1, ..., B_e) from Q(v,p) as in Section 2.3.
    Determine a maximum flow and its value Λ(B).
    For each q ∈ 𝒬: T(q) ← 1.
    For each i ∈ ℰ:
        For each q ∈ X_i(B_i,q,p): T(q) ← T(q) q_{iB_i}/p_{iB_i}.
    If l ≤ Λ(B) < u:
        For each q ∈ 𝒬: S(q) ← S(q) + T(q); W(q) ← W(q) + T(q)².
        If Z(𝒞,B) = Λ(B) for some 𝒞 ∈ Γ:
            For each q ∈ 𝒬: S_2(q) ← S_2(q) + T(q); W_2(q) ← W_2(q) + T(q)².
        Otherwise:
            For each q ∈ 𝒬: S_1(q) ← S_1(q) + T(q); W_1(q) ← W_1(q) + T(q)².

Compute summary statistics for each q ∈ 𝒬.
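Purely as a generic illustration of how accumulated sums of weights S(q) and squared weights W(q) yield a point estimate and a variance-of-the-mean estimate, one might compute the standard sample-moment quantities below (an assumption for illustration, not necessarily the paper's exact estimator in (10)):

```python
def summary(S, W, K):
    """Generic summary statistics from S = sum of per-replication weights
    and W = sum of squared weights over K replications: the sample mean
    and an unbiased estimate of the variance of that mean."""
    mean = S / K
    var_of_mean = (W / K - mean * mean) / (K - 1)
    return mean, var_of_mean
```

For example, four replications with weights 1, 1, 0, 0 give S = 2, W = 2, a mean of 0.5, and a variance-of-the-mean estimate of 1/12.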


REFERENCES

[1] C. Alexopoulos, Maximum flows and critical cutsets in stochastic networks with discrete arc capacities. Ph.D. Thesis, Department of Operations Research, University of North Carolina at Chapel Hill (1988).
[2] C. Alexopoulos, Distribution-free confidence intervals for conditional probabilities and ratios of expectations. Technical Report, School of Industrial and Systems Engineering, Georgia Institute of Technology (1990; revised August 1993).
[3] C. Alexopoulos, Computing criticality indices of arcs and the mean maximum flow value in networks with discrete random capacities. Technical Report, School of Industrial and Systems Engineering, Georgia Institute of Technology (1993).
[4] C. Alexopoulos and G. S. Fishman, Characterizing stochastic flow networks using the Monte Carlo method. Networks 21 (1991) 775-798.
[5] M. O. Ball, Computational complexity of network reliability analysis: An overview. IEEE Trans. Reliability 35 (1987) 230-239.
[6] J. V. Bukowski, On the determination of large scale system reliability. IEEE Trans. Syst. Man Cybernetics 12 (1982) 538-548.
[7] G. Cornuejols, G. L. Nemhauser, and L. A. Wolsey, The uncapacitated facility location problem. Discrete Location Theory (P. B. Mirchandani and R. L. Francis, Eds.). Wiley, New York (1990).
[8] P. Doulliez and E. Jamoulle, Transportation networks with random arc capacities. R.A.I.R.O. 3 (1972) 45-60.
[9] J. R. Evans, Maximum flow in probabilistic graphs--the discrete case. Networks 6 (1976) 161-183.
[10] G. S. Fishman, Principles of Discrete Event Simulation. Wiley, New York (1978).
[11] G. S. Fishman, Confidence intervals for the mean in the bounded case. Stat. Probability Lett. 12 (1991) 223-227.
[12] G. S. Fishman, Sensitivity analysis for the system reliability function. Probability Eng. Infor. Sci. 5 (1991) 185-213.
[13] G. S. Fishman and L. R. Moore, Sampling from a discrete distribution while preserving monotonicity. Am. Stat. 38 (1984) 219-223.
[14] H. Frank and I. T. Frisch, Communication, Transmission and Transportation Networks. Addison-Wesley, Reading, MA (1971).
[15] D. Goldfarb and M. D. Grigoriadis, A computational comparison of the Dinic and network simplex methods for maximum flow. Ann. Operations Res. 13 (1988) 83-123.
[16] A. Itai and Y. Shiloach, Maximal flow in planar networks. SIAM J. Comput. 8 (1979) 135-150.
[17] M. Korkel, On the exact solution of large-scale simple plant location problems. Eur. J. Operational Res. 39 (1989) 157-173.
[18] R. Miller, Simultaneous Statistical Inference, 2nd ed. Springer-Verlag, New York (1981).
[19] C. Papadimitriou and K. Steiglitz, Combinatorial Optimization: Algorithms and Complexity. Prentice Hall, Englewood Cliffs, NJ (1982).

Received February 26, 1992 Accepted May 21, 1993