Monte Carlo Methods for Sensitivity Analysis of Poisson-Driven Stochastic Systems, and Applications
Author(s): Charles Bordenave and Giovanni Luca Torrisi
Source: Advances in Applied Probability, Vol. 40, No. 2 (Jun., 2008), pp. 293-320
Published by: Applied Probability Trust
Stable URL: http://www.jstor.org/stable/20443584
Accessed: 12/06/2014 20:17



Adv. Appl. Prob. (SGSA) 40, 293-320 (2008) Printed in Northern Ireland

© Applied Probability Trust 2008

MONTE CARLO METHODS FOR SENSITIVITY ANALYSIS OF POISSON-DRIVEN STOCHASTIC SYSTEMS, AND APPLICATIONS

CHARLES BORDENAVE,* University of California, Berkeley

GIOVANNI LUCA TORRISI,** Istituto per le Applicazioni del Calcolo 'Mauro Picone'

Abstract

We extend a result due to Zazanis (1992) on the analyticity of the expectation of suitable functionals of homogeneous Poisson processes with respect to the intensity of the process. As our main result, we provide Monte Carlo estimators for the derivatives. We apply our results to stochastic models which are of interest in stochastic geometry and insurance.

Keywords: Importance sampling; marked Poisson process; Monte Carlo estimator; sensitivity analysis; stabilizing functional; stopping set

2000 Mathematics Subject Classification: Primary 60D05; 65C05 Secondary 60G48; 60G55

1. Introduction

Let $N$ be an independently marked homogeneous Poisson process (IMHPP) with points in $\mathbb{R}^d$ and marks with distribution $Q$ taking values in some complete separable metric space $\mathbb{M}$. Under the probability measure $P_\lambda$, the intensity of the Poisson point process is $\lambda > 0$. Moreover, let $\varphi(N)$ be a real-valued functional of the process, and let $E_\lambda$ be the expectation under $P_\lambda$. The function $\lambda \mapsto E_\lambda[\varphi(N)]$ is known to be smooth in $\lambda$ under several different assumptions.

Zazanis [34] focused on functionals depending only on the configuration, up to a finite stopping time, of a homogeneous Poisson process on the half-line. For this class of functionals, he proved that the function $\lambda \mapsto E_\lambda[\varphi(N)]$ is analytic under a specific moment condition on the functional and a light-tailed assumption on the stopping time. However, he did not provide an explicit expression for the derivatives.

For one-dimensional IMHPPs, Baccelli et al. [5] provided sufficient conditions for the $m$th-order differentiability of $E_\lambda[\varphi(N)]$ with respect to $\lambda$, in a neighborhood of the origin, and closed-form expressions for the derivatives in terms of multiple integrals. However, their method did not address the question of analyticity, and their set of conditions is different from ours.

A more general framework was considered in [24]. Let $N$ be a (not necessarily homogeneous) Poisson process on a locally compact separable metric space with intensity measure $\Lambda$. Moreover, let $\varphi(N)$ be a suitable functional of the process. Molchanov and Zuyev [24] studied the analyticity of the expectation $E_\Lambda[\varphi(N)]$ with respect to $\Lambda$. In particular, they proved that, under some assumptions on $\varphi$, the function $\Lambda \mapsto E_\Lambda[\varphi(N)]$ is analytic on the cone of positive measures.

Received 2 November 2006; revision received 25 February 2007. * Postal address: Department of Electrical Engineering and Computer Science and Department of Statistics, University of California, 257 Cory Hall, Berkeley, CA 94720-1770, USA. ** Postal address: Istituto per le Applicazioni del Calcolo 'Mauro Picone', Consiglio Nazionale delle Ricerche, Viale del Policlinico 137, 00161 Roma, Italy. Email address: [email protected]


For Poisson processes with a finite intensity measure $\Lambda$, a relevant work is also that of [1], where it was proved that the expectation $E_\Lambda[\varphi(N)]$ is analytic with respect to a perturbation of $\Lambda$ by a semigroup.

In this paper we basically rely on Zazanis' paper [34] for the analyticity of $\lambda \mapsto E_\lambda[\varphi(N)]$, where $N$ is an IMHPP on $\mathbb{R}^d \times \mathbb{M}$. As our main result, we derive explicit formulae for all the derivatives of $\lambda \mapsto E_\lambda[\varphi(N)]$. These formulae provide Monte Carlo methods for the sensitivity analysis of suitable Poisson-driven stochastic systems with respect to the intensity of the process.

There are several motivations for being interested in sensitivity analysis: the main ones are applications to the optimization and control of complex systems occurring, for instance, in stochastic geometry and insurance. Sensitivity analysis was introduced in [16], and has been addressed by many authors (see, for instance, [14] and the references cited therein). There are mainly three ways to handle this problem: infinitesimal perturbation analysis (IPA), the likelihood ratio method (LRM), and rare perturbation analysis (RPA). We refer the reader to [19] and [32] for more insight into the IPA method, and to [30] for more details on the LRM. It is worthwhile to mention the work of Decreusefond [11], where, using Malliavin calculus, it was shown that IPA, RPA, and LRM can all be seen as part of the stochastic calculus of variations. The main achievement of Decreusefond's paper is that his approach can potentially handle discrete-event systems more general than Poisson processes.

As already mentioned, we derive explicit formulae for all the derivatives of $\lambda \mapsto E_\lambda[\varphi(N)]$. For this we use the RPA method. Suppose that we wish to compute the derivative $\mathrm{d}E_\lambda[\varphi(N)]/\mathrm{d}\lambda$. We distinguish two different RPA methods: the virtual and the phantom. The virtual RPA method may be attributed to [29], and has been revisited in [4]. Following the ideas of these papers, we evaluate the limit
\[
\lim_{\Delta\lambda \to 0} \frac{E_{\lambda+\Delta\lambda}[\varphi(N)] - E_\lambda[\varphi(N)]}{\Delta\lambda}.
\]
The key idea is to use the superposition property of IMHPPs to generate an IMHPP of intensity $\lambda + \Delta\lambda$ from a small perturbation of an IMHPP of intensity $\lambda$. By a coupling argument, an IMHPP of intensity $\lambda + \Delta\lambda$ is generated from the superposition of two independent IMHPPs of respective intensities $\lambda$ and $\Delta\lambda$. The phantom RPA method was introduced in [8]. Following the approach in this paper, we compute the limit
\[
\lim_{\Delta\lambda \to 0} \frac{E_\lambda[\varphi(N)] - E_{\lambda-\Delta\lambda}[\varphi(N)]}{\Delta\lambda}.
\]
The idea is to use the thinning property of IMHPPs to generate an IMHPP of intensity $\lambda - \Delta\lambda$: similarly to the previous case, this process is generated from a small perturbation of an IMHPP of intensity $\lambda$ by a coupling argument. We generalize this approach to compute the $n$th-order derivatives $\mathrm{d}^n E_\lambda[\varphi(N)]/\mathrm{d}\lambda^n$.
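As a concrete illustration of the two couplings, the following sketch simulates them on the half-line ($d = 1$, no marks). This is our own toy illustration, not code from the paper; the names `poisson_points`, `lam`, and `dlam` are ours.

```python
import random

def poisson_points(lam, T, rng):
    """Homogeneous Poisson process of intensity lam on [0, T],
    simulated via i.i.d. exponential inter-arrival gaps."""
    points, t = [], 0.0
    while True:
        t += rng.expovariate(lam)
        if t > T:
            return points
        points.append(t)

rng = random.Random(0)
lam, dlam, T, n_rep = 2.0, 0.5, 10.0, 20000

# Virtual RPA coupling: superpose an independent Poisson(dlam) process
# onto a Poisson(lam) process; the union is Poisson(lam + dlam).
up_counts = []
# Phantom RPA coupling: delete each point of a Poisson(lam) process
# independently with probability dlam/lam; the rest is Poisson(lam - dlam).
down_counts = []
for _ in range(n_rep):
    base = poisson_points(lam, T, rng)
    extra = poisson_points(dlam, T, rng)
    up_counts.append(len(base) + len(extra))
    down_counts.append(sum(1 for _ in base if rng.random() >= dlam / lam))

mean_up = sum(up_counts) / n_rep      # close to (lam + dlam) * T = 25
mean_down = sum(down_counts) / n_rep  # close to (lam - dlam) * T = 15
```

The point of both couplings is that the perturbed process differs from the base process only by a small (of order $\Delta\lambda$) set of added or deleted points, which is what makes the differences in the two limits tractable.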

Our results can be applied to suitable functionals of random sets arising in stochastic geometry. Furthermore, by using importance sampling and large deviations techniques, we show that our results can be applied to ruin probabilities of risk processes with Poisson arrivals and delayed or undelayed claims. In the case of classical risk processes (undelayed claims), we provide an asymptotically optimal Monte Carlo estimator for the first-order derivative of the ruin probability.

The paper is organized as follows. In Section 2 we fix the notation and extend Zazanis' result about analyticity of functionals of homogeneous Poisson processes. In Section 3 we state our results about $n$th-order derivatives of functionals of homogeneous Poisson processes. In Section 4 we prove the results given in Section 3. Finally, in Section 5 we apply our results to stochastic models which are of interest in stochastic geometry and insurance.

2. Preliminaries

2.1. Notation

Let $d \geq 1$ be an integer, let $\mathbb{M}$ be a complete separable metric space, and let $\mathcal{N}$ be the space of all counting measures on $\mathbb{R}^d \times \mathbb{M}$, defined on the Borel $\sigma$-field $\mathcal{B}(\mathbb{R}^d) \otimes \mathcal{B}(\mathbb{M})$, such that each measure $\mu \in \mathcal{N}$ is simple and locally finite, that is, $\mu(\{(x,z)\})$ is equal to 0 or 1 for each $(x,z) \in \mathbb{R}^d \times \mathbb{M}$ and $\mu$ is finite on each set of the form $B \times \mathbb{M}$, where $B$ is a bounded Borel set. We endow the space $\mathcal{N}$ with its usual topology (see, for instance, [10] for the details). Any measure in $\mathcal{N}$ can be represented as
\[
\mu = \sum_{n=1}^{\mu(\mathbb{R}^d \times \mathbb{M})} \delta_{(x_n, z_n)} = \sum_{n=1}^{\mu(\mathbb{R}^d \times \mathbb{M})} \delta_{X_n}, \tag{1}
\]
where $X_n = (x_n, z_n) \in \mathbb{R}^d \times \mathbb{M}$ and $\delta_{(x,z)}$, $(x,z) \in \mathbb{R}^d \times \mathbb{M}$, is the Dirac measure on $\mathcal{B}(\mathbb{R}^d) \otimes \mathcal{B}(\mathbb{M})$: for any $B \in \mathcal{B}(\mathbb{R}^d)$ and $M \in \mathcal{B}(\mathbb{M})$, $\delta_{(x,z)}(B \times M)$ is equal to 1 if $(x,z) \in B \times M$ and equal to 0 otherwise. The elements of the sequence $\{Z_n\}_{n \geq 1} \subset \mathbb{M}$ are called marks. The symbol $\mathrm{supp}(\mu)$ will denote the support of the counting measure $\mu \in \mathcal{N}$, that is, if $\mu$ is given by (1), $\mathrm{supp}(\mu) = \{(x_n, z_n)\}_{n \geq 1}$.

Let $B_r$ be the closed ball centered at 0 with radius $r$, and let $B(x,r) = x + B_r$ be the closed ball centered at $x$ with radius $r$. If $K$ is a compact set then, throughout this paper, we denote by $B_K$ the smallest closed ball centered at 0 which contains $K$.

Let $\mu = \sum_{n \geq 1} \delta_{X_n} = \sum_{n \geq 1} \delta_{(x_n, z_n)}$ be in $\mathcal{N}$. For $B \in \mathcal{B}(\mathbb{R}^d)$ and $M \in \mathcal{B}(\mathbb{M})$, we set
\[
\int_{B \times M} \psi(x)\,\mu(\mathrm{d}x) = \int_{B \times M} \psi(x,z)\,\mu(\mathrm{d}x \times \mathrm{d}z) = \sum_{n \geq 1} \psi(x_n, z_n)\,\delta_{(x_n, z_n)}(B \times M)
\]
for any measurable functional $\psi\colon \mathbb{R}^d \times \mathbb{M} \to \mathbb{R}$ such that the sum is well defined. Let $B \subseteq \mathbb{R}^d$ be a Borel set. Throughout this paper, we denote the set of points of $\mathrm{supp}(\mu)$ in $B \times \mathbb{M}$ by $\mu|_B$ and the number of points of $\mu$ in $B \times \mathbb{M}$ by $\mu_B$. With an abuse of notation, $|B|$ denotes the Lebesgue measure of $B$ and, for real numbers $x \in \mathbb{R}$, the symbol $|x|$ denotes the usual absolute value.

For any measurable functional $\varphi\colon \mathcal{N} \to \mathbb{R}$ and $x \in \mathbb{R}^d \times \mathbb{M}$, we define the increments
\[
D_x^+ \varphi(\mu) = \varphi(\mu + \delta_x) - \varphi(\mu) \quad\text{and}\quad D_x^- \varphi(\mu) = \varphi(\mu) - \varphi(\mu - \delta_x),
\]
where $D_x^- \varphi(\mu)$ is properly defined only if $x \in \mathrm{supp}(\mu)$. Similarly, if $\mu' \in \mathcal{N}$,
\[
D_{\mu'}^+ \varphi(\mu) = \varphi(\mu + \mu') - \varphi(\mu) \quad\text{and}\quad D_{\mu'}^- \varphi(\mu) = \varphi(\mu) - \varphi(\mu - \mu'),
\]
where $D_{\mu'}^- \varphi(\mu)$ is properly defined only if $\mathrm{supp}(\mu') \subseteq \mathrm{supp}(\mu)$.

Let $(\Omega, \mathcal{F}, P)$ be a probability space. A (simple and locally finite) marked point process on $\mathbb{R}^d$ with marks in $\mathbb{M}$ is a measurable mapping from $\Omega$ to $\mathcal{N}$. Throughout the paper, we fix a marked point process $N$ on $\mathbb{R}^d$ with marks in $\mathbb{M}$. For a Borel set $B \subseteq \mathbb{R}^d$, define the following $\sigma$-field on $\Omega$:
\[
\mathcal{F}_B = \sigma\{N(C \times M)\colon C \in \mathcal{B}(\mathbb{R}^d),\ C \subseteq B,\ M \in \mathcal{B}(\mathbb{M})\}.
\]


Let $\mathbb{F}$ and $\mathbb{K}$ respectively denote the family of closed and compact sets of $\mathbb{R}^d$. We endow these families with their standard topology (see [22] and [31]). Let $S\colon \mathcal{N} \to \mathbb{F}$ be a measurable mapping. We say that $S$ is a stopping set if $S(N)$ is a measurable mapping from $\Omega$ to $\mathbb{K}$ such that $\{S(N) \subseteq K\} \in \mathcal{F}_K$ for each $K \in \mathbb{K}$. The stopping $\sigma$-field is the following collection on $\Omega$:
\[
\mathcal{F}_S = \Bigl\{ F \in \bigvee_{K \in \mathbb{K}} \mathcal{F}_K \colon F \cap \{S(N) \subseteq K\} \in \mathcal{F}_K \text{ for all } K \in \mathbb{K} \Bigr\}.
\]
For details and properties of stopping sets and stopping $\sigma$-fields, we refer the reader to [36].

All the random elements considered in this paper are defined on the measurable space $(\Omega, \mathcal{F})$. We endow such a space with the family of probability measures $\{P_\lambda\}_{\lambda > 0}$ such that, under $P_\lambda$, the marked point process
\[
N = \sum_{n \geq 1} \delta_{(X_n, Z_n)} = \sum_{n \geq 1} \delta_{X_n}
\]
is an IMHPP of intensity $\lambda > 0$, that is, the ground point process $\{X_n\}_{n \geq 1}$ is a homogeneous Poisson process with intensity $\lambda$, the random marks $\{Z_n\}_{n \geq 1}$ are independent and identically distributed (i.i.d.) with law $Q$, and the sequences $\{X_n\}_{n \geq 1}$ and $\{Z_n\}_{n \geq 1}$ are independent. We denote by $E_\lambda$ the expectation associated to $P_\lambda$. Note that $N$ is actually a Poisson point process on $\mathbb{R}^d \times \mathbb{M}$ with intensity measure $\lambda\Lambda$, where
\[
\Lambda(\mathrm{d}x \times \mathrm{d}z) = \mathrm{d}x\, Q(\mathrm{d}z)
\]
is the product measure on $\mathbb{R}^d \times \mathbb{M}$ of the Lebesgue measure and $Q$.

Although in Section 2.2 and Section 3 we assume that $\{X_n\}_{n \geq 1}$ is a homogeneous Poisson process on $E = \mathbb{R}^d$, the results therein still hold if $E$ is a Borel subset of $\mathbb{R}^d$ and $\{X_n\}_{n \geq 1}$ is the restriction to $E$ of a homogeneous Poisson process on $\mathbb{R}^d$ (for instance, note that in Section 5.2 we apply the results of Section 2.2 and Section 3 to stochastic models where $\{X_n\}_{n \geq 1}$ are the points of a homogeneous Poisson process on $[0, \infty)$).
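For intuition, an IMHPP restricted to a bounded window can be simulated directly from this description: draw a Poisson number of points with mean $\lambda$ times the window volume, scatter them i.i.d. uniformly, and attach i.i.d. marks with law $Q$. The sketch below is our own illustration (all names are ours, and exponential marks are an arbitrary choice of $Q$).

```python
import math
import random

def imhpp_sample(lam, box, mark_sampler, rng):
    """Independently marked homogeneous Poisson process restricted to a
    rectangular window box = [(a1,b1), ..., (ad,bd)]: a Poisson number of
    points, placed i.i.d. uniformly, each with an i.i.d. mark of law Q."""
    vol = 1.0
    for a, b in box:
        vol *= (b - a)
    # Poisson(lam * vol) count via inversion of the cdf (fine for moderate means).
    n, p = 0, math.exp(-lam * vol)
    c, u = p, rng.random()
    while u > c:
        n += 1
        p *= lam * vol / n
        c += p
    return [([rng.uniform(a, b) for a, b in box], mark_sampler(rng))
            for _ in range(n)]

rng = random.Random(1)
box = [(0.0, 2.0), (0.0, 3.0)]                # a window in R^2, volume 6
pts = [imhpp_sample(1.5, box, lambda r: r.expovariate(1.0), rng)
       for _ in range(5000)]
mean_count = sum(len(sample) for sample in pts) / 5000   # close to 1.5 * 6 = 9
```

The independence of locations, counts, and marks in this construction is exactly the IMHPP structure used throughout the paper.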

2.2. Analyticity of functionals of IMHPPs

Our analysis is based on a result, due to [34], which can be extended to the context of stopping sets as follows. Let $\varphi$ be a measurable functional from $\mathcal{N}$ to $\mathbb{R}$, let $f(\lambda) = E_\lambda[\varphi(N)]$, let $f^{(n)}(\lambda) = \mathrm{d}^n f(\lambda)/\mathrm{d}\lambda^n$, and let $[a,b)$ be an interval of the positive half-line. We consider the following assumptions.

There exists a stopping set $S$ such that $\varphi(N)$ is $\mathcal{F}_S$-measurable. (2)

For any $\lambda \in [a,b)$, there exists $\gamma = \gamma(\lambda) > 1$ such that $E_\lambda[|\varphi(N)|^\gamma] < \infty$. (3)

For any $\lambda \in [a,b)$, there exists $s = s(\lambda) > 0$ such that $E_\lambda[\exp(s|B_{S(N)}|)] < \infty$. (4)

The following theorem holds.

Theorem 1. Assume that (2), (3), and (4) hold. Then $f(\cdot)$ is analytic on $[a,b)$, that is, for a fixed $\lambda_0 \in [a,b)$, we have
\[
f(\lambda) = \sum_{n \geq 0} \frac{f^{(n)}(\lambda_0)}{n!} (\lambda - \lambda_0)^n, \qquad \lambda \in [a,b).
\]

This content downloaded from 195.34.79.223 on Thu, 12 Jun 2014 20:17:08 PMAll use subject to JSTOR Terms and Conditions

Page 6: Monte Carlo Methods for Sensitivity Analysis of Poisson-Driven Stochastic Systems, and Applications

Monte Carlo methods for sensitivity analysis SGSA * 297

Zazanis [34] considered a homogeneous Poisson process $N$ on the half-line and stopping sets of the form $S(N) = [0, T(N)]$, where $T(N)$ is a stopping time with respect to the natural filtration of the Poisson process. Moreover, he assumed the following stronger condition in place of (3):
\[
E_\lambda[\varphi(N)^4] < \infty \quad\text{for any } \lambda \in [a,b).
\]

To prove Theorem 1, we need the following lemmas.

Lemma 1. Under assumptions (2) and (4), for all $\lambda \in [a,b)$, there exists $s' = s'(\lambda) > 0$ such that $E_\lambda[\exp(s' N_{S(N)})] < \infty$.

Lemma 2. Under the assumptions of Theorem 1, we have
\[
E_\lambda\Bigl[|\varphi(N)| \Bigl(\frac{\lambda+\zeta}{\lambda}\Bigr)^{N_{S(N)}} \exp(\zeta |B_{S(N)}|)\Bigr] < \infty, \qquad \zeta \in \Bigl(0, \min\Bigl\{\lambda\bigl(\mathrm{e}^{s'(\gamma-1)/(2\gamma)} - 1\bigr), \frac{s(\gamma-1)}{2\gamma}\Bigr\}\Bigr).
\]
Here $\gamma$ is given by assumption (3), $s$ is given by assumption (4), and $s'$ is determined by Lemma 1.

Proof of Lemma 1. For ease of notation, throughout this proof, we write $S = S(N)$. Let $s$ be given by assumption (4), and set $C = E_\lambda[\mathrm{e}^{s|S|}]$ and $\delta > \mathrm{e}^2\lambda$. For $k > 0$, let $r_k$ be such that $|B_{r_k}| = k/\delta$ (that is, $r_k = (k/(\delta v_d))^{1/d}$, where $v_d$ is the volume of the ball $B_1$). We note that
\[
P_\lambda(N_S > k) \leq P_\lambda\bigl(|B_S| > k/\delta\bigr) + P_\lambda\bigl(N_{B_{r_k}} > k\bigr) \quad\text{for all } k > 0. \tag{5}
\]
By a standard large deviation estimate for the Poisson distribution (see, for instance, [26, Lemma 1.2]) we have, for all $k > \mathrm{e}^2\lambda$,
\[
P_\lambda\bigl(N_{B_{r_k}} > k\bigr) \leq \exp\Bigl(-\frac{k}{2}\log\Bigl(\frac{k}{\lambda|B_{r_k}|}\Bigr)\Bigr) = \exp\Bigl(-\frac{k}{2}\log\Bigl(\frac{\delta}{\lambda}\Bigr)\Bigr). \tag{6}
\]
Therefore, by (5), (6), and Markov's inequality, it follows that, for all $k > \mathrm{e}^2\lambda$,
\[
P_\lambda(N_S > k) \leq C \exp\Bigl(-\frac{sk}{\delta}\Bigr) + \exp\Bigl(-\frac{k}{2}\log\Bigl(\frac{\delta}{\lambda}\Bigr)\Bigr).
\]
Finally, we easily deduce that, for $0 < s' < \min\{s/\delta, \tfrac{1}{2}\log(\delta/\lambda)\}$,
\[
E_\lambda[\exp(s' N_S)] = 1 + (\exp(s') - 1)\sum_{k \geq 0} \exp(s'k)\, P_\lambda(N_S > k) < \infty.
\]
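A Chernoff-type bound of the form used in (6) can be checked numerically: for $k \ge \mathrm{e}^2 m$, $P(\mathrm{Po}(m) > k) \le \exp(-(k/2)\log(k/m))$, which follows from the standard bound $\mathrm{e}^{-m}(\mathrm{e}m/k)^k$. The exact form is our hedged reading of the estimate; [26, Lemma 1.2] may state it slightly differently. The sketch below compares the bound with the exact tail (names are ours).

```python
import math

def poisson_tail(m, k):
    """P(Po(m) > k), computed by summing the pmf up to k."""
    term, cdf = math.exp(-m), math.exp(-m)
    for j in range(1, k + 1):
        term *= m / j
        cdf += term
    return 1.0 - cdf

# Chernoff-type bound: for k >= e^2 * m,
#   P(Po(m) > k) <= exp(-(k/2) * log(k/m)),
# because the standard bound exp(-m) (e m / k)^k is dominated by it
# as soon as log(k/m) >= 2.
m = 2.0
for k in range(int(math.e ** 2 * m) + 1, 60):
    assert poisson_tail(m, k) <= math.exp(-(k / 2) * math.log(k / m))
```

In the proof above, $m = \lambda |B_{r_k}| = \lambda k/\delta$, so the exponent becomes $-(k/2)\log(\delta/\lambda)$, a geometric decay in $k$ once $\delta > \mathrm{e}^2\lambda$.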

Proof of Lemma 2. As above, we set $S = S(N)$. The proof is similar to the proof of Lemma 2 in [34]. Following that proof and using Hölder's inequality (in place of the first application of the Cauchy-Schwarz inequality) and then the Cauchy-Schwarz inequality, we have
\[
E_\lambda\Bigl[|\varphi(N)| \Bigl(\frac{\lambda+\zeta}{\lambda}\Bigr)^{N_S} \exp(\zeta|B_S|)\Bigr]
\leq \bigl(E_\lambda[|\varphi(N)|^\gamma]\bigr)^{1/\gamma} \Bigl(E_\lambda\Bigl[\Bigl(\frac{\lambda+\zeta}{\lambda}\Bigr)^{(\gamma/(\gamma-1))N_S} \exp\Bigl(\frac{\zeta\gamma}{\gamma-1}|B_S|\Bigr)\Bigr]\Bigr)^{(\gamma-1)/\gamma}
\]
\[
\leq \bigl(E_\lambda[|\varphi(N)|^\gamma]\bigr)^{1/\gamma} \Bigl(E_\lambda\Bigl[\Bigl(\frac{\lambda+\zeta}{\lambda}\Bigr)^{(2\gamma/(\gamma-1))N_S}\Bigr]\, E_\lambda\Bigl[\exp\Bigl(\frac{2\zeta\gamma}{\gamma-1}|B_S|\Bigr)\Bigr]\Bigr)^{(\gamma-1)/(2\gamma)}.
\]
The claim follows by Lemma 1 and assumptions (3) and (4).


Proof of Theorem 1. Zazanis' [34] result can be extended as stated by Theorem 1. We briefly outline the main changes in the proof: follow the proofs of Theorem 4 and Corollary 2 of [34], replacing Lemma 2 in [34] with Lemma 2 above and the so-called Cameron-Martin-Girsanov change of measure by the change of measure
\[
\frac{\mathrm{d}P_{\lambda,S}}{\mathrm{d}P_{a,S}} = \Bigl(\frac{\lambda}{a}\Bigr)^{N_{S(N)}} \exp(-|S(N)|(\lambda - a)), \tag{7}
\]
where $P_{\lambda,S}$ denotes the restriction of $P_\lambda$ to $\mathcal{F}_S$ (note that $P_\lambda \ll P_a$ on the stopping $\sigma$-field $\mathcal{F}_S$, with density (7), due to the results in [36]).
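The change of measure (7) lends itself to a direct numerical check. Take the toy functional used later in Section 3.1: on the half-line, $\varphi(N) = 1 \wedge x_1$ with $x_1$ the first Poisson point, and $S(N) = [0, x_1]$, so that $N_{S(N)} = 1$ and $|S(N)| = x_1$. Then (7) says that samples drawn under $P_\lambda$, reweighted by $(\lambda'/\lambda)^{N_{S(N)}} \exp(-|S(N)|(\lambda' - \lambda))$, estimate $E_{\lambda'}[\varphi(N)]$, whose exact value is $(1 - \mathrm{e}^{-\lambda'})/\lambda'$. The sketch below is our own illustration of this identity (all names are ours).

```python
import math
import random

def f_exact(lam):
    # E_lam[min(1, x1)] = integral_0^1 P(x1 > t) dt = (1 - e^{-lam}) / lam.
    return (1.0 - math.exp(-lam)) / lam

rng = random.Random(2)
lam, lam_new, n = 1.0, 1.4, 100000

# Estimate f(lam_new) from samples drawn under P_lam, reweighted by the
# stopped likelihood ratio (lam_new/lam)^{N_S} exp(-|S|(lam_new - lam)),
# with N_S = 1 and |S| = x1 for this functional.
acc = 0.0
for _ in range(n):
    x1 = rng.expovariate(lam)
    weight = (lam_new / lam) * math.exp(-x1 * (lam_new - lam))
    acc += min(1.0, x1) * weight
estimate = acc / n   # close to f_exact(1.4) ≈ 0.538
```

The reweighting only involves the configuration inside the stopping set, which is exactly why the restriction to $\mathcal{F}_S$ matters in (7).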

Remark 1. A function $g\colon \mathbb{R} \to [0,\infty)$ is said to be absolutely monotonic in $[a,b)$ if it has derivatives of all orders that satisfy $g^{(k)}(x) \geq 0$ for all $x \in (a,b)$ and $k \geq 0$. Consider a nonnegative functional $\varphi(\mu, \lambda)$ on $\mathcal{N} \times \mathbb{R}_+$, which depends explicitly on $\lambda$ in such a way that, for each $\mu \in \mathcal{N}$, the function $\lambda \mapsto \varphi(\mu, \lambda)$ is absolutely monotonic in $[a,b)$. If, moreover, assumptions (2), (3), and (4) are satisfied with $\varphi(\mu, \lambda)$ in place of $\varphi(\mu)$, then $f(\lambda) = E_\lambda[\varphi(N, \lambda)]$ is analytic on $[a,b)$. The proof is similar to that of Theorem 1. In particular, note that the absolute monotonicity of $\lambda \mapsto \varphi(N, \lambda)$ implies the absolute monotonicity of
\[
\lambda \mapsto \varphi(N, \lambda) \Bigl(\frac{\lambda}{a}\Bigr)^{N_{S(N)}} \exp(-|S(N)|(\lambda - a))
\]
in $[a,b)$. Indeed, similarly to [34], we can prove that the function
\[
\lambda \mapsto \Bigl(\frac{\lambda}{a}\Bigr)^{N_{S(N)}} \exp(-|S(N)|(\lambda - a))
\]
is absolutely monotonic. The claim follows using the fact that the product of two absolutely monotonic functions is an absolutely monotonic function.

3. Rare perturbation analysis

Sensitivity analysis is concerned with evaluating derivatives of cost functions with respect to parameters of interest. It plays a central role in identifying the most significant system parameters. In this section we give Monte Carlo methods to estimate the derivatives of the cost function $f(\lambda) = E_\lambda[\varphi(N)]$. An important application could be the use of such gradient estimates in stochastic gradient algorithms to find the optimal value $\lambda_0$ that minimizes the cost function.

3.1. Monotone mappings

The following notion of monotonicity is crucial in this work.

Definition 1. Let $S$ be a measurable mapping from $\mathcal{N}$ to $\mathbb{F}$. We say that $S$ is nonincreasing or nondecreasing if, for any $\mu_1, \mu_2$ in $\mathcal{N}$, the inclusion $\mathrm{supp}(\mu_1) \subseteq \mathrm{supp}(\mu_2)$ implies that $S(\mu_1) \supseteq S(\mu_2)$ or $S(\mu_1) \subseteq S(\mu_2)$, respectively. The mapping $S$ is said to be monotone if it is nonincreasing or nondecreasing.

We give a couple of examples as a guide to intuition. Let $\mu = \sum_{n \geq 1} \delta_{x_n}$ be a locally finite counting measure on $[0,\infty)$. Define the functional $\varphi(\mu) = 1 \wedge x_1$, where $x_1$ is the first point of $\mu$ on $[0,\infty)$, and define the measurable mapping $S(\mu) = [0, x_1]$. Then $S$ is nonincreasing, but it is not nondecreasing. Instead, if we define $S(\mu) = [0,1]$ then $S$ is both nonincreasing and nondecreasing.
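A direct check of Definition 1 on the first example (our own illustration): with $S(\mu) = [0, x_1]$, adding points to $\mu$ can only move the first point to the left, so the stopping set shrinks.

```python
import random

# S(mu) = [0, min(mu)] for a finite point set mu in (0, inf).
# If supp(mu1) ⊆ supp(mu2), then min(mu2) <= min(mu1), i.e.
# S(mu1) ⊇ S(mu2): S is nonincreasing in the sense of Definition 1.
rng = random.Random(3)
for _ in range(1000):
    mu1 = [rng.uniform(0.0, 10.0) for _ in range(5)]
    mu2 = mu1 + [rng.uniform(0.0, 10.0) for _ in range(3)]
    assert min(mu2) <= min(mu1)          # S(mu2) ⊆ S(mu1)

# A concrete witness that S is not nondecreasing: the support grew
# from {2.0} to {2.0, 1.0}, yet S shrank from [0, 2] to [0, 1].
assert min([2.0, 1.0]) < min([2.0])
```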

3.2. First-order derivative

In this subsection we state the result concerning the first-order derivative of f (.). Its proof is given in Section 4.

Theorem 2. Under the assumptions of Theorem 1, with $\gamma$ given in (3) such that $\gamma > 2$, and if, moreover, the mapping $S$ is monotone then, for all $\lambda \in [a,b)$,
\[
f'(\lambda) = E_\lambda\bigl[|S(N)|\, D_X^+ \varphi(N)\bigr] \tag{8}
\]
\[
\phantom{f'(\lambda)} = E_\lambda\Bigl[\frac{N_{S(N)}}{\lambda}\, D_{X'}^- \varphi(N)\Bigr], \tag{9}
\]
where $X = (\xi, \zeta)$ and $X' = (\xi', \zeta')$ are random variables on $\mathbb{R}^d \times \mathbb{M}$. Given $S(N)$, $\xi$ is uniformly distributed on $S(N)$; $\zeta$ is independent of $N$ and $\xi$ and has law $Q$. Given the collection of points $N|_{S(N)}$, $X' = (\xi', \zeta')$ is uniformly distributed on the collection.

The closed-form formulae provided by (8) and (9) both give a Monte Carlo method to estimate the derivative of $f(\lambda)$. We also note that if we consider an IMHPP on $(-\infty, 0]$ with marks in $(0, \infty)$, and the assumptions of Theorem 2 hold, our formulae for the first-order derivative coincide with the corresponding formula in [5] (see [5, Equation 10] with $k = 1$). This easily follows by equality (9) and the forthcoming equalities (37) and (38). A similar remark holds for the $n$th-order derivatives (see Theorem 3, below).
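To make (8) and (9) concrete, the sketch below estimates $f'(\lambda)$ for the toy model of Section 3.1 ($\varphi(\mu) = 1 \wedge x_1$, $S(\mu) = [0, x_1]$, no marks), for which $f(\lambda) = (1 - \mathrm{e}^{-\lambda})/\lambda$ and hence $f'(\lambda) = ((\lambda+1)\mathrm{e}^{-\lambda} - 1)/\lambda^2$ are available in closed form. This is our own illustration, not code from the paper; here $N_{S(N)} = 1$ and the only point of $N$ in $S(N)$ is $x_1$, so both estimators simplify drastically.

```python
import math
import random

def f_prime_exact(lam):
    # f(lam) = (1 - e^{-lam})/lam, so f'(lam) = ((lam+1) e^{-lam} - 1)/lam^2.
    return ((lam + 1.0) * math.exp(-lam) - 1.0) / lam ** 2

rng = random.Random(4)
lam, n = 1.0, 200000
acc8 = acc9 = 0.0
for _ in range(n):
    x1 = rng.expovariate(lam)            # first point; |S(N)| = x1, N_S = 1
    x2 = x1 + rng.expovariate(lam)       # second point (needed by (9))
    # (8), virtual: X uniform on S(N) = [0, x1];
    # D_X^+ phi(N) = min(1, X) - min(1, x1) since X <= x1.
    u = rng.uniform(0.0, x1)
    acc8 += x1 * (min(1.0, u) - min(1.0, x1))
    # (9), phantom: X' is the only point of N in S(N), namely x1;
    # D_{X'}^- phi(N) = phi(N) - phi(N - delta_{x1}) = min(1, x1) - min(1, x2).
    acc9 += (1.0 / lam) * (min(1.0, x1) - min(1.0, x2))

est8, est9 = acc8 / n, acc9 / n          # both close to f'(1) = 2/e - 1
```

Note how (8) perturbs by adding a virtual point, whereas (9) perturbs by removing (making a "phantom" of) an existing one; both are unbiased for the same derivative.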

3.3. Higher-order derivatives

We now generalize Theorem 2, stating the result for the nth-order derivatives f (n) (X). The details of the proof are given in Section 4.

Let $\varphi$ be a measurable functional from $\mathcal{N}$ to $\mathbb{R}$. As in [6] and [29], for $\mu \in \mathcal{N}$, $n \geq 1$, and $X_i = (x_i, z_i) \in \mathbb{R}^d \times \mathbb{M}$, $i = 1, \ldots, n$, define
\[
\varphi_{X_1,\ldots,X_n}(\mu) = \varepsilon(X_1,\ldots,X_n) \sum_{k=0}^n (-1)^{n-k} \sum_{\pi \subseteq \{1,\ldots,n\},\, |\pi| = k} \varphi\Bigl(\mu + \sum_{i \in \pi} \delta_{X_i}\Bigr), \tag{10}
\]
where $\varepsilon(X_1,\ldots,X_n) = \mathbf{1}(\{X_1,\ldots,X_n \text{ are distinct}\})$ and the inner sum is over all subsets of $\{1,\ldots,n\}$ with cardinality $k$. We will also consider the functionals
\[
\varphi^{X_1,\ldots,X_n}(\mu) = \varepsilon(X_1,\ldots,X_n)\, \varphi_{X_1,\ldots,X_n}\Bigl(\mu - \sum_{i=1}^n \delta_{X_i}\Bigr), \tag{11}
\]
which are properly defined only if $X_1, \ldots, X_n \in \mathrm{supp}(\mu)$. Note that $\varphi_{X_1,\ldots,X_n}$ (and therefore $\varphi^{X_1,\ldots,X_n}$) is invariant by permutations in the sense that, for any permutation $\sigma$ of $\{1,\ldots,n\}$, $\varphi_{X_{\sigma(1)},\ldots,X_{\sigma(n)}}(\mu) = \varphi_{X_1,\ldots,X_n}(\mu)$. Furthermore, as can be easily seen, reasoning by induction on $n \geq 1$, we find that if $\varepsilon(X_1,\ldots,X_{n+1}) = 1$ then
\[
\varphi_{X_1,\ldots,X_{n+1}}(\mu) = \varphi_{X_1,\ldots,X_n}(\mu + \delta_{X_{n+1}}) - \varphi_{X_1,\ldots,X_n}(\mu) = D_{X_{n+1}}^+ \varphi_{X_1,\ldots,X_n}(\mu) \tag{12}
\]
and
\[
\varphi^{X_1,\ldots,X_{n+1}}(\mu) = \varphi^{X_1,\ldots,X_n}(\mu) - \varphi^{X_1,\ldots,X_n}(\mu - \delta_{X_{n+1}}) = D_{X_{n+1}}^- \varphi^{X_1,\ldots,X_n}(\mu). \tag{13}
\]
In particular, $\varphi_{X_1}(\mu) = D_{X_1}^+ \varphi(\mu)$ and $\varphi^{X_1}(\mu) = D_{X_1}^- \varphi(\mu)$. In the following theorem we use the standard convention that the sum over an empty set is 0 and $k!/(k-n)! = 0$ for $n > k$.

Theorem 3. Under the assumptions of Theorem 2, for all $\lambda \in [a,b)$ and $n \geq 1$,
\[
f^{(n)}(\lambda) = E_\lambda\bigl[|S(N)|^n\, \varphi_{X_1,\ldots,X_n}(N)\bigr] \tag{14}
\]
\[
= E_\lambda\Bigl[\Bigl(\frac{N_{S(N)}}{\lambda}\Bigr)^n \varphi^{X_1',\ldots,X_n'}(N)\Bigr] \tag{15}
\]
\[
= E_\lambda\Bigl[\frac{N_{S(N)}!}{(N_{S(N)} - n)!\,\lambda^n}\, \varphi^{X_1'',\ldots,X_n''}(N)\Bigr], \tag{16}
\]
where, for $1 \leq i \leq n$, $X_i = (\xi_i, \zeta_i)$, $X_i' = (\xi_i', \zeta_i')$, and $X_i'' = (\xi_i'', \zeta_i'')$ are random variables on $\mathbb{R}^d \times \mathbb{M}$. Given $S(N)$, $(\xi_i)_{1 \leq i \leq n}$ are independent and uniformly distributed on $S(N)$, and independent of $N$; $(\zeta_i)_{1 \leq i \leq n}$ are independent, independent of $N$ and $(\xi_i)_{1 \leq i \leq n}$, and have law $Q$. Given the collection of points $N|_{S(N)}$, $(X_i')_{1 \leq i \leq n}$ are independent and uniformly distributed on the collection; $\{X_1'', \ldots, X_n''\}$ is uniformly distributed on the set of subsets of $n$ distinct points of $N|_{S(N)}$.

Note that (16) implies that $f^{(n)}(\lambda) = 0$ if $N_{S(N)} < n$ with probability 1.
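As an illustration of (14) with $n = 2$ (again our own toy example, not from the paper): for $\varphi(\mu) = 1 \wedge x_1$ and $S(\mu) = [0, x_1]$ on the half-line, $f''(\lambda) = (2 - \mathrm{e}^{-\lambda}(\lambda^2 + 2\lambda + 2))/\lambda^3$ in closed form, and (14) reads $f''(\lambda) = E_\lambda[x_1^2\, \varphi_{X_1,X_2}(N)]$ with $\xi_1, \xi_2$ i.i.d. uniform on $[0, x_1]$.

```python
import math
import random

def f_second_exact(lam):
    # Second derivative of f(lam) = (1 - e^{-lam})/lam.
    return (2.0 - math.exp(-lam) * (lam * lam + 2.0 * lam + 2.0)) / lam ** 3

rng = random.Random(5)
lam, n = 1.0, 200000
acc = 0.0
for _ in range(n):
    x1 = rng.expovariate(lam)        # first point; S(N) = [0, x1], |S(N)| = x1
    u = rng.uniform(0.0, x1)         # xi_1, uniform on S(N)
    v = rng.uniform(0.0, x1)         # xi_2, uniform on S(N)
    # phi_{X1,X2}(N) = phi(N+d_u+d_v) - phi(N+d_u) - phi(N+d_v) + phi(N),
    # and adding a point at t <= x1 turns phi into min(1, t).
    diff2 = min(1.0, min(u, v)) - min(1.0, u) - min(1.0, v) + min(1.0, x1)
    acc += x1 * x1 * diff2
est = acc / n                        # close to f''(1) = 2 - 5/e
```

The second-order difference $\varphi_{X_1,X_2}$ is the inclusion-exclusion sum of (10); the same pattern extends to any $n$ at the cost of $2^n$ evaluations of $\varphi$ per sample.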

Putting together Theorems 1 and 3, we obtain the following corollary.

Corollary 1. Under the assumptions of Theorem 2 and the notation of Theorem 3, for all $\lambda \in [a,b)$,
\[
f(\lambda) = E_a\Bigl[\sum_{n \geq 0} \frac{(\lambda - a)^n}{n!} \Bigl(\frac{N_{S(N)}}{a}\Bigr)^n \varphi^{X_1',\ldots,X_n'}(N)\Bigr].
\]

4. Proofs of Theorems 2 and 3

4.1. Integrability lemmas

In the core of the proof of Theorems 2 and 3 we use the integrability of some functionals. In this subsection we prove such integrability results.

We start with a simple continuity result.

Lemma 3. Under the assumptions of Theorem 1, for all $\alpha \in [0, \gamma(\lambda))$, the function $\lambda' \mapsto E_{\lambda'}[|\varphi(N)|^\alpha]$ is defined in an open neighborhood of $\lambda$ and is continuous at $\lambda$.

Proof. Throughout this proof, we set $S = S(N)$. The conclusion is trivial for $\alpha = 0$. Assume that $\alpha > 0$. We prove that
\[
\lim_{\varepsilon \to 0^-} E_{\lambda+\varepsilon}[|\varphi(N)|^\alpha] = E_\lambda[|\varphi(N)|^\alpha]. \tag{17}
\]
A similar argument can be used to prove the same limit as $\varepsilon \to 0^+$. Let $s = s(\lambda) > 0$ be given by assumption (4), let $\beta = \gamma/\alpha > 1$, and let $\varepsilon \in (\max\{-\lambda, -s(\beta-1)/\beta\}, 0)$. By the Cameron-Martin-Girsanov change of measure (7), it follows that
\[
E_{\lambda+\varepsilon}[|\varphi(N)|^\alpha] = E_\lambda\Bigl[|\varphi(N)|^\alpha \Bigl(\frac{\lambda+\varepsilon}{\lambda}\Bigr)^{N_S} \mathrm{e}^{-\varepsilon|S|}\Bigr].
\]
By the choice of $\varepsilon$ we have
\[
|\varphi(N)|^\alpha \Bigl(\frac{\lambda+\varepsilon}{\lambda}\Bigr)^{N_S} \mathrm{e}^{-\varepsilon|S|} \leq |\varphi(N)|^\alpha \mathrm{e}^{-\varepsilon|S|} \leq |\varphi(N)|^\alpha \exp\Bigl(\frac{s(\beta-1)}{\beta}|S|\Bigr).
\]
Now, Hölder's inequality and assumptions (3) and (4) give
\[
E_\lambda\Bigl[|\varphi(N)|^\alpha \exp\Bigl(\frac{s(\beta-1)}{\beta}|S|\Bigr)\Bigr] \leq E_\lambda[|\varphi(N)|^\gamma]^{\alpha/\gamma}\, E_\lambda[\mathrm{e}^{s|S|}]^{(\beta-1)/\beta} < \infty.
\]
The limit (17) is then a consequence of Lebesgue's dominated convergence theorem.

For any $\mu \in \mathcal{N}$, $n \geq 1$, and $X_i = (x_i, z_i) \in \mathbb{R}^d \times \mathbb{M}$, $i = 1, \ldots, n$, define the functionals
\[
\psi(\mu) = \int_{(\mathbb{R}^d \times \mathbb{M})^n} |\varphi_{X_1,\ldots,X_n}(\mu)|\, \Lambda(\mathrm{d}X_1) \cdots \Lambda(\mathrm{d}X_n)
\]
and (with the convention that the sum over an empty set is 0)
\[
\chi(\mu) = \sum_{\{X_1,\ldots,X_n\} \subseteq \mathrm{supp}(\mu)} |\varphi^{X_1,\ldots,X_n}(\mu)|,
\]
where the sum is taken over sets of $n$ distinct points of $\mu$. The following lemmas hold.

Lemma 4. Under the assumptions of Theorem 1, and if, moreover, the mapping $S$ is nonincreasing then, for all $\lambda \in [a,b)$ and $\alpha \in [1,\gamma)$, $E_\lambda[\psi(N)^\alpha] < \infty$.

Lemma 5. Under the assumptions of Theorem 1, and if, moreover, the mapping $S$ is nondecreasing then, for all $\lambda \in [a,b)$ and $\alpha \in [1,\gamma)$, $E_\lambda[\chi(N)^\alpha] < \infty$.

Proof of Lemma 4. For ease of notation, set $P = P_\lambda$ and $E = E_\lambda$. Let $q > 1$ be such that $q\alpha < \gamma$, and let $p > 1$ be such that $1/p + 1/q = 1$. Moreover, let $\tilde N = \sum_{n \geq 1} \delta_{(\tilde X_n, \tilde Z_n)}$ be an IMHPP with intensity $\Delta\lambda$, such that $\tilde Z_1$ has law $Q$ and $\tilde N$ is independent of $N$. Here $\Delta\lambda$ is chosen so that $\lambda + \Delta\lambda < b$ and $E[\exp(2p\alpha\Delta\lambda |S(N)|)] < \infty$. Reasoning by induction on $n \geq 1$, we find that assumption (2) and the monotonicity of $S$ imply that
\[
\varphi_{X_1,\ldots,X_n}(N) = 0 \quad\text{for any } (x_1,\ldots,x_n) \notin S(N)^n. \tag{18}
\]
Indeed, for $n = 1$, the $\mathcal{F}_S$-measurability of $\varphi(N)$ and the inclusion $S(N + \delta_X) \subseteq S(N)$ for all $X = (x,z) \in \mathbb{R}^d \times \mathbb{M}$ imply that $\varphi(N + \delta_X) = \varphi(N)$ for each $X = (x,z) \in (\mathbb{R}^d \setminus S(N)) \times \mathbb{M}$. The general case is proved similarly. Therefore,
\[
\psi(N) = \int_{(S(N) \times \mathbb{M})^n} |\varphi_{X_1,\ldots,X_n}(N)|\, \Lambda(\mathrm{d}X_1) \cdots \Lambda(\mathrm{d}X_n).
\]
By the superposition property of Poisson processes, $N + \tilde N$ is an IMHPP with intensity $\lambda + \Delta\lambda$. It follows that
\[
\psi(N) \leq \sum_{k=0}^n \binom{n}{k} |S(N)|^{n-k} \int_{(S(N) \times \mathbb{M})^k} \Bigl|\varphi\Bigl(N + \sum_{i=1}^k \delta_{X_i}\Bigr)\Bigr|\, \Lambda(\mathrm{d}X_1) \cdots \Lambda(\mathrm{d}X_k)
\]
\[
\leq |S(N)|^n \sum_{k=0}^n \binom{n}{k} E\bigl[|\varphi(N + \tilde N)| \bigm| \tilde N_{S(N)} = k,\, N\bigr]
\]
\[
\leq |S(N)|^n\, E[|\varphi(N + \tilde N)| \mid N] \sum_{k=0}^n \binom{n}{k} \frac{1}{P(\tilde N_{S(N)} = k \mid N)}
\]
\[
= |S(N)|^n\, \mathrm{e}^{\Delta\lambda |S(N)|}\, E[|\varphi(N + \tilde N)| \mid N] \sum_{k=0}^n \binom{n}{k} \frac{k!}{(\Delta\lambda |S(N)|)^k}
\]
\[
\leq \frac{(n+1)!}{(\Delta\lambda)^n}\, \mathrm{e}^{2\Delta\lambda |S(N)|}\, E[|\varphi(N + \tilde N)| \mid N].
\]
Using Jensen's inequality and Hölder's inequality, we deduce that
\[
E[\psi(N)^\alpha] \leq \Bigl(\frac{(n+1)!}{(\Delta\lambda)^n}\Bigr)^\alpha E\bigl[\mathrm{e}^{2\alpha\Delta\lambda |S(N)|}\, (E[|\varphi(N + \tilde N)| \mid N])^\alpha\bigr]
\]
\[
\leq \Bigl(\frac{(n+1)!}{(\Delta\lambda)^n}\Bigr)^\alpha E\bigl[\mathrm{e}^{2\alpha\Delta\lambda |S(N)|}\, E[|\varphi(N + \tilde N)|^\alpha \mid N]\bigr]
\]
\[
\leq \Bigl(\frac{(n+1)!}{(\Delta\lambda)^n}\Bigr)^\alpha E[\mathrm{e}^{2p\alpha\Delta\lambda |S(N)|}]^{1/p}\, E[|\varphi(N + \tilde N)|^{q\alpha}]^{1/q} < \infty,
\]
where the finiteness of the last expectation, for small enough $\Delta\lambda$, follows by Lemma 3 since $N + \tilde N$ is an IMHPP of intensity $\lambda + \Delta\lambda$ and $q\alpha < \gamma$.

Proof of Lemma 5. Set $P = P_\lambda$ and $E = E_\lambda$, and let $N^{(n)}$ and $N^{(n)}_{|S(N)}$ respectively be the sets of $n$-tuples of $n$ distinct points of $N$ and of $N|_{S(N)}$. Let $p, q > 1$ be such that $\alpha q < \gamma$ and $1/p + 1/q = 1$. Let $\{\beta_n\}_{n \geq 1}$ be an i.i.d. sequence of Bernoulli random variables, independent of $N$ and defined by
\[
P(\beta_n = 0) = 1 - P(\beta_n = 1) = \frac{\Delta\lambda}{\lambda}.
\]
Consider the thinned IMHPP of intensity $\lambda - \Delta\lambda$ given by $\tilde N = \sum_{n \geq 1} \beta_n \delta_{(X_n, Z_n)}$. Let $s > 0$ be such that $E[\exp(s N_{S(N)})] < \infty$ (see Lemma 1). Here we choose $\Delta\lambda$ in such a way that $2p\alpha \log(\lambda/(\lambda - \Delta\lambda)) < s$. Reasoning by induction on $n \geq 1$, it can be proved that assumption (2) and the monotonicity of $S$ imply that
\[
\varphi^{X_1,\ldots,X_n}(N) = 0 \quad\text{for } X_1,\ldots,X_n \in \mathrm{supp}(N) \text{ such that } \{X_1,\ldots,X_n\} \not\subseteq \mathrm{supp}(N|_{S(N)}). \tag{19}
\]
Indeed, for $n = 1$, the $\mathcal{F}_S$-measurability of $\varphi(N)$ and the inclusion $S(N - \delta_X) \supseteq S(N)$ for all $X = (x,z) \in \mathrm{supp}(N)$ imply that $\varphi(N - \delta_X) = \varphi(N)$ for each $X \in \mathrm{supp}(N)$ such that $x \in \mathbb{R}^d \setminus S(N)$. The general case is proved similarly. Therefore,
\[
\chi(N) = \sum_{\{X_1^*,\ldots,X_n^*\} \subseteq \mathrm{supp}(N)} |\varphi^{X_1^*,\ldots,X_n^*}(N)| \tag{20}
\]
\[
= \frac{1}{n!} \sum_{(X_1^*,\ldots,X_n^*) \in N^{(n)}} |\varphi^{X_1^*,\ldots,X_n^*}(N)| \tag{21}
\]
\[
= \frac{1}{n!} \sum_{(X_1^*,\ldots,X_n^*) \in N^{(n)}_{|S(N)}} |\varphi^{X_1^*,\ldots,X_n^*}(N)| \tag{22}
\]
\[
= \frac{1}{n!} E\bigl[(N_{S(N)})^n\, |\varphi^{X_1',\ldots,X_n'}(N)| \bigm| N\bigr], \tag{23}
\]
where the equality in (21) follows from the invariance by permutations of $\varphi^{X_1,\ldots,X_n}(\mu)$, the equality in (22) follows by (19), and the equality in (23) follows by the definition of $(X_i')_{1 \leq i \leq n}$ (see the statement of Theorem 3). If $N_{S(N)} < n$ then $\chi(N) = 0$. On the other hand, if $N_{S(N)} \geq n$, we deduce that
\[
\chi(N) \leq E\bigl[(N_{S(N)})^n\, |\varphi^{X_1',\ldots,X_n'}(N)| \bigm| N\bigr]
\]
\[
\leq (N_{S(N)})^n\, E\Bigl[\sum_{k=0}^n \sum_{\{i_1,\ldots,i_k\} \subseteq \{1,\ldots,n\}} \Bigl|\varphi\Bigl(N - \sum_{j=1}^k \delta_{X_{i_j}'}\Bigr)\Bigr| \Bigm| N\Bigr]
\]
\[
= (N_{S(N)})^n \sum_{k=0}^n \binom{n}{k} E\bigl[|\varphi(\tilde N)| \bigm| N,\, N_{S(N)} - \tilde N_{S(N)} = k\bigr]
\]
\[
\leq (N_{S(N)})^n\, E[|\varphi(\tilde N)| \mid N] \sum_{k=0}^n \binom{n}{k} \frac{1}{P(N_{S(N)} - \tilde N_{S(N)} = k \mid N)}
\]
\[
= (N_{S(N)})^n\, E[|\varphi(\tilde N)| \mid N] \sum_{k=0}^n \binom{n}{k} \frac{k!\,(N_{S(N)} - k)!}{N_{S(N)}!} \Bigl(\frac{\lambda}{\Delta\lambda}\Bigr)^k \Bigl(1 - \frac{\Delta\lambda}{\lambda}\Bigr)^{k - N_{S(N)}}
\]
\[
\leq K (N_{S(N)})^n \Bigl(\frac{\lambda}{\lambda - \Delta\lambda}\Bigr)^{N_{S(N)}} E[|\varphi(\tilde N)| \mid N],
\]
where $K = (\lambda/\Delta\lambda)^n \sum_{k=0}^n k! \binom{n}{k}$. Finally, using Jensen's inequality, Hölder's inequality, and the Cauchy-Schwarz inequality, we obtain
\[
E[\chi(N)^\alpha] \leq K^\alpha E\Bigl[(N_{S(N)})^{\alpha n} \Bigl(\frac{\lambda}{\lambda - \Delta\lambda}\Bigr)^{\alpha N_{S(N)}} E[|\varphi(\tilde N)| \mid N]^\alpha\Bigr]
\]
\[
\leq K^\alpha E\Bigl[(N_{S(N)})^{p\alpha n} \Bigl(\frac{\lambda}{\lambda - \Delta\lambda}\Bigr)^{p\alpha N_{S(N)}}\Bigr]^{1/p} E[|\varphi(\tilde N)|^{q\alpha}]^{1/q}
\]
\[
\leq K^\alpha E[(N_{S(N)})^{2p\alpha n}]^{1/2p}\, E\Bigl[\Bigl(\frac{\lambda}{\lambda - \Delta\lambda}\Bigr)^{2p\alpha N_{S(N)}}\Bigr]^{1/2p} E[|\varphi(\tilde N)|^{q\alpha}]^{1/q} < \infty,
\]
where the finiteness follows by Lemma 1 together with the choice of $\Delta\lambda$, and by Lemma 3, since $\tilde N$ is an IMHPP of intensity $\lambda - \Delta\lambda$ and $q\alpha < \gamma$.

4.2. Case of nonincreasing mappings

In this subsection we prove the closed-form formulae given by (8) and (14) in the case of nonincreasing mappings $S$. More precisely, the following propositions hold.

Proposition 1. Under the assumptions of Theorem 2, and if, moreover, the mapping $S$ is nonincreasing then, for all $\lambda \in [a,b)$, (8) holds.

Proposition 2. Under the assumptions of Theorem 2, and if, moreover, the mapping $S$ is nonincreasing then, for all $\lambda \in [a,b)$ and $n \geq 1$, (14) holds.

We start by proving Proposition 1. The proof is based on the virtual rare perturbation method considered in [4].

Proof of Proposition 1. For ease of notation, we set P = P_λ and E = E_λ. A straightforward computation gives

\[
\mathrm E\big[\,|D^{+}_{X}\varphi(N)|\ \big|\ N\,\big]
=\frac{1}{|S(N)|}\int_{S(N)\times M}|D^{+}_{x}\varphi(N)|\,\Lambda(dx)
=\frac{1}{|S(N)|}\int_{\mathbb R^{d}\times M}|D^{+}_{x}\varphi(N)|\,\Lambda(dx)
\tag{24}
\]

almost surely (a.s.), where the latter equality follows by assumption (2) and by the assumption that S is nonincreasing. Indeed, as in the proof of Lemma 4, the F_S-measurability of φ(N) and the inclusion S(N + δ_X) ⊆ S(N) for each x ∈ ℝ^d × M imply that φ(N + δ_X) = φ(N) for each X = (x, z) ∈ (ℝ^d \ S(N)) × M. Thus, the integrability of the random variable |S(N)| D^+_X φ(N) follows by Lemma 4. Now, as in Lemma 4, let Ñ = Σ_{n≥1} δ_{X̃_n} be an IMHPP with intensity Δλ such that Z̃_1 has law Q and Ñ is independent of N. By the superposition property of Poisson processes, N + Ñ is an IMHPP with intensity λ + Δλ. Here we choose Δλ small enough so that λ + Δλ < b. Owing to the monotonicity of S, we have S(N + Ñ) ⊆ S(N), and so the F_S-measurability of φ(N) yields

\[
\varphi(N)=\varphi(N|_{S(N)})\quad\text{and}\quad\varphi(N+\tilde N)=\varphi((N+\tilde N)|_{S(N)}).
\tag{25}
\]
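The superposition step used here can be checked numerically. The following sketch (our own illustration, not part of the original argument; the intensity values are arbitrary) verifies that convolving the counting distributions of two independent Poisson variables with means λ and Δλ reproduces the Poisson law with mean λ + Δλ.

```python
import math

def poisson_pmf(k, lam):
    # P(K = k) for K ~ Poisson(lam)
    return math.exp(-lam) * lam**k / math.factorial(k)

def convolve_pmf(lam1, lam2, k):
    # P(K1 + K2 = k) for independent Poisson counts, by direct convolution
    return sum(poisson_pmf(j, lam1) * poisson_pmf(k - j, lam2)
               for j in range(0, k + 1))

lam, dlam = 1.7, 0.3
for k in range(25):
    # superposition: Poisson(lam) + Poisson(dlam) = Poisson(lam + dlam)
    assert abs(poisson_pmf(k, lam + dlam) - convolve_pmf(lam, dlam, k)) < 1e-12
```

The same identity is what makes N + Ñ an IMHPP of intensity λ + Δλ on every bounded set.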

We then note that

\[
\begin{aligned}
\frac{f(\lambda+\Delta\lambda)-f(\lambda)}{\Delta\lambda}
&=\frac{1}{\Delta\lambda}\,\mathrm E\big[D^{+}_{\tilde N|_{S(N)}}\varphi(N)\big]\\
&=\frac{1}{\Delta\lambda}\,\mathrm E\big[\mathbf 1(\tilde N_{S(N)}=1)\,D^{+}_{\tilde N|_{S(N)}}\varphi(N)\big]
\hspace{6.5em}\text{(26)}\\
&\quad+\frac{1}{\Delta\lambda}\,\mathrm E\big[\mathbf 1(\tilde N_{S(N)}\ge 2)\,D^{+}_{\tilde N|_{S(N)}}\varphi(N)\big],
\hspace{5.1em}\text{(27)}
\end{aligned}
\]

where the second equality follows by noticing that, by (25), on the event {Ñ_{S(N)} = 0} we have φ(N) = φ(N + Ñ). Fix α ∈ (2, γ(λ)). By Lemma 3, the function λ' ↦ E_{λ'}[|φ(N)|^α] is continuous at λ. Therefore, there exists a positive constant C such that E[|φ(N)|^α] ≤ C^α and E[|φ(N + Ñ)|^α] ≤ C^α for small enough Δλ. Using Hölder's inequality and Minkowski's inequality, we have

\[
\begin{aligned}
\Big|\frac{1}{\Delta\lambda}\,\mathrm E\big[\mathbf 1(\tilde N_{S(N)}\ge 2)\,D^{+}_{\tilde N|_{S(N)}}\varphi(N)\big]\Big|
&\le\frac{1}{\Delta\lambda}\big(\mathrm P(\tilde N_{S(N)}\ge 2)\big)^{1-1/\alpha}\big(\mathrm E\big[|\varphi(N+\tilde N|_{S(N)})-\varphi(N)|^{\alpha}\big]\big)^{1/\alpha}\\
&\le\frac{2C}{\Delta\lambda}\Big(\mathrm E\Big[\frac{(\Delta\lambda\,|S(N)|)^{2}}{2}\Big]\Big)^{1-1/\alpha}\\
&\le 2C\,(\Delta\lambda)^{2(1-1/\alpha)-1}\big(\mathrm E[\,|S(N)|^{2}\,]\big)^{1-1/\alpha}.
\hspace{3.2em}\text{(28)}
\end{aligned}
\]


By assumption (4) we have E[|S(N)|²] < ∞. Therefore, by inequality (28) and since 2(1 − 1/α) > 1 for α > 2, the term in (27) goes to 0 as Δλ → 0. Since Ñ is independent of N, it follows that

\[
\begin{aligned}
\mathrm E\big[\mathbf 1(\tilde N_{S(N)}=1)\,D^{+}_{\tilde N|_{S(N)}}\varphi(N)\big]
&=\mathrm E\big[\mathrm E[\mathbf 1(\tilde N_{S(N)}=1)\,D^{+}_{\tilde N|_{S(N)}}\varphi(N)\mid N]\big]\\
&=\mathrm E\big[\Delta\lambda\,|S(N)|\,e^{-\Delta\lambda|S(N)|}\,\mathrm E[D^{+}_{\tilde N|_{S(N)}}\varphi(N)\mid N,\ \tilde N_{S(N)}=1]\big]\\
&=\mathrm E\big[\Delta\lambda\,|S(N)|\,e^{-\Delta\lambda|S(N)|}\,\mathrm E[D^{+}_{X}\varphi(N)\mid N]\big]\\
&=\Delta\lambda\,\mathrm E\big[|S(N)|\,e^{-\Delta\lambda|S(N)|}\,D^{+}_{X}\varphi(N)\big].
\end{aligned}
\]

Thus, by the dominated convergence theorem, the term in (26) converges to E[|S(N)| D^+_X φ(N)] as Δλ → 0.

Proof of Proposition 2. Set E = E_λ, and note that by (18) we have

\[
\bar\psi(N)=\mathrm E\big[\,|S(N)|^{n}\,|\varphi^{X_1,\dots,X_n}(N)|\ \big|\ N\,\big].
\]

Thus, the integrability of |S(N)|^n φ^{X_1,…,X_n}(N) for any n ≥ 1 follows by Lemma 4. We prove (14) by induction on n ≥ 1. As already shown, it holds for n = 1. Let ψ be the functional defined as ψ̄ without the absolute value. By (2) and (18), it follows that ψ(N) is F_S-measurable. Assume the inductive hypothesis f^{(n)}(λ) = E[ψ(N)] for some n ≥ 1. Fix α ∈ (2, γ); by Lemma 4 we have E[|ψ(N)|^α] < ∞. Define the random variable X_{n+1} = (ξ_{n+1}, ζ_{n+1}) with values in ℝ^d × M as follows: given S(N), ξ_{n+1} is uniformly distributed on S(N) and is independent of N and X_1, …, X_n; ζ_{n+1} has law Q and is independent of N, X_1, …, X_n, and ξ_{n+1}. By Proposition 1 we obtain

\[
f^{(n+1)}(\lambda)=\mathrm E\big[\,|S(N)|\,D^{+}_{X_{n+1}}\psi(N)\,\big].
\]

The conclusion follows noting that, by (12) and (18), we have

\[
\begin{aligned}
\mathrm E\big[\,|S(N)|\,D^{+}_{X_{n+1}}\psi(N)\,\big]
&=\int_{\mathbb R^{d}\times M}\mathrm E[D^{+}_{x}\psi(N)]\,\Lambda(dx)\\
&=\int_{(\mathbb R^{d}\times M)^{n+1}}\mathrm E[\varphi^{x_1,\dots,x_{n+1}}(N)]\,\Lambda(dx_1)\cdots\Lambda(dx_{n+1})\\
&=\mathrm E\big[\,|S(N)|^{n+1}\varphi^{X_1,\dots,X_{n+1}}(N)\,\big].
\end{aligned}
\]

4.3. Case of nondecreasing mappings

In this subsection we prove the closed-form formulae given by (9) and (15) in the case of nondecreasing mappings S. More precisely, the following propositions hold.

Proposition 3. Under the assumptions of Theorem 2 and if, moreover, the mapping S is nondecreasing, then, for all λ ∈ [a, b), f'(λ) equals the term in (9).

Proposition 4. Under the assumptions of Theorem 2 and if, moreover, the mapping S is nondecreasing, then, for all λ ∈ [a, b) and n ≥ 1, f^{(n)}(λ) equals the term in (15).

We first prove Proposition 3. For this we use the so-called phantom rare perturbation method introduced in [8].


Proof of Proposition 3. Set P = P_λ and E = E_λ. As in the proof of Lemma 5, the F_S-measurability of φ(N) and the inclusion S(N − δ_X) ⊆ S(N) for all X = (x, z) ∈ supp(N) imply that φ(N − δ_X) = φ(N) for each X ∈ supp(N) such that x ∈ ℝ^d \ S(N). Therefore,

\[
\int_{S(N)\times M}|D^{-}_{x}\varphi(N)|\,N(dx)=\int_{\mathbb R^{d}\times M}|D^{-}_{x}\varphi(N)|\,N(dx).
\tag{29}
\]

Thus, the integrability of the random variable ∫_{S(N)×M} D^-_x φ(N) N(dx) follows from Lemma 5. Now note that

\[
\mathrm E\Big[\frac{N_{S(N)}}{\lambda}\,D^{-}_{X_1''}\varphi(N)\Big]
=\mathrm E\Big[\mathrm E\Big[\frac{N_{S(N)}}{\lambda}\,D^{-}_{X_1''}\varphi(N)\ \Big|\ N|_{S(N)}\Big]\Big]
=\frac{1}{\lambda}\,\mathrm E\Big[\int_{S(N)\times M}D^{-}_{x}\varphi(N)\,N(dx)\Big].
\tag{30}
\]

Finally, we show that

\[
f'(\lambda)=\frac{1}{\lambda}\,\mathrm E\Big[\int_{S(N)\times M}D^{-}_{x}\varphi(N)\,N(dx)\Big].
\]

Let {β_n}_{n≥1} be the sequence of Bernoulli random variables defined in the proof of Lemma 5. Consider the thinned IMHPP of intensity λ − Δλ given by Ñ = Σ_{n≥1} β_n δ_{(X_n,Z_n)}. By assumption (2) and the monotonicity of S, it follows that φ(N) = φ(N|_{S(N)}) and φ(Ñ) = φ(Ñ|_{S(N)}). By the independence of {β_n}_{n≥1} and N, we have, for 0 ≤ k ≤ N_{S(N)},

\[
\mathrm P\big(N_{S(N)}-\tilde N_{S(N)}=k\mid N\big)
=\binom{N_{S(N)}}{k}\Big(\frac{\Delta\lambda}{\lambda}\Big)^{k}\Big(1-\frac{\Delta\lambda}{\lambda}\Big)^{N_{S(N)}-k}.
\]
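The thinning relation can be illustrated numerically: mixing the binomial kernel above over a Poisson(λ|S|) number of points must reproduce a Poisson((λ − Δλ)|S|) count, which is the distributional statement behind the phantom construction. The sketch below (our own check, with arbitrary parameter values) verifies this by direct summation.

```python
import math

def pois(k, lam):
    # Poisson pmf with mean lam
    return math.exp(-lam) * lam**k / math.factorial(k)

def binom(k, n, p):
    # Binomial pmf: k retained points out of n, retention probability p
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

lam, dlam, area = 2.0, 0.5, 3.0    # intensity, perturbation, |S|
keep = (lam - dlam) / lam          # retention probability of the thinning

# P(thinned count = j) = sum_n P(N_S = n) * Binomial(j; n, keep)
for j in range(15):
    mixed = sum(pois(n, lam * area) * binom(j, n, keep) for n in range(j, 120))
    assert abs(mixed - pois(j, (lam - dlam) * area)) < 1e-12
```

The truncation of the outer sum at 120 is harmless here, since the Poisson(6) tail beyond that point is negligible.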

This equation implies that

\[
\mathrm E\big[\mathbf 1(N_{S(N)}-\tilde N_{S(N)}=k)(\varphi(N)-\varphi(\tilde N))\big]
=\mathrm E\bigg[\binom{N_{S(N)}}{k}\Big(\frac{\Delta\lambda}{\lambda}\Big)^{k}\Big(1-\frac{\Delta\lambda}{\lambda}\Big)^{N_{S(N)}-k}
\mathrm E\big[\varphi(N)-\varphi(\tilde N)\ \big|\ N,\ N_{S(N)}-\tilde N_{S(N)}=k\big]\bigg].
\]

Since φ(N) = φ(N|_{S(N)}) and φ(Ñ) = φ(Ñ|_{S(N)}), we have E[1(N_{S(N)} − Ñ_{S(N)} = 0)(φ(N) − φ(Ñ))] = 0. Therefore,

\[
\begin{aligned}
\frac{f(\lambda)-f(\lambda-\Delta\lambda)}{\Delta\lambda}
&=\frac{\mathrm E[\varphi(N)-\varphi(\tilde N)]}{\Delta\lambda}
=\frac{1}{\Delta\lambda}\sum_{k\ge 1}\mathrm E\big[\mathbf 1(N_{S(N)}-\tilde N_{S(N)}=k)(\varphi(N)-\varphi(\tilde N))\big]\\
&=\frac{1}{\Delta\lambda}\,\mathrm E\bigg[N_{S(N)}\,\frac{\Delta\lambda}{\lambda}\Big(1-\frac{\Delta\lambda}{\lambda}\Big)^{N_{S(N)}-1}
\mathrm E\big[\varphi(N)-\varphi(\tilde N)\ \big|\ N,\ N_{S(N)}-\tilde N_{S(N)}=1\big]\bigg]
\hspace{1.2em}\text{(31)}\\
&\quad+\frac{1}{\Delta\lambda}\,\mathrm E\big[\mathbf 1(N_{S(N)}-\tilde N_{S(N)}\ge 2)(\varphi(N)-\varphi(\tilde N))\big].
\hspace{10.1em}\text{(32)}
\end{aligned}
\]


We note that, given N and the event {N_{S(N)} − Ñ_{S(N)} = 1}, the law of the random variable φ(Ñ) is equal to the law of φ(N − δ_{X''}). Thus,

\[
\mathrm E\big[\varphi(N)-\varphi(\tilde N)\ \big|\ N,\ N_{S(N)}-\tilde N_{S(N)}=1\big]
=\frac{1}{N_{S(N)}}\int_{S(N)\times M}\big(\varphi(N)-\varphi(N-\delta_{x})\big)\,N(dx).
\tag{33}
\]

By the dominated convergence theorem and (33), it follows that, as Δλ → 0, the term in (31) goes to

\[
\frac{1}{\lambda}\,\mathrm E\Big[\int_{S(N)\times M}D^{-}_{x}\varphi(N)\,N(dx)\Big].
\]

The proof of the proposition is complete if we prove that the term in (32) goes to 0 as Δλ → 0. Fix α ∈ (2, γ(λ)). By Lemma 3, the function λ' ↦ E_{λ'}[|φ(N)|^α] is continuous at λ. Therefore, there exists a positive constant C such that E[|φ(N)|^α] ≤ C^α and E[|φ(Ñ)|^α] ≤ C^α for small enough Δλ. Using Hölder's inequality and Minkowski's inequality, we have

\[
\begin{aligned}
\big|\mathrm E\big[\mathbf 1(N_{S(N)}-\tilde N_{S(N)}\ge 2)(\varphi(N)-\varphi(\tilde N))\big]\big|
&\le\big(\mathrm P(N_{S(N)}-\tilde N_{S(N)}\ge 2)\big)^{1-1/\alpha}\big(\mathrm E[|\varphi(N)-\varphi(\tilde N)|^{\alpha}]\big)^{1/\alpha}\\
&\le 2C\,\bigg(\mathrm E\bigg[\sum_{k=2}^{N_{S(N)}}\binom{N_{S(N)}}{k}\Big(\frac{\Delta\lambda}{\lambda}\Big)^{k}\Big(1-\frac{\Delta\lambda}{\lambda}\Big)^{N_{S(N)}-k}\bigg]\bigg)^{1-1/\alpha}.
\hspace{1em}\text{(34)}
\end{aligned}
\]

As can be easily checked, for any n ≥ 2 and p ∈ (0, 1),

\[
\sum_{m=2}^{n}\binom{n}{m}p^{m}(1-p)^{n-m}\le\frac{1}{2}\,n^{2}p^{2}.
\tag{35}
\]
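Inequality (35) is elementary — the left-hand side is P(Bin(n, p) ≥ 2) ≤ C(n, 2) p² ≤ n²p²/2 by a union bound over pairs — and it can be confirmed by brute force over a grid of parameters (our own check, not part of the original proof):

```python
import math

def upper_tail(n, p):
    # P(Bin(n, p) >= 2) = sum_{m=2}^{n} C(n, m) p^m (1 - p)^{n - m}
    return sum(math.comb(n, m) * p**m * (1 - p)**(n - m)
               for m in range(2, n + 1))

# Check the bound of (35) on a grid of n and p
for n in range(2, 40):
    for p in (0.001, 0.01, 0.05, 0.1, 0.3, 0.5, 0.9):
        assert upper_tail(n, p) <= 0.5 * n**2 * p**2 + 1e-15
```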

Thus, by (34) and (35), the absolute value of the term in (32) can be bounded from above by

\[
\frac{2C}{\Delta\lambda}\bigg(\frac{1}{2}\Big(\frac{\Delta\lambda}{\lambda}\Big)^{2}\mathrm E\big[N_{S(N)}^{2}\big]\bigg)^{1-1/\alpha},
\]

and this quantity goes to 0 as Δλ → 0, since 2(1 − 1/α) > 1 and E[N²_{S(N)}] < ∞ by Lemma 1.

Proof of Proposition 4. Again, set E = E_λ. By (20)–(23), we obtain

\[
\mathrm E\Big[\Big(\frac{N_{S(N)}}{\lambda}\Big)^{n}\big|\varphi^{X_1'',\dots,X_n''}(N)\big|\ \Big|\ N\Big]=\bar\chi(N).
\]

Thus, the integrability of (N_{S(N)}/λ)^n φ^{X''_1,…,X''_n}(N) for any n ≥ 1 follows by Lemma 5. We prove (15) by induction on n ≥ 1. By Proposition 3, (15) holds for n = 1. Let χ be the functional defined by

\[
\chi(\mu)=\frac{1}{\lambda^{n}}\sum_{(x_1,\dots,x_n)}\varphi^{x_1,\dots,x_n}(\mu),
\]

the sum being over the n-tuples of distinct points of supp(μ). Let N^{≠n}_{S(N)} denote the set of the n-tuples of n distinct points of N|_{S(N)}. Since φ^{x_1,…,x_n}(N) vanishes unless x_1, …, x_n are points of N|_{S(N)},

\[
\chi(N)=\frac{1}{\lambda^{n}}\sum_{(x_1,\dots,x_n)\in N^{\neq n}_{S(N)}}\varphi^{x_1,\dots,x_n}(N)
\]


(see (20)–(22)), we know that χ(N) is F_S-measurable. Moreover, for each n ≥ 1,

\[
\chi(N)=\mathrm E\Big[\Big(\frac{N_{S(N)}}{\lambda}\Big)^{n}\varphi^{X_1'',\dots,X_n''}(N)\ \Big|\ N\Big]
\tag{36}
\]

(see (22)–(23)). Assume the inductive hypothesis f^{(n)}(λ) = E[χ(N)] for some n ≥ 1. Fix α ∈ (2, γ). By Lemma 5 we have E[|χ(N)|^α] < ∞. Let X''_{n+1} be a random variable on ℝ^d × M such that, given N|_{S(N)}, X''_{n+1} is independent of (X''_i)_{1≤i≤n} and uniformly distributed on the collection N|_{S(N)}. By Proposition 3 we obtain

\[
f^{(n+1)}(\lambda)=\mathrm E\Big[\frac{N_{S(N)}}{\lambda}\,D^{-}_{X_{n+1}''}\chi(N)\Big].
\]

The conclusion follows noticing that, by (13) and (36), we have

\[
\begin{aligned}
\mathrm E\big[D^{-}_{X_{n+1}''}\chi(N)\ \big|\ N\big]
&=\mathrm E\Big[\Big(\frac{N_{S(N)}}{\lambda}\Big)^{n}\big(\varphi^{X_1'',\dots,X_n''}(N)-\varphi^{X_1'',\dots,X_n''}(N-\delta_{X_{n+1}''})\big)\ \Big|\ N\Big]\\
&=\mathrm E\Big[\Big(\frac{N_{S(N)}}{\lambda}\Big)^{n}\varphi^{X_1'',\dots,X_{n+1}''}(N)\ \Big|\ N\Big].
\end{aligned}
\]

4.4. Proof of Theorem 2

For ease of notation, we again set E = E_λ. In view of Propositions 1 and 3, it is sufficient to show that

\[
\mathrm E\big[\,|S(N)|\,D^{+}_{X}\varphi(N)\,\big]=\mathrm E\Big[\frac{N_{S(N)}}{\lambda}\,D^{-}_{X''}\varphi(N)\Big].
\]

Arguing as for (24), we have

\[
\mathrm E[D^{+}_{X}\varphi(N)\mid N]=\frac{1}{|S(N)|}\int_{\mathbb R^{d}\times M}D^{+}_{x}\varphi(N)\,\Lambda(dx)\quad\text{a.s.}
\]

Therefore,

\[
\mathrm E\big[\,|S(N)|\,D^{+}_{X}\varphi(N)\,\big]=\int_{\mathbb R^{d}\times M}\mathrm E[D^{+}_{x}\varphi(N)]\,\Lambda(dx).
\tag{37}
\]

On the other hand, using the same argument as for (29) and the Slivnyak–Mecke theorem (see, for instance, [10]), we obtain

\[
\mathrm E\Big[\int_{S(N)\times M}D^{-}_{x}\varphi(N)\,N(dx)\Big]
=\mathrm E\Big[\int_{\mathbb R^{d}\times M}D^{-}_{x}\varphi(N)\,N(dx)\Big]
=\lambda\int_{\mathbb R^{d}\times M}\mathrm E[D^{+}_{x}\varphi(N)]\,\Lambda(dx).
\tag{38}
\]

The conclusion follows by (30), (37), and (38); note that (38) does not depend on the monotonicity of S.


4.5. Proof of Theorem 3

As usual, set E = E_λ. Let ψ and χ be the functionals defined in the proofs of Proposition 2 and Proposition 4, respectively. Equations (14)–(15) will follow if we prove that

\[
\mathrm E[\psi(N)]=\mathrm E[\chi(N)].
\tag{39}
\]

Indeed, by the proof of Proposition 2, if S is nonincreasing, we have f^{(n)}(λ) = E[ψ(N)], and, by the proof of Proposition 4, if S is nondecreasing, we have f^{(n)}(λ) = E[χ(N)]. Equality (39) follows since, by the extended Slivnyak–Campbell theorem (see [25]) and the invariance by permutation of φ^{x_1,…,x_n}(μ), we have

\[
\begin{aligned}
\mathrm E[\chi(N)]
&=\frac{1}{\lambda^{n}}\,\mathrm E\bigg[\sum_{(x_1,\dots,x_n)\in N^{\neq n}_{S(N)}}\varphi^{x_1,\dots,x_n}(N)\bigg]\\
&=\int_{(\mathbb R^{d}\times M)^{n}}\mathrm E\Big[\varphi^{x_1,\dots,x_n}\Big(N+\sum_{i=1}^{n}\delta_{x_i}\Big)\Big]\,\Lambda(dx_1)\cdots\Lambda(dx_n)\\
&=\int_{(\mathbb R^{d}\times M)^{n}}\mathrm E\big[\varphi^{x_1,\dots,x_n}(N)\big]\,\Lambda(dx_1)\cdots\Lambda(dx_n)\\
&=\mathrm E[\psi(N)],
\end{aligned}
\]

where we have used (11). It remains to show (16). To this end, we write

\[
\chi(N)=\frac{1}{\lambda^{n}}\sum_{(x_1,\dots,x_n)\in N^{\neq n}_{S(N)}}\varphi^{x_1,\dots,x_n}(N)
=\frac{N_{S(N)}!}{\lambda^{n}\,(N_{S(N)}-n)!}\,\mathrm E\big[\varphi^{X_1'',\dots,X_n''}(N)\ \big|\ N\big],
\]

where the latter equality follows from the invariance by permutations of φ^{x_1,…,x_n}(μ) and the fact that φ^{x_1,…,x_n}(μ) = 0 if the points x_1, …, x_n are not distinct.

5. Applications

5.1. Stochastic geometry

Stabilizing functionals are widely used in stochastic geometry. This class of functionals was first introduced in [20] and further developed by Penrose and Yukich (see, for instance, [27] and [28]). Assumption (2) is closely related to assuming that φ is stabilizing. The main difference is that in (2) we require that S is a stopping set. Thus, stochastic geometry is a natural field of application of Theorems 1, 2, and 3. In the next two subsections we develop two examples of application in this field.

5.1.1. Cluster in subcritical continuum percolation. Let N = Σ_{n≥1} δ_{(X_n,Z_n)} be an IMHPP on ℝ^d of intensity λ with marks in [0, r], r > 0. Consider the Boolean model

\[
\Xi=\bigcup_{n\ge 1}B(X_n,Z_n).
\]


Continuum percolation deals with the existence of an infinite connected component in Ξ. It is well known that there exists a critical value of λ, say λ_c > 0, such that if λ < λ_c then a.s. there is no infinite connected component in Ξ, and if λ > λ_c then a.s. there is a unique infinite connected component in Ξ (see [23] as a general reference on continuum percolation).

Denote by W(N) the connected component (or cluster) of Ξ containing the origin (note that W(N) is possibly empty) and, with a slight abuse of notation, denote by N|_{W(N)} the restriction of the random measure N to the cluster W(N) (this is indeed an abuse of notation, since elsewhere in this paper μ|_B denotes the set of points of μ on B × M). If λ < λ_c then W(N) is a.s. a compact set. We define φ(N) = 1(N|_{W(N)} ∈ A) for some measurable set A ⊆ 𝒩. It is of general interest to analyze the function

\[
f(\lambda)=\mathrm E_\lambda[\varphi(N)],\qquad \lambda\in(0,\lambda_c).
\]
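A Monte Carlo sketch of f(λ) for the Boolean model (our own illustration, not from the paper; two dimensions, degenerate marks Z_n ≡ r, and arbitrary subcritical-looking parameter values). For validation we take the event A = {the cluster is empty}, whose probability is exactly e^{−λπr²}, since the origin is uncovered precisely when no point lies within distance r of it.

```python
import math, random

def poisson(mean):
    # Knuth's method; adequate for the moderate means used here
    l, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

def cluster_size(points, r):
    # Number of balls in the connected component of the Boolean model
    # containing the origin (0 if no ball covers the origin).
    seeds = [i for i, (x, y) in enumerate(points) if math.hypot(x, y) <= r]
    seen, stack = set(seeds), list(seeds)
    while stack:
        xi, yi = points[stack.pop()]
        for j, (xj, yj) in enumerate(points):
            if j not in seen and math.hypot(xi - xj, yi - yj) <= 2 * r:
                seen.add(j)
                stack.append(j)
    return len(seen)

random.seed(1)
lam, r, L, n_rep = 0.3, 0.5, 6.0, 4000   # intensity, radius, half-window, replications
empty = 0
for _ in range(n_rep):
    pts = [(random.uniform(-L, L), random.uniform(-L, L))
           for _ in range(poisson(lam * (2 * L) ** 2))]
    if cluster_size(pts, r) == 0:
        empty += 1

# P(W(N) is empty) = P(no ball covers the origin) = exp(-lam * pi * r^2)
exact = math.exp(-lam * math.pi * r * r)
assert abs(empty / n_rep - exact) < 0.03
```

For general events A one would replace the emptiness check by the corresponding functional of the cluster; the window must then be large enough that boundary effects on W(N) are negligible.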

In this paragraph we provide a continuous analog of Russo's formula for Poisson point fields; see [35]. More precisely, for μ ∈ 𝒩, define the sets of pivotal points of A by

\[
\mathcal P^{+}(\mu)=\{x\in(\mathbb R^{d}\times[0,r])\setminus\operatorname{supp}(\mu):\ \varphi(\mu+\delta_x)=1,\ \varphi(\mu)=0\}
\cup\{x\in\operatorname{supp}(\mu):\ \varphi(\mu)=1,\ \varphi(\mu-\delta_x)=0\}
\]

and

\[
\mathcal P^{-}(\mu)=\{x\in(\mathbb R^{d}\times[0,r])\setminus\operatorname{supp}(\mu):\ \varphi(\mu+\delta_x)=0,\ \varphi(\mu)=1\}
\cup\{x\in\operatorname{supp}(\mu):\ \varphi(\mu)=0,\ \varphi(\mu-\delta_x)=1\}.
\]

The following theorem holds.

Theorem 4. The function f(λ) = P_λ(N|_{W(N)} ∈ A) is analytic on the interval (0, λ_c) and

\[
f'(\lambda)=\mathrm E_\lambda\big[\Lambda(\mathcal P^{+}(N))-\Lambda(\mathcal P^{-}(N))\big]
=\frac{1}{\lambda}\,\mathrm E_\lambda\big[N_{\mathcal P^{+}(N)}-N_{\mathcal P^{-}(N)}\big]
\qquad\text{for }0<\lambda<\lambda_c.
\]

The proof of Theorem 4 is based on Theorems 1 and 2. The main difficulty in applying these theorems to f(λ) is that W is not a stopping set. This difficulty can be circumvented as follows. The Minkowski addition is defined by

\[
A\oplus B=\{a+b:\ a\in A,\ b\in B\},\qquad A,B\in\mathcal K
\]

(see [22] for a complete treatment of the Minkowski operations). Lemma 6, below, provides a stopping set S which satisfies assumptions (2) and (4).

Lemma 6. Define the random compact set S(N) = W(N) ⊕ B_r. Then:

(i) S is a stopping set such that W(N) (and therefore φ(N)) is F_S-measurable;

(ii) for each λ > 0, there exists s = s(λ) > 0 such that E_λ[exp(s|B_{S(N)}|)] < ∞.

Proof. We first prove that S is a stopping set. First note that if X_k = (X_k, Z_k) ∈ supp(N|_{W(N)}) then either X_k is at distance at most Z_k + Z_n from at least one other point X_n with X_n = (X_n, Z_n) ∈ supp(N|_{W(N)}), or X_k is at distance at most Z_k from the origin. Note that

\[
W(N)\subseteq\bigcup_{X_n\in\operatorname{supp}(N|_{W(N)})}(X_n+B_{Z_n})
\]

and, therefore, by the definition of S(N) we have

\[
S(N)\subseteq\Big(\bigcup_{X_n\in\operatorname{supp}(N|_{W(N)})}(X_n+B_{Z_n})\Big)\oplus B_r.
\]

Letting ‖·‖ denote the Euclidean norm in ℝ^d and A^c denote the complement of the set A, we deduce that, for any K ∈ 𝒦,

{S(N) ⊆ K}^c = {there exist y ∈ K^c and X_1, …, X_n ∈ supp(N) such that ‖X_1‖ ≤ Z_1, ‖X_{i+1} − X_i‖ ≤ Z_i + Z_{i+1} for 1 ≤ i ≤ n − 1, and ‖X_n − y‖ ≤ Z_n + r}.

Now set, in the above expression, m = min{k ∈ {1, …, n + 1} : X_k ∈ K^c}, with the convention X_{n+1} = y. Since, by assumption, Z_{i+1} ≤ r, the inequality ‖X_{i+1} − X_i‖ ≤ Z_i + Z_{i+1} implies that ‖X_{i+1} − X_i‖ ≤ Z_i + r; hence, the event {S(N) ⊆ K}^c can be rewritten as

{S(N) ⊆ K}^c = {there exist X_1, …, X_m ∈ supp(N) such that X_1, …, X_m ∈ K, ‖X_1‖ ≤ Z_1, ‖X_i − X_{i+1}‖ ≤ Z_i + Z_{i+1} for 1 ≤ i ≤ m − 1, and (X_m + B_{Z_m+r}) ∩ K^c ≠ ∅}.

It follows that {S(N) ⊆ K}^c ∈ F_K and, thus, S is a stopping set. Now, by construction, N|_{W(N)} is F_S-measurable. We deduce that W(N) is also F_S-measurable, and part (i) is proved. It remains to prove part (ii). Define r_W(N) = inf{ρ > 0 : W(N) ⊆ B_ρ}. Then B_{S(N)} = B_{r_W(N)+r}. Thus, part (ii) is a consequence of the exponential decrease of the subcritical cluster; see Section 3.7 and Lemma 3.3 of [23].

Proof of Theorem 4. Clearly, the functional φ satisfies assumption (3). Moreover, by Lemma 6, assumptions (2) and (4) are satisfied with S(N) = W(N) ⊕ B_r. Thus, by Theorem 1, the function f(·) is analytic on (0, λ_c). Note that S is nondecreasing; thus, using (8), it follows that

\[
\begin{aligned}
f'(\lambda)&=\mathrm E_\lambda\big[\,|S(N)|\big(\mathbf 1(X\in\mathcal P^{+}(N))-\mathbf 1(X\in\mathcal P^{-}(N))\big)\big]\\
&=\mathrm E_\lambda\big[\,|S(N)|\,\mathrm E_\lambda[\mathbf 1(X\in\mathcal P^{+}(N))-\mathbf 1(X\in\mathcal P^{-}(N))\mid N]\big]\\
&=\mathrm E_\lambda\Big[\,|S(N)|\,\frac{1}{|S(N)|}\int_{S(N)\times[0,r]}\big(\mathbf 1(x\in\mathcal P^{+}(N))-\mathbf 1(x\in\mathcal P^{-}(N))\big)\Lambda(dx)\Big]\\
&=\mathrm E_\lambda\big[\Lambda(\mathcal P^{+}(N))-\Lambda(\mathcal P^{-}(N))\big],
\end{aligned}
\]

where X = (ξ, ζ) is defined in Theorem 2. In particular, the first equality above follows since, given S(N), ξ is uniformly distributed on S(N) and, therefore, X ∉ supp(N) a.s. This proves the first equality of the claim. The second equality of the claim can be proved similarly, using (9).

5.1.2. Typical cell of the Poisson–Voronoi tessellation. Let A ⊂ ℝ^d be a locally finite point set and let μ_A = Σ_{a∈A} δ_a. The Voronoi cell with respect to A with nucleus y ∈ A is by definition

\[
C(y,\mu_A)=\{x\in\mathbb R^{d}:\ \|x-y\|\le\|x-a\|\ \text{for all }a\in A\}.
\]

Let N = Σ_{n≥1} δ_{X_n} be a Poisson point process on ℝ^d of intensity λ. The Poisson–Voronoi cell with nucleus X_k is by definition the random convex set C(X_k, N) (see, for instance, [31]). Let 𝒦_0^d be the set of convex bodies of ℝ^d containing the origin, equipped with the Hausdorff metric and the related Borel σ-field. Moreover, let N_0 be the point process obtained from N by adding a point at the origin. The typical Poisson–Voronoi cell is defined as C(0, N_0). This cell is called typical since, by Slivnyak's theorem (see, for instance, [31]), P_λ^0(C(0, N) ∈ A) = P_λ(C(0, N_0) ∈ A), where P_λ^0 is the Palm version of P_λ and A is a Borel set of 𝒦_0^d.

Let Φ be a measurable functional from 𝒦_0^d to ℝ. Define φ(N) = Φ(C(0, N_0)) and

\[
f(\lambda)=\mathrm E_\lambda[\Phi(C(0,N_0))]=\mathrm E_\lambda[\varphi(N)].
\]

The Voronoi flower V(N) is the union of the closed balls that have the origin and d points of N on their boundary and no points of N inside. It is known that the centers of the balls which form V(N) are the vertices of the typical Poisson–Voronoi cell. Then V is a stopping set and φ(N) = Φ(C(0, N_0)) is F_V-measurable (see, for instance, [36]). It is also known that B_{V(N)} satisfies (4) for all λ > 0; indeed, by Lemma 1 and Remark 5 of [13], it follows that, for all λ > 0,

\[
\mathrm P_\lambda\big(|B_{V(N)}|>2^{d}t\big)\le\exp(-c_d t)\qquad\text{for each }t>0,
\]

for some positive constant c_d depending only on the dimension d. Furthermore, it can be easily checked that the mapping V is monotone nonincreasing. Hence, our results can be applied to f(λ) provided that E_λ[|Φ(C(0, N_0))|^γ] < ∞ for some γ = γ(λ) > 2.

Note that this latter condition holds if φ(N) = Φ(C(0, N_0)) = 1(C(0, N_0) ∈ A) for some measurable set A ⊆ 𝒦_0^d. In particular, in this case the following analog of Theorem 4 holds. For each μ ∈ 𝒩 such that 0 ∈ supp(μ), consider the following sets of pivotal points of the measurable set A ⊆ 𝒦_0^d:

\[
\mathcal P^{+}(\mu)=\{x\in\mathbb R^{d}\setminus\operatorname{supp}(\mu):\ C(0,\mu+\delta_x)\in A,\ C(0,\mu)\notin A\}
\cup\{x\in\operatorname{supp}(\mu):\ C(0,\mu)\in A,\ C(0,\mu-\delta_x)\notin A\}
\]

and

\[
\mathcal P^{-}(\mu)=\{x\in\mathbb R^{d}\setminus\operatorname{supp}(\mu):\ C(0,\mu+\delta_x)\notin A,\ C(0,\mu)\in A\}
\cup\{x\in\operatorname{supp}(\mu):\ C(0,\mu)\notin A,\ C(0,\mu-\delta_x)\in A\}.
\]

The following theorem holds.

Theorem 5. The function f(λ) = P_λ(C(0, N_0) ∈ A) is analytic on (0, ∞) and

\[
f'(\lambda)=\mathrm E_\lambda\big[\,|\mathcal P^{+}(N)|-|\mathcal P^{-}(N)|\,\big]
=\frac{1}{\lambda}\,\mathrm E_\lambda\big[N_{\mathcal P^{+}(N)}-N_{\mathcal P^{-}(N)}\big],\qquad\lambda>0.
\]

The proof is similar to that of Theorem 4 and is therefore omitted.

5.2. Insurance

In this subsection we apply our results to risk processes described in terms of Poisson shot noise and compound Poisson processes. The former were introduced in [17] and [18] to model delayed claims; the latter correspond to the classical Cramér–Lundberg model (see, for instance, [2]). The main results of this subsection are Theorems 6 and 7. Under suitable light-tailed conditions on the claims, Theorems 6 and 7 respectively provide closed-form formulae for the nth-order derivatives of the ruin probability of risk processes with delayed (and undelayed) claims, and an efficient Monte Carlo estimator for the first-order derivative of the ruin probability


of the classical Cramér–Lundberg model. The estimator proposed in Section 5.2.2 is an alternative to that of [3] (see Remark 2).

We now briefly recall the notion of an asymptotically optimal estimator, which will be used in this subsection. Let z(u) be a positive function such that z(u) → 0 as u → ∞. To obtain an asymptotically efficient estimator of z(u), we look for an unbiased estimator r̂_u of z(u) whose relative error is asymptotically bounded. In the following we focus on a weaker concept of efficiency. We say that r̂_u is asymptotically optimal (as u → ∞) if

\[
\liminf_{u\to\infty}\frac{\log\sqrt{\mathrm E[\hat r_u^{\,2}]}}{\log z(u)}\ \ge\ 1
\]

(see [2] and [3]).

All the random variables considered in this subsection are defined on a measurable space (Ω, F). Here we consider marked point processes on [0, ∞) with marks in [0, ∞). We endow (Ω, F) with the family of probability measures {P_λ}_{λ>0} such that, under P_λ, X_1 < X_2 < ⋯ are the points of a homogeneous Poisson process on [0, ∞) with intensity λ > 0, and {Z_n}_{n≥1} are i.i.d. nonnegative random variables with distribution Q, independent of the Poisson process. We denote by N the IMHPP Σ_{n≥1} δ_{(X_n,Z_n)}, by N_t the number of points of N on [0, t] × [0, ∞), by N|_t the set of points of supp(N) on [0, t] × [0, ∞), and by E_λ the expectation with respect to P_λ.

5.2.1. Derivatives of the ruin probability of risk processes with delayed claims. Consider the following risk model. Let u − Y(t) be the surplus of the insurance portfolio described by the shot noise process with drift

\[
Y(t)=\sum_{n\ge 1}H(t-X_n,Z_n)\,\mathbf 1_{(0,t]}(X_n)-ct,\qquad t\ge 0.
\]

Here u > 0 is the initial capital, c > 0 is the premium rate (which is assumed to be constant), and H : ℝ × [0, ∞) → [0, ∞) is a nondecreasing continuous function such that H(t, z) = 0 for t < 0. Throughout this subsection we assume that H(∞, z) = z and P_λ(Z_1 > 0) > 0. Since the law of Z_1 under P_λ does not depend on λ, from now on, for a measurable function g, we set E_λ[g(Z_1)] = E[g(Z_1)].

Note that the function H models the delay in claim settlement, in the sense that the insurance company honors a claim arriving at time X_n by paying the quantity H(t − X_n, Z_n) at time t. The associated ruin probability is defined by

\[
f_u(\lambda)=\mathrm P_\lambda(T_u(N)<\infty),\qquad u>0,
\]

where

\[
T_u(N)=\inf\{t>0:\ Y(t)>u\},\qquad T_u(N)=\infty\ \text{if}\ \{\cdots\}=\varnothing,
\]

is the ruin time. Brémaud [7] proved that, under the assumptions

\[
\kappa(\theta)=\mathrm E[\exp(\theta Z_1)]<\infty\quad\text{for all }\theta\text{ in a neighborhood of }0\text{, say }(0,\eta)\text{ with }\eta\le\infty,
\tag{40}
\]

and

\[
c>\lambda\,\mathrm E[Z_1],
\tag{41}
\]

it holds that

\[
f_u(\lambda)\le e^{-wu}\qquad\text{for all }u>0
\]


and

\[
\lim_{u\to\infty}-\frac{1}{u}\log f_u(\lambda)=w,
\]

where w (called the Lundberg parameter) is the unique positive zero of the function

\[
\Lambda(\theta)=\lambda(\kappa(\theta)-1)-c\theta.
\]
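For exponential claims Z_1 ~ Exp(β) one has κ(θ) = β/(β − θ) and the Lundberg parameter is w = β − λ/c in closed form. The sketch below (our own illustration, with arbitrary parameter values satisfying (40) and (41)) finds the positive zero of Λ(θ) by bisection and compares it with this closed form.

```python
import math

lam, c, beta = 1.0, 1.5, 1.0       # intensity, premium rate, Exp(beta) claims

def kappa(theta):
    # moment generating function of an Exp(beta) claim, finite for theta < beta
    return beta / (beta - theta)

def Lambda(theta):
    return lam * (kappa(theta) - 1.0) - c * theta

# Lambda(0) = 0, Lambda is convex with Lambda'(0) = lam/beta - c < 0, and
# Lambda(theta) -> +infinity as theta -> beta, so the positive zero w can
# be bracketed in (0, beta) and found by bisection.
lo, hi = 1e-9, beta - 1e-9
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if Lambda(mid) < 0:
        lo = mid
    else:
        hi = mid
w = 0.5 * (lo + hi)

# Closed form for exponential claims: w = beta - lam/c
assert abs(w - (beta - lam / c)) < 1e-9
```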

(Note that the function Λ should not be confused with the intensity measure Λ considered in the previous sections; in the remaining part of the paper, the symbol Λ will no longer be used to denote the intensity measure.) Thus, under (40) and (41), the event {T_u(N) < ∞} is rare as u → ∞, and this causes problems if we want to estimate f_u(λ) by an efficient Monte Carlo simulation (we refer the reader to [9] for an introduction to rare event simulation). Such difficulties can be overcome using importance sampling. Define the stochastic process

\[
C(t)=\sum_{n\ge 1}Z_n\,\mathbf 1_{(0,t]}(X_n),
\]

and consider the family of laws {P_λ^θ : κ(θ) < ∞} defined as follows: the probability measure P_λ^θ is absolutely continuous with respect to the original law P_λ on the σ-field F_{[0,t]} for each t > 0, and the corresponding density is

\[
\frac{d\mathrm P_\lambda^{\theta}}{d\mathrm P_\lambda}\bigg|_{F_{[0,t]}}
=\frac{e^{\theta C(t)}}{\mathrm E_\lambda[e^{\theta C(t)}]}
=\exp\big(\theta C(t)-\lambda t(\kappa(\theta)-1)\big).
\]

We point out (see, for instance, [2]) that, under P_λ^θ, the process {X_n}_{n≥1} is a homogeneous Poisson process with intensity λκ(θ), independent of the sequence {Z_n}_{n≥1} of i.i.d. random variables, whose common law Q_θ is absolutely continuous with respect to their common law Q under P_λ, with density

\[
\frac{dQ_\theta}{dQ}(z)=\frac{e^{\theta z}}{\kappa(\theta)}.
\]

Throughout this subsection we denote by E_λ^θ the expectation under P_λ^θ. Furthermore, since the law of Z_1 under P_λ^θ does not depend on λ, for a measurable function g, we set E_λ^θ[g(Z_1)] = E^θ[g(Z_1)].

The following result can be found in [33].

Proposition 5. ([33].) Assume that (40) and (41) hold. Then P_λ^w(T_u(N) < ∞) = 1 for all u > 0 and, under P_λ^w,

\[
\hat r_u(N)=\exp\big(-w\big(C(T_u(N))-cT_u(N)\big)\big)
\]

is an asymptotically optimal estimator of f_u(λ).
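A runnable sketch of this importance-sampling scheme for the classical (undelayed) compound Poisson case with exponential claims, where everything is available in closed form (our own illustration; all parameter values are arbitrary). Under P^w the arrivals have intensity λκ(w) and the claims are Exp(β − w); ruin is then certain, each path contributes exp(−w(C(T_u) − cT_u)), and the average is compared with the exact ruin probability (λ/(cβ))e^{−wu} for exponential claims.

```python
import math, random

random.seed(3)
lam, c, beta, u = 1.0, 1.5, 1.0, 5.0    # model parameters and initial capital
w = beta - lam / c                       # Lundberg parameter for Exp(beta) claims
kappa_w = beta / (beta - w)

# Under P^w: arrival intensity lam*kappa(w), claim sizes Exp(beta - w)
tilted_rate = lam * kappa_w
tilted_claim = beta - w                  # = lam/c

n_rep, acc = 20000, 0.0
for _ in range(n_rep):
    t, claims = 0.0, 0.0
    while claims - c * t <= u:           # run until ruin: Y(t) > u
        t += random.expovariate(tilted_rate)
        claims += random.expovariate(tilted_claim)
    acc += math.exp(-w * (claims - c * t))
estimate = acc / n_rep

# Exact ruin probability for exponential claims: (lam/(c*beta)) * exp(-w*u)
exact = (lam / (c * beta)) * math.exp(-w * u)
assert abs(estimate - exact) / exact < 0.05
```

Note that each path's estimator equals e^{−w(u + overshoot)}, so its relative variance stays bounded in u, which is the mechanism behind the asymptotic optimality of Proposition 5.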

Let x_n = (x_n, z_n) ∈ (0, ∞) × (0, ∞), n ≥ 1. For each locally finite counting measure μ = Σ_{n≥1} δ_{x_n}, define the functionals

\[
\varphi_\theta(\mu,\lambda')=\exp\Big(-\theta\sum_{n\ge 1}z_n\,\mathbf 1_{(0,T_u(\mu)]}(x_n)+\lambda'(\kappa(\theta)-1)\,T_u(\mu)\Big),\qquad 0\le\theta<w,
\]

and

\[
\varphi_w(\mu)=\exp\Big(-w\Big(\sum_{n\ge 1}z_n\,\mathbf 1_{(0,T_u(\mu)]}(x_n)-cT_u(\mu)\Big)\Big).
\]


Moreover, we consider the functionals φ_w^{x_1,…,x_n}(μ) and φ_w^{X''_1,…,X''_n}, which are respectively defined by (10) and (11) with φ_w in place of φ.

The following theorem provides closed-form expressions for the nth-order derivatives of the ruin probability. As usual, we use the standard conventions that a sum over an empty set is 0 and that k!/(k − n)! = 0 for n > k.

Theorem 6. Under the assumptions of Proposition 5, we find that, for fixed u > 0, the function f_u(·) is analytic in a neighborhood of λ and that, for all n ≥ 1,

\[
\begin{aligned}
f_u^{(n)}(\lambda)
&=(\kappa(w)-1)^{n}\,\mathrm E_\lambda^{w}\big[(T_u(N))^{n}\varphi_w(N)\big]
=\mathrm E_\lambda^{w}\big[(T_u(N))^{n}\varphi_w^{X_1,\dots,X_n}(N)\big]\\
&=\mathrm E_\lambda^{w}\Big[\Big(\frac{N_{T_u(N)}}{\lambda}\Big)^{n}\varphi_w^{X_1',\dots,X_n'}(N)\Big]
=\mathrm E_\lambda^{w}\Big[\frac{N_{T_u(N)}!}{\lambda^{n}\,(N_{T_u(N)}-n)!}\,\varphi_w^{X_1'',\dots,X_n''}(N)\Big],
\end{aligned}
\]

where, for 1 ≤ i ≤ n, X_i = (ξ_i, ζ_i), X'_i = (ξ'_i, ζ'_i), and X''_i = (ξ''_i, ζ''_i) are random variables on (0, ∞) × (0, ∞). Given T_u(N), (ξ_i)_{1≤i≤n} are independent, uniformly distributed on [0, T_u(N)], and independent of N; (ζ_i)_{1≤i≤n} are independent, independent of N and (ξ_i)_{1≤i≤n}, and have law Q_w. Given the collection of points N|_{T_u(N)}, (X'_i)_{1≤i≤n} are independent and uniformly distributed on the collection; {X''_1, …, X''_n} is uniformly distributed on the set of subsets of n distinct points of N|_{T_u(N)}.

To prove Theorem 6, we need Lemmas 7 and 8, below. Here we use the notion of a large deviation principle, for which we refer the reader to [12].

Lemma 7. Assume that (40) holds. If, moreover, the function θ ↦ κ(θ), θ ∈ (0, η), is steep, namely lim_{n→∞} κ'(θ_n) = ∞ whenever {θ_n} is a sequence converging to η, then the stochastic process {Y(t)/t}_{t>0} satisfies a large deviation principle with rate function Λ*(x) = sup_{θ∈ℝ}(θx − Λ(θ)).

The proof of Lemma 7 can be found in [21] (see Proposition 3.1 therein).

Lemma 8. Under the assumptions of Proposition 5, we find that, for fixed u > 0 and θ ∈ (0, w] such that λκ'(θ) − c > 0, there exists s = s(λ) > 0 such that E_λ^θ[exp(sT_u(N))] < ∞.

Proof. In this proof we write T_u in place of T_u(N). Since

\[
\kappa_\theta(\alpha)=\mathrm E^{\theta}[\exp(\alpha Z_1)]=\frac{\kappa(\alpha+\theta)}{\kappa(\theta)},
\]

it follows from the assumptions that κ_θ(α) < ∞ for α ∈ (0, η − θ) and that the function α ↦ κ_θ(α) is steep. Therefore, by Lemma 7, the stochastic process {Y(t)/t}_{t>0} satisfies a large deviation principle with respect to P_λ^θ with rate function Λ_θ*(x) = sup_{α∈ℝ}(αx − Λ_θ(α)), where Λ_θ(α) = λκ(θ)(κ_θ(α) − 1) − cα. Since λκ'(θ) − c > 0 by assumption, we can choose ρ ∈ (0, κ'(θ)) such that y := λρ − c > 0. By the large deviation principle of {Y(t)/t}_{t>0} with respect to P_λ^θ and the regularity properties of the rate function Λ_θ*(·), we have

\[
\lim_{t\to\infty}\frac{1}{t}\log\mathrm P_\lambda^{\theta}\Big(\frac{Y(t)}{t}\le y\Big)=-\Lambda_\theta^{*}(y).
\tag{42}
\]


Moreover, for any u > 0, there exists t_1 = t_1(u, y) such that

\[
\mathrm P_\lambda^{\theta}(T_u>t)\le\mathrm P_\lambda^{\theta}(Y(t)\le u)\le\mathrm P_\lambda^{\theta}\Big(\frac{Y(t)}{t}\le y\Big)\qquad\text{for all }t\ge t_1.
\tag{43}
\]

Therefore, by (42) and (43), it follows that, for any ε, u > 0, there exists t̄ = t̄(ε, u, y) such that

\[
\mathrm P_\lambda^{\theta}(T_u>t)\le\exp\big(-(\Lambda_\theta^{*}(y)-\varepsilon)t\big)\qquad\text{for all }t\ge\bar t.
\tag{44}
\]

Now take 0 < s < Λ_θ*(y) − ε. The conclusion follows noticing that, by (44), we have

\[
\mathrm E_\lambda^{\theta}[\exp(sT_u)]
=1+s\int_{0}^{\infty}e^{st}\,\mathrm P_\lambda^{\theta}(T_u>t)\,dt
\le e^{s\bar t}+s\int_{\bar t}^{\infty}\exp\big(-((\Lambda_\theta^{*}(y)-\varepsilon)-s)t\big)\,dt
<\infty.
\]
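The integral identity used in the last display, E[e^{sT}] = 1 + s∫₀^∞ e^{st} P(T > t) dt, can be verified numerically for a distribution where both sides are explicit; T ~ Exp(μ) is a hypothetical choice made purely for illustration.

```python
import math

mu, s = 2.0, 0.7                 # T ~ Exp(mu), with s < mu so E[exp(sT)] is finite
exact = mu / (mu - s)            # closed form of E[exp(sT)] for exponential T

# 1 + s * integral_0^inf e^{st} P(T > t) dt, with P(T > t) = e^{-mu t},
# approximated by the trapezoidal rule on [0, T_max]
n, T_max = 100000, 40.0
h = T_max / n
vals = [math.exp((s - mu) * i * h) for i in range(n + 1)]
integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

assert abs(1.0 + s * integral - exact) < 1e-4
```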

Proof of Theorem 6. We start by noting that, by the properties of the function θ ↦ Λ(θ), θ ∈ (0, η), there exists a strictly increasing sequence {θ_k} converging to w such that θ_k ∈ (0, w) and λκ'(θ_k) − c > 0. By the implicit function theorem, the function λ ↦ w(λ) is continuous. Therefore, for each k, there exists a neighborhood of λ, say I_k = (λ − ε_k, λ + ε_k), such that, for all λ' ∈ I_k, we have θ_k < w(λ') and λ'κ'(θ_k) − c > 0. We note that, since θ_k ∈ (0, w) is such that λκ'(θ_k) − c > 0, it holds that P_λ^{θ_k}(T_u(N) < ∞) = 1 (see Lemma 3.2 of [33] for details). Therefore,

\[
f_u(\lambda)=\mathrm E_\lambda[\mathbf 1(T_u(N)<\infty)]=\mathrm E_\lambda^{\theta_k}[\varphi_{\theta_k}(N,\lambda)].
\]

Note that [0, T_u] is a stopping set. Furthermore, the functional φ_{θ_k}(N, λ) is F_{[0,T_u]}-measurable and absolutely monotonic in λ (see Remark 1 for the definition of an absolutely monotonic function). Also, note that Λ(θ) ≤ 0 for each θ ∈ [0, w]. Therefore, by the definition of T_u(N) and the assumption H(t, z) ↗ z as t ↗ ∞, we find that, for each u,

\[
\varphi_\theta(N,\lambda)\le e^{-\theta u}\quad\text{and}\quad\varphi_w(N)\le e^{-wu}.
\]

In particular, the functional φ_{θ_k}(N, λ) is bounded. Therefore, by Lemma 8 and Remark 1, it follows that f_u(·) is analytic on I_k. Consider the functions

\[
F_k(x,y)=\mathrm E_x^{\theta_k}[\varphi_{\theta_k}(N,y)],\qquad (x,y)\in I_k\times I_k.
\]

Using obvious notation, we will show later that

\[
\partial_x^{\,n}F_k(x,y)=\mathrm E_x^{\theta_k}\big[(T_u(N))^{n}\varphi_{\theta_k}^{X_1,\dots,X_n}(N,y)\big],\qquad n\ge 1,
\tag{45}
\]

and

\[
\partial_y^{\,n}F_k(x,y)=(\kappa(\theta_k)-1)^{n}\,\mathrm E_x^{\theta_k}\big[(T_u(N))^{n}\varphi_{\theta_k}(N,y)\big],\qquad n\ge 1.
\tag{46}
\]

Therefore,

\[
f_u^{(n)}(\lambda)=(\kappa(\theta_k)-1)^{n}\,\mathrm E_\lambda^{\theta_k}\big[(T_u(N))^{n}\varphi_{\theta_k}(N,\lambda)\big]
=\mathrm E_\lambda^{\theta_k}\big[(T_u(N))^{n}\varphi_{\theta_k}^{X_1,\dots,X_n}(N,\lambda)\big],\qquad n\ge 1.
\]


We now show that

\[
\lim_{k\to\infty}\mathrm E_\lambda^{\theta_k}\big[(T_u(N))^{n}\varphi_{\theta_k}^{X_1,\dots,X_n}(N,\lambda)\big]
=\mathrm E_\lambda^{w}\big[(T_u(N))^{n}\varphi_w^{X_1,\dots,X_n}(N)\big].
\tag{47}
\]

Using the exponential tilting, we have

\[
\begin{aligned}
&\big|\mathrm E_\lambda^{\theta_k}\big[(T_u(N))^{n}\varphi_{\theta_k}^{X_1,\dots,X_n}(N,\lambda)\big]-\mathrm E_\lambda^{w}\big[(T_u(N))^{n}\varphi_w^{X_1,\dots,X_n}(N)\big]\big|\\
&\quad\le\mathrm E_\lambda^{w}\Big[(T_u(N))^{n}\Big|\exp\big(-(w-\theta_k)\big(C(T_u(N))-cT_u(N)\big)-\Lambda(\theta_k)T_u(N)\big)\,
\varphi_{\theta_k}^{X_1,\dots,X_n}(N,\lambda)-\varphi_w^{X_1,\dots,X_n}(N)\Big|\Big].
\hspace{1em}\text{(48)}
\end{aligned}
\]

Note that the argument of the expectation in (48) is less than or equal to (2T_u(N))^n (exp(−Λ(θ_k) T_u(N)) + 1). By Lemma 8, there exists s = s(λ) > 0 such that E_λ^w[exp(sT_u(N))] < ∞. Fix δ ∈ (0, s) and choose k̄ = k̄(δ) such that, for all k ≥ k̄, it holds that 0 < −Λ(θ_k) < δ < s (such a k̄ exists since lim_{k→∞} Λ(θ_k) = Λ(w) = 0). Then, for all k ≥ k̄, the argument of the expectation in (48) is less than or equal to

\[
(2T_u(N))^{n}\big(\exp(\delta T_u(N))+1\big),
\]

which is integrable under P_λ^w since E_λ^w[exp(sT_u(N))] < ∞. Thus, (47) follows by the dominated convergence theorem. The limit

\[
\lim_{k\to\infty}\mathrm E_\lambda^{\theta_k}\big[(T_u(N))^{n}\varphi_{\theta_k}(N,\lambda)\big]
=\mathrm E_\lambda^{w}\big[(T_u(N))^{n}\varphi_w(N)\big]
\]

can be proved similarly. This shows the first equality in the statement. The other equalities can be proved as in Proposition 4 and Theorem 3.

It remains to show that F_k(·, ·) has the partial derivatives (45) and (46). Equality (45) follows by Proposition 2. Indeed, by Lemma 8, for each x ∈ I_k there exists s = s(x) > 0 such that E_x^{θ_k}[exp(sT_u(N))] < ∞, and the mapping S_u := [0, T_u] is nonincreasing. We now show (46) with n = 1; the general case follows along similar lines, reasoning by induction. If we justify the interchange of the limit and the expectation in

\[
\lim_{h\to 0^{+}}\mathrm E_x^{\theta_k}\Big[\frac{\varphi_{\theta_k}(N,y+h)-\varphi_{\theta_k}(N,y)}{h}\Big],
\]

then the right-hand derivative equals the right-hand side of (46). In fact, we can pass the limit inside the expectation because a straightforward computation gives

\[
\frac{\varphi_{\theta_k}(N,y+h)-\varphi_{\theta_k}(N,y)}{h}\le(\kappa(\theta_k)-1)\,T_u(N)\exp\big(h(\kappa(\theta_k)-1)T_u(N)\big).
\]

Here, again by Lemma 8, the right-hand side of the above inequality is integrable under P_x^{θ_k} for small enough h and, therefore, we can apply the dominated convergence theorem. Similarly, we can show that the left-hand derivative equals the right-hand side of (46). This completes the proof.

5.2.2. Classical risk processes: an efcient Monte Carlo algorithm for thefirst-order derivative of the ruin probability. The classical risk model is defined by the surplus u -Y (t) of the insurance portfolio described by the following compound process with drift:

Y(t) = \sum_{n \ge 1} Z_n \mathbf{1}_{(0,t]}(X_n) - ct.




The interpretation of the quantities in the above formula is exactly as in the previous subsection.
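As a concrete illustration, the surplus process and its ruin time can be simulated directly from the definition. The sketch below is not from the paper: it assumes $\mathrm{Exp}(\mu)$ claim sizes and illustrative parameter values, and it truncates the simulation at a finite horizon because, under the net profit condition $c > \lambda E[Z_1]$, ruin need not ever occur.

```python
import numpy as np

# Illustrative sketch (not from the paper): simulate the classical risk model
# Y(t) = sum_n Z_n 1{X_n <= t} - c*t and return its ruin time
# T_u = inf{t : Y(t) > u}. Y can cross the level u only at a claim arrival,
# so it suffices to test Y at the arrival epochs X_n. Exp(mu) claim sizes
# and all parameter values are illustrative assumptions.

def ruin_time(lam, c, mu, u, horizon, rng):
    t, s = 0.0, 0.0                      # current time and aggregate claims C(t)
    while True:
        t += rng.exponential(1.0 / lam)  # next arrival X_n (Poisson rate lam)
        if t >= horizon:
            return np.inf                # no ruin before the truncation horizon
        s += rng.exponential(1.0 / mu)   # claim size Z_n
        if s - c * t > u:                # Y(X_n) > u: ruin
            return t

rng = np.random.default_rng(0)
paths = [ruin_time(lam=1.0, c=1.1, mu=1.0, u=5.0, horizon=200.0, rng=rng)
         for _ in range(2000)]
ruin_frac = float(np.mean([t < np.inf for t in paths]))
```

For $\mathrm{Exp}(1)$ claims the ruin probability is known in closed form, $\psi_u(\lambda) = (\lambda/(c\mu)) e^{-(\mu - \lambda/c)u} \approx 0.58$ for these values, so `ruin_frac` should be close to that, up to Monte Carlo error and the horizon truncation.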

Moreover, we consider the same statistical assumptions and the same notation (clearly, the ruin probability $\psi_u(\lambda)$ and the ruin time $T_u(N)$ are now defined with respect to the classical risk process). For the Cramér–Lundberg model it is well known that, under the assumptions (40) and (41), we have

\lim_{u \to \infty} \frac{e^{wu}}{u}\, \psi_u'(\lambda) = \frac{c w (c - \lambda E[Z_1])}{\lambda (\lambda \kappa'(w) - c)^2}, \qquad (49)

where $w$ is the unique positive zero of the function $\Lambda(\cdot)$ (see Proposition 9.4 of [2]). Moreover, note that in the case of classical risk processes the corresponding Theorem 6 can be proved along similar lines, and provides the following unbiased estimator of $\psi_u'(\lambda)$ under $P_w$:

\hat{s}_u(N) = (\kappa(w) - 1)\, T_u(N)\, \varphi_w(N + \delta_X) + T_u(N)\, (\varphi_w(N + \delta_X) - \varphi_w(N)).

Here, $X = (\tau, \zeta)$ is a random variable on $(0, \infty) \times (0, \infty)$: given $T_u(N)$, $\tau$ is uniformly distributed on $[0, T_u(N)]$ and independent of $N$; $\zeta$ is independent of $N$ and $\tau$, and has law $Q_w$.
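A minimal Monte Carlo sketch of this estimator can be written under two simplifying assumptions that are not in the text: claims are $\mathrm{Exp}(\mu)$, so the dynamics under $P_w$ are explicit (arrival rate $\lambda\kappa(w)$ and $\mathrm{Exp}(\mu - w)$ claim sizes, with $\kappa(\theta) = E[e^{\theta Z_1}]$), and $\varphi_w$ is taken to be the likelihood ratio $\exp(-w(C(T_u) - cT_u))$ that appears in the proof of Theorem 7. All parameter values and helper names are illustrative.

```python
import numpy as np

# Hedged sketch of the estimator s_u(N), assuming Exp(mu) claims so that the
# conjugate dynamics under P_w are explicit, and taking phi_w to be the
# likelihood ratio exp(-w*(C(T_u) - c*T_u)) evaluated at the ruin time of a
# configuration. Parameter values are illustrative assumptions.

def ruin_time_and_lr(times, sizes, c, u, w):
    """T_u and phi_w for a configuration (sorted by time) that surely ruins."""
    y = np.cumsum(sizes) - c * times        # Y at the claim epochs
    k = int(np.argmax(y > u))               # first epoch with Y > u
    return times[k], np.exp(-w * y[k])

def s_estimator(lam, c, mu, u, n_paths, rng):
    w = mu - lam / c                        # Lundberg exponent for Exp(mu) claims
    kappa_w = mu / (mu - w)                 # kappa(w) = E[exp(w Z_1)]
    rate_w, claim_rate_w = lam * kappa_w, mu - w
    est = np.empty(n_paths)
    for i in range(n_paths):
        times, sizes, t, s = [], [], 0.0, 0.0
        while s - c * t <= u:               # simulate under P_w until ruin
            t += rng.exponential(1.0 / rate_w)
            z = rng.exponential(1.0 / claim_rate_w)
            times.append(t)
            sizes.append(z)
            s += z
        times, sizes = np.array(times), np.array(sizes)
        T, phi = ruin_time_and_lr(times, sizes, c, u, w)
        tau = rng.uniform(0.0, T)                    # extra point X = (tau, zeta)
        zeta = rng.exponential(1.0 / claim_rate_w)   # zeta ~ Q_w
        order = np.argsort(np.append(times, tau))
        _, phi2 = ruin_time_and_lr(np.append(times, tau)[order],
                                   np.append(sizes, zeta)[order], c, u, w)
        est[i] = (kappa_w - 1.0) * T * phi2 + T * (phi2 - phi)
    return float(est.mean())

rng = np.random.default_rng(2)
approx = s_estimator(lam=1.0, c=2.0, mu=1.0, u=3.0, n_paths=10000, rng=rng)
```

For $\mathrm{Exp}(1)$ claims one can differentiate the closed-form ruin probability to get $\psi_u'(\lambda) = e^{-wu}(c + \lambda u)/(c^2 \mu) \approx 0.279$ at these parameters, which the average should approach.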

The following theorem holds.

Theorem 7. Assume that (40) and (41) hold. Then $\hat{s}_u(N)$ is an asymptotically optimal estimator of $\psi_u'(\lambda)$ as $u \to \infty$ under the law $P_w$.

Proof. We only need to prove that

\liminf_{u \to \infty} \frac{\log E_w[(\hat{s}_u(N))^2]}{\log (\psi_u'(\lambda))^2} \ge 1. \qquad (50)

For any (locally finite) counting measure $\mu$ on $(0, \infty) \times (0, \infty)$ and $u > 0$, we find that $\varphi_w(\mu) \le e^{-wu}$; thus

|\hat{s}_u(N)| \le (\kappa(w) + 1)\, e^{-wu}\, T_u(N)

and, therefore,

E_w[(\hat{s}_u(N))^2] \le (\kappa(w) + 1)^2\, e^{-2wu}\, E_w[T_u(N)^2]. \qquad (51)

Denote by $\{\hat{X}_i\}$ the interarrival times of the Poisson process $\{X_i\}$. Under $P_w$, $\{\sum_{i=1}^{n} (Z_i - c\hat{X}_i)\}_{n \ge 1}$ is a random walk with positive drift; indeed,

E_w[Z_1 - c\hat{X}_1] = \frac{\lambda \kappa'(w) - c}{\lambda \kappa(w)} > 0.

Furthermore, $T_u(N)$ is the hitting time of this random walk. Therefore, by the results in [15], it follows that

E_w[T_u(N)^2] = O(u^2) \quad \text{as } u \to \infty. \qquad (52)

Finally, (50) follows by (49), (51), and (52).

While it is tempting to conjecture that a similar optimality result holds for risk processes with delay in claim settlement, we do not have a proof of this claim.

Remark 2. Note that, under the assumptions of Theorem 7, Asmussen and Rubinstein [3] (see also [2]) proved that

\hat{a}_u(N) = \left(\frac{N(T_u(N))}{\lambda} - T_u(N)\right) \exp(-w(C(T_u(N)) - cT_u(N))),

where $C(t) = \sum_{n \ge 1} Z_n \mathbf{1}_{(0,t]}(X_n)$ denotes the aggregate claims, is asymptotically optimal for $\psi_u'(\lambda)$ under $P_w$. The estimator $\hat{s}_u(N)$ is an alternative to $\hat{a}_u(N)$.
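For comparison with the sketch of $\hat{s}_u(N)$ above, the Asmussen–Rubinstein estimator can be sketched under the same illustrative $\mathrm{Exp}(\mu)$-claims assumption, for which $w = \mu - \lambda/c$ and the conjugate dynamics under $P_w$ are explicit; parameter values are again illustrative.

```python
import numpy as np

# Hedged sketch of the Asmussen-Rubinstein estimator
# a_u(N) = (N(T_u)/lam - T_u) * exp(-w*(C(T_u) - c*T_u)),
# simulated under the conjugate measure P_w, where ruin is certain.
# Exp(mu) claims and all parameter values are illustrative assumptions.

def ar_estimator(lam, c, mu, u, n_paths, rng):
    w = mu - lam / c                 # unique positive zero of Lambda(.)
    rate_w = lam * mu / (mu - w)     # arrival rate under P_w: lam * kappa(w)
    claim_rate_w = mu - w            # claim sizes are Exp(mu - w) under P_w
    est = np.empty(n_paths)
    for i in range(n_paths):
        t, s, n = 0.0, 0.0, 0
        while s - c * t <= u:        # run until ruin (a.s. finite under P_w)
            t += rng.exponential(1.0 / rate_w)
            s += rng.exponential(1.0 / claim_rate_w)
            n += 1
        est[i] = (n / lam - t) * np.exp(-w * (s - c * t))
    return float(est.mean())

rng = np.random.default_rng(1)
approx = ar_estimator(lam=1.0, c=2.0, mu=1.0, u=3.0, n_paths=10000, rng=rng)
```

At these parameters the exact value for $\mathrm{Exp}(1)$ claims is $\psi_u'(\lambda) = e^{-wu}(c + \lambda u)/(c^2 \mu) \approx 0.279$, so `approx` should land near that value.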




Acknowledgements

The authors thank the Editor, two anonymous referees, and Nicolas Privault for a careful reading of the paper and many valuable remarks.

References

[1] Albeverio, S., Kondratiev, Y. G. and Röckner, M. (1996). Differential geometry of Poisson spaces. C. R. Acad. Sci. Paris Sér. I 323, 1129–1134.
[2] Asmussen, S. (2000). Ruin Probabilities. World Scientific, Singapore.
[3] Asmussen, S. and Rubinstein, R. Y. (1999). Sensitivity analysis of insurance risk models. Manag. Sci. 45, 1125–1141.
[4] Baccelli, F. and Brémaud, P. (1993). Virtual customers in sensitivity and light traffic analysis via Campbell's formula for point processes. Adv. Appl. Prob. 25, 221–234.
[5] Baccelli, F., Hasenfuss, S. and Schmidt, V. (1999). Differentiability of functionals of Poisson processes via coupling with applications to queueing theory. Stoch. Process. Appl. 81, 299–321.
[6] Błaszczyszyn, B. (1995). Factorial moment expansion for stochastic systems. Stoch. Process. Appl. 56, 321–335.
[7] Brémaud, P. (2000). An insensitivity property of Lundberg's estimate for delayed claims. J. Appl. Prob. 37, 914–917.
[8] Brémaud, P. and Vázquez-Abad, F. J. (1992). On the pathwise computation of derivatives with respect to the rate of a point process: the phantom RPA method. Queueing Systems Theory Appl. 10, 249–270.
[9] Bucklew, J. A. (2004). Introduction to Rare Event Simulation. Springer, New York.
[10] Daley, D. J. and Vere-Jones, D. (2003). An Introduction to the Theory of Point Processes. Springer, New York.
[11] Decreusefond, L. (1998). Perturbation analysis and Malliavin calculus. Ann. Appl. Prob. 8, 496–523.
[12] Dembo, A. and Zeitouni, O. (1998). Large Deviations Techniques and Applications. Springer, New York.
[13] Foss, S. and Zuyev, S. (1996). On a Voronoi aggregative process related to a bivariate Poisson process. Adv. Appl. Prob. 28, 965–981.
[14] Glasserman, P. (1990). Gradient Estimation via Perturbation Analysis. Kluwer, Dordrecht.
[15] Gut, A. (1974). On the moments and limit distributions of some first passage times. Ann. Prob. 2, 277–308.
[16] Ho, Y. C. and Cao, X. R. (1983). Perturbation analysis and optimization of queueing networks. J. Optimization Theory Appl. 40, 559–582.
[17] Klüppelberg, C. and Mikosch, T. (1995). Delay in claim settlement and ruin probability approximations. Scand. Actuarial J. 2, 154–168.
[18] Klüppelberg, C. and Mikosch, T. (1995). Explosive Poisson shot noise processes with applications to risk reserves. Bernoulli 1, 125–147.
[19] L'Ecuyer, P. (1990). A unified version of the IPA, SF, and LR gradient estimation techniques. Manag. Sci. 36, 1364–1383.
[20] Lee, S. (1997). The central limit theorem for Euclidean minimal spanning trees. Ann. Appl. Prob. 7, 996–1020.
[21] Macci, C., Stabile, G. and Torrisi, G. L. (2005). Lundberg parameters for non-standard risk processes. Scand. Actuarial J. 6, 417–432.
[22] Matheron, G. (1975). Random Sets and Integral Geometry. John Wiley, New York.
[23] Meester, R. and Roy, R. (1996). Continuum Percolation. Cambridge University Press.
[24] Molchanov, I. and Zuyev, S. (2000). Variational analysis of functionals of Poisson processes. Math. Operat. Res. 25, 485–508.
[25] Møller, J. and Waagepetersen, R. P. (2003). Statistical Inference and Simulation for Spatial Point Processes. Chapman & Hall/CRC, Boca Raton, FL.
[26] Penrose, M. (2003). Random Geometric Graphs. Oxford University Press.
[27] Penrose, M. and Yukich, J. (2001). Limit theory for random sequential packing and deposition. Ann. Appl. Prob. 12, 272–301.
[28] Penrose, M. and Yukich, J. (2003). Weak laws of large numbers in geometric probability. Ann. Appl. Prob. 13, 277–303.
[29] Reiman, M. I. and Simon, B. (1989). Open queueing systems in light traffic. Math. Operat. Res. 14, 26–59.
[30] Reiman, M. I. and Weiss, A. (1989). Sensitivity analysis for simulations via likelihood ratios. Operat. Res. 37, 830–844.
[31] Stoyan, D., Kendall, W. S. and Mecke, J. (1995). Stochastic Geometry and Its Applications. John Wiley, Chichester.
[32] Suri, R. and Zazanis, M. A. (1988). Perturbation analysis gives strongly consistent sensitivity estimates for the M/G/1 queue. Manag. Sci. 34, 39–64.




[33] Torrisi, G. L. (2004). Simulating the ruin probability of risk processes with delay in claim settlement. Stoch. Process. Appl. 112, 225–244.
[34] Zazanis, M. A. (1992). Analyticity of Poisson-driven stochastic systems. Adv. Appl. Prob. 24, 532–541.
[35] Zuyev, S. (1993). Russo's formula for Poisson point fields and its applications. Discrete Math. Appl. 3, 63–73.
[36] Zuyev, S. (1999). Stopping-sets: gamma-type results and hitting properties. Adv. Appl. Prob. 31, 355–366.
