
Dynamic Mechanism Design 1 - Learning

Alex Gershkov

August 2018


Modern Revenue Management

Starts with the US Airline Deregulation Act of 1978.

Today a mainstream business practice (airlines, trains, hotels, car rentals, holiday resorts, advertising, intelligent metering devices, etc.)

Considerable gap between practitioners and academics in the field.

Major academic textbook: The Theory and Practice of Revenue Management by K. T. Talluri and G. J. van Ryzin


Towards a Modern Theory of RM

Necessary blend of

1. The elegant dynamic models from the OR, MS, CS, and Econ (search) literatures, with historical focus on "grand, centralized optimization" and/or "ad hoc", intuitive mechanisms.

2. The rich, classical mechanism design literature, with historical focus on information/incentives in static settings.

Blend fruitful for numerous applications.


Reading

Bergemann, Valimaki (2010) "The Dynamic Pivot Mechanism," Econometrica 78, 771-789.

Athey, Segal (2013) "An Efficient Dynamic Mechanism," Econometrica 81 (6), 2463-2485.

Gershkov, Moldovanu (2009) "Learning about the Future and Dynamic Efficiency," American Economic Review 99 (4), 1576-1587.

Gershkov, Moldovanu (2012) "Optimal Search, Learning and Implementation," Journal of Economic Theory 147, 881-909.


Illustration 1

one object

two agents

agents arrive sequentially, one per period

each agent can only be served upon arrival

after an item is assigned, it cannot be reallocated

valuations xi are private

valuations are distributed independently and uniformly on [0, 2]


Illustration 1

The dynamically efficient allocation is:

the first agent gets the object if x1 ≥ 1; the second agent gets the object if x1 < 1

(The cutoff is the option value of waiting: the expected value of the second agent, E[x2] = 1.)

Implementation

The payment scheme is:

for the first player: P1(x1) = 1 if x1 ∈ [1, 2], and P1(x1) = 0 if x1 ∉ [1, 2]

for the second player: P2(x2) = 0
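To see the cutoff numerically, here is a minimal Monte Carlo sketch (not from the slides; expected_welfare is a hypothetical helper) estimating the expected welfare of serving the first agent iff x1 ≥ cutoff; the maximum is at the option value E[x2] = 1:

```python
import random

def expected_welfare(cutoff, n_draws=200_000, seed=0):
    """Estimate expected welfare when the first agent is served
    iff x1 >= cutoff and the second agent is served otherwise.
    Types are i.i.d. uniform on [0, 2]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        x1, x2 = rng.uniform(0, 2), rng.uniform(0, 2)
        total += x1 if x1 >= cutoff else x2
    return total / n_draws

# Expected welfare is 1 + c/2 - c^2/4, maximized at c = E[x2] = 1.
for c in [0.5, 0.9, 1.0, 1.1, 1.5]:
    print(c, round(expected_welfare(c), 3))
```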


Illustration 2

one object

two agents

agents arrive sequentially, one per period

each agent can only be served upon arrival

the designer does not know the distribution of values

with probability 0.5 the distribution of both agents' types is uniform on [0, 1]

with probability 0.5 the distribution is uniform on [1, 2]


Illustration 2

Value of keeping vs. value of allocating

[Figure: the value of keeping the object vs. the value of allocating it, as functions of the first agent's type x ∈ [0, 2]]


Illustration 2

The dynamically efficient allocation is:

the first agent gets the object if x1 ∈ [0.5, 1] ∪ [1.5, 2]; the second agent gets the object if x1 ∉ [0.5, 1] ∪ [1.5, 2]

(A report x1 ≤ 1 reveals the uniform [0, 1] distribution, so the option value of waiting is E[x2] = 0.5; a report x1 ≥ 1 reveals uniform [1, 2], so the option value is 1.5.)

Implementation

The payment scheme for the first agent (P, p) should satisfy:

x1 + P ≥ p for any x1 ∈ [0.5, 1]
x1 + P ≤ p for any x1 ∈ [1, 1.5]

But this is IMPOSSIBLE! The first condition at x1 = 1 forces p ≤ 1 + P, while the second at x1 = 1.5 forces p ≥ 1.5 + P.
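The same logic in code (a sketch with hypothetical helper names; the tie at x1 = 1 has probability zero and its resolution is immaterial):

```python
def option_value(x1):
    """Posterior expectation E[x2 | x1]: a report x1 <= 1 reveals the
    uniform [0, 1] distribution (mean 0.5); x1 > 1 reveals uniform [1, 2]
    (mean 1.5)."""
    return 0.5 if x1 <= 1 else 1.5

def first_agent_gets_object(x1):
    # Efficient rule: serve the first agent iff x1 beats the option value.
    return x1 >= option_value(x1)

# The acceptance region is [0.5, 1] ∪ [1.5, 2] -- not an upper interval,
# which is exactly why no payment pair (P, p) can implement it.
for x1 in [0.25, 0.75, 1.25, 1.75]:
    print(x1, first_agent_gets_object(x1))
```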


Model

m items; each item i is characterized by a "quality" qi, with 0 ≤ q1 ≤ q2 ≤ · · · ≤ qm

n agents arrive sequentially, one agent per period

each agent can only be served upon arrival

after an item is assigned, it cannot be reallocated

past assignments and actions are observed by everyone

each agent is characterized by a "type" xj; if an item with quality qi ≥ 0 is assigned to an agent with type xj, this agent enjoys a utility of qi xj

items' qualities qi are known

the agents' types are i.i.d. random variables Xi on [0, +∞) with common c.d.f. F


B 1 - Complete Information & Known Distribution

Theorem (DLR, 1972)

Consider the arrival of an agent with type x in period k ≥ 1. There exist k + 1 constants 0 = a0,k ≤ a1,k ≤ · · · ≤ ak,k = ∞ such that

the dynamically efficient policy assigns the item with the i-th smallest quality if x ∈ (ai−1,k, ai,k]

the constants are given recursively by

ai,k+1 = ∫_{ai−1,k}^{ai,k} x dF(x) + ai−1,k F(ai−1,k) + ai,k [1 − F(ai,k)]

where we set +∞ · 0 = −∞ · 0 = 0.
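The recursion is straightforward to iterate numerically. A sketch (helper names illustrative, not from DLR) for the uniform distribution on [0, 2] from Illustration 1, with the convention ∞ · 0 = 0 handled explicitly:

```python
import math

def dlr_cutoffs(F, E_trunc, periods):
    """Iterate the DLR recursion. F is the c.d.f.; E_trunc(lo, hi) is
    the integral of x dF(x) over (lo, hi]. Returns a[(i, k)] with
    a[(0, k)] = 0 and a[(k, k)] = inf for every k."""
    a = {(0, 1): 0.0, (1, 1): math.inf}
    for k in range(1, periods):
        a[(0, k + 1)], a[(k + 1, k + 1)] = 0.0, math.inf
        for i in range(1, k + 1):
            lo, hi = a[(i - 1, k)], a[(i, k)]
            tail = 0.0 if math.isinf(hi) else hi * (1 - F(hi))  # inf * 0 := 0
            a[(i, k + 1)] = E_trunc(lo, hi) + lo * F(lo) + tail
    return a

# Uniform on [0, 2]: F(x) = x/2, so the truncated mean integral is x^2/4.
F = lambda x: min(max(x / 2, 0.0), 1.0)
E_trunc = lambda lo, hi: (min(hi, 2.0) ** 2 - min(lo, 2.0) ** 2) / 4
a = dlr_cutoffs(F, E_trunc, periods=3)
print(a[(1, 2)])  # -> 1.0, the Illustration 1 cutoff E[X] = 1
```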


B 2 - Incomplete Information & Known Distribution

Social Choice Functions

The history at period k, Hk, consists of
◦ the ordered set of all signals reported by the agents
◦ the ordered set of objects allocated to those agents
at periods n, · · ·, k + 1. Denote by ℋk the set of all feasible histories at k.

An allocation policy is deterministic if, at any k and for any possible type of agent arriving at k, it uses an allocation rule which is non-random.

Direct mechanism

φk : ℋk × [0, ∞) −→ Π0

Pk : ℋk × [0, ∞) −→ R


Implementable SCF

Theorem

A policy φk is implementable if and only if it partitions the type space of the agents into intervals. That is,

φk(Hk, x) = q(j) if x ∈ [yj−1(Hk), yj(Hk))

where 0 = y0(Hk) ≤ y1(Hk) ≤ y2(Hk) ≤ · · · ≤ yk(Hk) = ∞.

The payment scheme is

Pk(Hk, x) = ∑_{i=2}^{j} (q(i) − q(i−1)) yi−1(Hk)

if x ∈ [yj−1(Hk), yj(Hk)).

Corollary

The first-best policy is implementable also under incomplete information.
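The payment rule is simple to compute from the cutoff vector. A sketch (hypothetical helper; qualities are sorted so that qualities[j-1] = q(j)):

```python
def payment(qualities, cutoffs, x):
    """Payment of an agent with type x when this period's cutoffs are
    y0 = 0 <= y1 <= ... <= yk = inf (list `cutoffs`) and q(1) <= ... <= q(k)
    are the available qualities: a type in [y_{j-1}, y_j) receives q(j)
    and pays sum_{i=2..j} (q(i) - q(i-1)) * y_{i-1}."""
    j = next(j for j in range(1, len(cutoffs))
             if cutoffs[j - 1] <= x < cutoffs[j])
    return sum((qualities[i - 1] - qualities[i - 2]) * cutoffs[i - 1]
               for i in range(2, j + 1))

# Two objects with qualities 1 and 3 and cutoffs (0, 2, inf): types below 2
# get q(1) = 1 and pay 0; types above 2 get q(2) = 3 and pay (3 - 1) * 2 = 4.
print(payment([1, 3], [0.0, 2.0, float("inf")], x=2.5))  # -> 4.0
```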


B 3 - Complete Information & Learning

agents know their type

types are observable by the designer

F is unknown

beliefs about F:

◦ originally given by Φ = Φn
◦ updated to Φm−1(xn, · · ·, xm) after observing the agents' types xn, · · ·, xm

Denote by χk the part of the history Hk that includes only the reported signals.


B 3 - Complete Information & Learning

Theorem (Albright, 1977)

Consider the arrival of an agent with type x in period k ≥ 1.

There exist k + 1 functions 0 = a0,k(χk, x) ≤ a1,k(χk, x) ≤ a2,k(χk, x) ≤ · · · ≤ ak,k(χk, x) = ∞ such that the dynamically efficient policy assigns the item with the i-th smallest quality if x ∈ (ai−1,k(χk, x), ai,k(χk, x)].

These functions satisfy

ai,k+1(χk+1, xk+1) = ∫_{Ai,k} xk dF̃k(xk | χk+1, xk+1) + ∫_{A̲i,k} ai−1,k(χk, xk) dF̃k(xk | χk+1, xk+1) + ∫_{Āi,k} ai,k(χk, xk) dF̃k(xk | χk+1, xk+1)

where A̲i,k = {xk : xk ≤ ai−1,k(χk, xk)}, Ai,k = {xk : ai−1,k(χk, xk) < xk ≤ ai,k(χk, xk)}, and Āi,k = {xk : xk > ai,k(χk, xk)}.


B 4 - Incomplete Information & Learning

Theorem

A policy φk is implementable if and only if it partitions the type space of the agents into intervals.

That is,

φk(Hk, x) = q(j) if x ∈ [yj−1(Hk), yj(Hk))

where 0 = y0(Hk) ≤ y1(Hk) ≤ y2(Hk) ≤ · · · ≤ yk(Hk) = ∞.


B 4 - Incomplete Information & Learning

Question: When do cutoffs exist which

1. are independent of the signal of the current agent, and
2. replicate the efficient allocation?


B 4 - Incomplete Information & Learning

Example

The distribution of types is uniform on [0, W]

beliefs about W are given by a Pareto distribution P(α, R)

2 periods and two objects, q2 > q1

efficient allocation: the agent that arrives in the first period gets q2 iff x ≥ E[X1 | X2 = x]. That is,

a1,2(χk, x) = E[X1 | X2 = x] = ((α + 1)/(2α)) max{R, x}
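The closed form follows from conjugacy: a uniform [0, W] observation x updates the Pareto(α, R) prior on W to Pareto(α + 1, max{R, x}), whose mean is ((α + 1)/α) max{R, x}; halving gives E[X1 | X2 = x]. A quick Monte Carlo check (illustrative helper name):

```python
import random

def posterior_mean_x1(x, alpha=3.0, R=1.5, n=200_000, seed=1):
    """Estimate E[X1 | X2 = x]: draw W from the posterior
    Pareto(alpha + 1, max(R, x)), then X1 | W ~ uniform on [0, W]."""
    rng = random.Random(seed)
    lo = max(R, x)
    total = 0.0
    for _ in range(n):
        # Inverse-c.d.f. Pareto draw; 1 - random() lies in (0, 1].
        w = lo * (1.0 - rng.random()) ** (-1 / (alpha + 1))
        total += rng.uniform(0, w)  # X1 | W ~ U[0, W]
    return total / n

x = 2.0
print(posterior_mean_x1(x))              # ≈ 1.333
print((3 + 1) / (2 * 3) * max(1.5, x))   # closed form ((α+1)/(2α)) max{R, x}
```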


B 4 - Incomplete Information & Learning

Example (α = 3, R = 1.5)

In the efficient allocation, the agent who arrives in the first period gets q2 iff x ≥ ((α + 1)/(2α)) R: for x ≤ R the condition is x ≥ ((α + 1)/(2α)) R, and for x > R the condition x ≥ ((α + 1)/(2α)) x holds automatically since α ≥ 1. With α = 3 and R = 1.5, the cutoff is 1.

[Figure: the cutoff E[X1 | X2 = x] = ((α + 1)/(2α)) max{R, x} plotted against x ∈ [0, 5]]


B 4 - Incomplete Information & Learning

The key element is understanding the properties of the set

Ai,k(χk) = {x : ai−1,k(χk, x) ≤ x < ai,k(χk, x)}.

Theorem (Necessary and sufficient condition)

The efficient allocation is implementable if and only if, for any period k, any object qi and any history χk, the set Ai,k(χk) is an interval.
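For a given history, the interval condition can be tested numerically on a grid (a sketch; is_interval is a hypothetical helper). Applied to Illustration 2, where the acceptance set is [0.5, 1] ∪ [1.5, 2], it correctly fails:

```python
def is_interval(cutoff_lo, cutoff_hi, grid):
    """Check whether {x : cutoff_lo(x) <= x < cutoff_hi(x)} is an
    interval on the grid: the indicator may switch at most twice."""
    flags = [cutoff_lo(x) <= x < cutoff_hi(x) for x in grid]
    switches = sum(1 for a, b in zip(flags, flags[1:]) if a != b)
    return switches <= 2

grid = [i / 100 for i in range(201)]           # grid on [0, 2]
lo = lambda x: 0.5 if x <= 1 else 1.5          # Illustration 2 option value
hi = lambda x: float("inf")
print(is_interval(lo, hi, grid))               # -> False: two disjoint pieces
```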


B 4 - Incomplete Information & Learning

Theorem (Sufficient condition)

Assume that for any k, χk, and i ∈ {0, .., k}, the cutoff ai,k(χk, xk) is a Lipschitz function of xk with constant 1. Then the efficient dynamic policy is implementable. (If each cutoff rises at most one-for-one with the report, xk − ai,k(χk, xk) is non-decreasing, so each set Ai,k(χk) is an interval.)


B 4 - Incomplete Information & Learning

Theorem

Assume that the conditional distribution function F̃k(x | xn, · · ·, xk+1) and density f̃k(x | xn, · · ·, xk+1) are continuously differentiable with respect to xk+i for all x and for all n − k ≥ i ≥ 1. If for all x, n − k ≥ i ≥ 1, χk and xk+i

0 ≥ (∂/∂xk+i) F̃k(x | χk) ≥ −(1/(n − k)) (∂/∂x) F̃k(x | χk)

then the efficient dynamic policy is implementable.


Second Best Solution - Random Allocations

Allocation policy:

φk : ℋk × [0, ∞) −→ ∆Π0

Qk(Hk, x): the expected quality allocated to an agent arriving at period k after history Hk and reporting x.

Theorem

An allocation policy is implementable if and only if, for any k and any Hk, the expected quality Qk(Hk, x) is non-decreasing in x.
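Again this is directly checkable on a grid for any fixed history (hypothetical helper):

```python
def monotone_expected_quality(Q, grid):
    """Theorem's test at one history: x -> Q(x), the expected quality
    handed to the arriving agent, must be non-decreasing."""
    vals = [Q(x) for x in grid]
    return all(a <= b for a, b in zip(vals, vals[1:]))

grid = [i / 100 for i in range(201)]
print(monotone_expected_quality(lambda x: min(x, 1.0), grid))     # True
print(monotone_expected_quality(lambda x: (x - 1.0) ** 2, grid))  # False
```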


Second Best Solution - Random Allocations

Definitions

For any n-tuple γ = (γ1, γ2, .., γn) let γ(j) denote the j-th largest coordinate (so that γ(n) ≤ γ(n−1) ≤ ... ≤ γ(1)). Let α = (α1, α2, .., αn) and β = (β1, β2, .., βn) be two n-tuples. We say that α is majorized by β, and we write α ≺ β, if the following system of n − 1 inequalities and one equality is satisfied:

α(1) ≤ β(1)
α(1) + α(2) ≤ β(1) + β(2)
...
α(1) + α(2) + .. + α(n−1) ≤ β(1) + β(2) + .. + β(n−1)
α(1) + α(2) + .. + α(n) = β(1) + β(2) + .. + β(n)
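A direct translation of the definition (sketch; names are illustrative):

```python
def majorized_by(alpha, beta, tol=1e-9):
    """Check α ≺ β: partial sums of the decreasingly sorted coordinates
    of α never exceed those of β, and the totals are equal."""
    a, b = sorted(alpha, reverse=True), sorted(beta, reverse=True)
    sa = sb = 0.0
    for ai, bi in zip(a, b):
        sa, sb = sa + ai, sb + bi
        if sa > sb + tol:
            return False
    return abs(sa - sb) <= tol

print(majorized_by([1, 1, 1], [2, 1, 0]))  # True: the more spread-out tuple majorizes
print(majorized_by([2, 1, 0], [1, 1, 1]))  # False
```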


Second Best Solution - Random Allocations

Theorem

The incentive compatible, optimal mechanism (second best) is deterministic. That is, for every history at period k, Hk, and for every type x of the agent that arrives at that period, there exists a quality q that is allocated to that agent with probability 1.

At each period, the optimal mechanism partitions the type set of the arriving agent into a collection of disjoint intervals such that all types in a given interval obtain the same quality with probability 1, and such that higher types obtain a higher quality.


Search for the Lowest Price (Rothschild 1973):

Consumer obtains a sequence of prices, and must decide when to stop the (costly) search for a lower price.

Beliefs about the distribution of prices are updated (in a Bayesian way) after each observation.

Without learning, the optimal stopping rule is characterized by a reservation price R: stop the search at any price less than or equal to R, and continue the search at any price higher than R (see the sketch after this list).

If all customers follow such a rule =⇒ a well-behaved demand function where sales are a non-increasing function of price.

When does the optimal stopping rule with learning have the reservation price property? I.e., when does a price R(s) exist for each state s such that prices above it are rejected, and prices below it are accepted?
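Without learning, R solves the indifference condition c = E[max{R − P, 0}] = ∫_0^R F(p) dp: the cost of one more draw equals its expected saving when holding price R. A bisection sketch for a known uniform price distribution (helper names hypothetical):

```python
from math import sqrt

def reservation_price(F, cost, hi=1.0, tol=1e-6):
    """Solve cost = ∫_0^R F(p) dp for R by bisection, where F is the
    known price c.d.f.; the integral is the expected saving from one
    more draw when the best price in hand is R."""
    def expected_saving(R, n=10_000):
        h = R / n
        return h * sum(F((i + 0.5) * h) for i in range(n))  # midpoint rule
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if expected_saving(mid) < cost:
            lo = mid
        else:
            hi = mid
    return lo

# Prices uniform on [0, 1], cost c = 0.02: closed form R = sqrt(2c) = 0.2.
print(reservation_price(lambda p: min(max(p, 0.0), 1.0), cost=0.02))
print(sqrt(2 * 0.02))
```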


Answers to Rothschild’s Question

Stochastic dominance conditions:

Rosenfield and Shapiro (1981): for all x, k, χk and all n − k ≥ i ≥ 1,

∫_x^∞ (∂/∂xk+i) F̃k(y | χk) dy ≥ −(1/(n − k)) (1 − F̃k(x | χk))

Seierstad (1992): for all x, k and χk,

∑_{i=1}^{n−k} (∂/∂xk+i) F̃k(x | χk) ≥ −f̃k(x | χk)

Us (multiple objects!): for all x, χk, and all n − k ≥ i ≥ 1,

(∂/∂xk+i) F̃k(x | χk) ≥ −(1/(n − k)) f̃k(x | χk)
