
Dynamic Optimization

Chapter 9. Uncertainty

This chapter concerns choice under uncertainty. It is an important topic in economics,

and is of great practical interest. In real life, almost every decision must be made under uncertainty. For example, when you decided to take this course, you only had some estimate of its usefulness. In the end, it may turn out to be more or less useful than you expected.

In this chapter, we will sketch a systematic way of making such decisions. In terms of

mathematics, there will be nothing new. We will only use the tactics we learned in the

previous chapters. However, you will see more economic concepts and intuitions.

Let us first familiarize ourselves with problems of choice under uncertainty. As illustrated in the

course-choosing example, uncertainty means that you do not anticipate a sure outcome.

To make our discussion more concrete, we will need to introduce some concepts. First,

consider the following simple example:

Example 9.1. Suppose that you have access to the following lottery: the lottery pays

$100 with probability 1/4, and pays nothing with the remaining probability. The question

is, do you wish to pay $25 for such a lottery?

This is a problem of choice under uncertainty, because the outcome is uncertain: in the

end, you will either get $100 or $0. There are two important elements in the problem:

(i) The outcomes. In the example, the outcomes refer to the state paying $100 and the state paying $0.

(ii) The probabilities associated with the outcomes. In the example, 1/4 is the probability associated with the state paying $100, and 3/4 is the probability associated with the state paying $0.

Note that the probabilities are objective here, but they could be subjective probabilities in certain applications. In either case, the probabilities in a well-defined problem should be non-negative and add up to 1. In this example, the consumer's utility from the lottery could be written as follows: U($100, $0; 1/4, 3/4).


More generally, denote the possible outcomes by Y1, Y2, ..., Ym, and the probabilities associated with the outcomes by p1, p2, ..., pm. The utility could be written as

U(Y1, Y2, ..., Ym; p1, p2, ..., pm).

Next, we will introduce a widely-used method to express the utility in a way that more

analysis could be performed.

9.A. Expected Utility

Since probabilities are involved, it is somewhat natural to make use of mathematical

expectation, or probability weighted average. For instance, in Example 9.1, we could

express the utility as follows:

U($100, $0; 1/4, 3/4) = 1/4U($100) + 3/4U($0).

This is called the von Neumann-Morgenstern utility function, and is of expected utility

form. For a general utility function, the expected utility form is expressed as follows:

U(Y1, Y2, ..., Ym; p1, p2, ..., pm) = p1U(Y1) + p2U(Y2) + ... + pmU(Ym) = ∑_{i=1}^{m} p_i U(Y_i). (9.1)

This formulation is very useful in its simplicity and its ability to capture economically

interesting aspects of behavior. We will discuss some implications of this representation.^1

Risk-aversion. Now consider the Yi's as money amounts. Since more money is better, U is an increasing function. The definition of risk-aversion is intuitive. In Example 9.1, the

lottery gives in expectation $100×1/4 + $0×3/4 = $25. A risk-averse individual dislikes

risk, and thus prefers the sure outcome of $25 to the lottery that gives on average $25.

In general, for two distinct outcomes Y1 and Y2 with (any) positive probability p and

(1− p) respectively, a decision maker is risk-averse if

U(pY1 + (1− p)Y2) > pU(Y1) + (1− p)U(Y2).

That is, U is (strictly) concave.

^1 The applicability of expected utility is beyond the scope of this course, and thus will not be discussed.


More generally, we could include more than 2 states: A decision maker is risk-averse if

U(∑_{i=1}^{m} p_i Y_i) > ∑_{i=1}^{m} p_i U(Y_i). (9.2)

If U is twice differentiable, U ′′ < 0 corresponds to risk-aversion.
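To make the definition concrete, here is a minimal numerical sketch (not part of the original notes): it evaluates the lottery of Example 9.1 under an assumed strictly concave utility U(Y) = √Y and checks inequality (9.2).

```python
import math

# Lottery from Example 9.1: $100 with probability 1/4, $0 with probability 3/4.
outcomes = [100.0, 0.0]
probs = [0.25, 0.75]

def u(y):
    return math.sqrt(y)   # an assumed strictly concave (risk-averse) utility

expected_value = sum(p * y for p, y in zip(probs, outcomes))        # E[Y] = 25
expected_utility = sum(p * u(y) for p, y in zip(probs, outcomes))   # E[U(Y)] = 2.5
print(u(expected_value), expected_utility)                          # 5.0 > 2.5, as in (9.2)

# Certainty equivalent: the sure amount that gives the same utility as the lottery.
print(expected_utility ** 2)                                        # 6.25
```

Under this assumed utility, the certainty equivalent is $6.25, well below the $25 price, so such a decision maker would decline the lottery in Example 9.1.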

Insurance. Let’s now bring back the decision variable x, which affects some or all of the

outcomes and probabilities. Suppose Y1 < Y2, which means that the first state entails

some loss relative to the second. Y1 occurs with probability p and Y2 with probability

(1 − p). A risk-averse decision maker would want to purchase insurance. Consider an

insurance policy that requires an advance payment of x (paid independently of the state

realization), and gives X if state 1 is realized. Suppose that the insurance policy is

actuarially fair: pX = x.^2 The decision maker’s objective function is

max_{x≥0} pU(Y1 − x + X) + (1 − p)U(Y2 − x)

⇐⇒ max_{x≥0} pU(Y1 − x + x/p) + (1 − p)U(Y2 − x)

The first-order condition for x gives:

pU ′(Y1 − x+ x/p)(1/p− 1)− (1− p)U ′(Y2 − x) ≤ 0 and x ≥ 0,

with complementary slackness. (9.3)

When x = 0, pU′(Y1)(1/p − 1) − (1 − p)U′(Y2) = (1 − p)[U′(Y1) − U′(Y2)] > 0, where the inequality follows from U′′ < 0 and Y1 < Y2. This contradicts (9.3). Therefore, we must have x > 0 at the optimum. By (9.3),

pU ′(Y1 − x+ x/p)(1/p− 1)− (1− p)U ′(Y2 − x) = 0

=⇒ U ′(Y1 − x+ x/p) = U ′(Y2 − x) (9.4)

When U′′ < 0, the objective function is concave in x, so the first-order condition is also sufficient. Since U′ is strictly decreasing, (9.4) implies Y1 − x + x/p = Y2 − x. This is

the full-insurance result: a risk-averse decision maker would buy the actuarially fair

insurance to the point where the outcomes in different states are equal.

^2 This is an outcome of a perfectly competitive insurance industry. The insurance company can pool a large number of independent risks, and is thus considered to be risk-free. The zero-profit condition in such an industry is exactly pX = x.
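The full-insurance result can also be checked numerically. The sketch below is only an illustration under assumed numbers (p = 1/4, Y1 = 50, Y2 = 100) and an assumed logarithmic utility; it searches over the premium x with actuarially fair coverage X = x/p.

```python
import math

p, Y1, Y2 = 0.25, 50.0, 100.0        # assumed probability of the bad state and state incomes

def u(c):
    return math.log(c)               # an assumed strictly concave utility

def phi(x):
    # Expected utility when paying premium x for actuarially fair coverage X = x / p.
    return p * u(Y1 - x + x / p) + (1 - p) * u(Y2 - x)

grid = [i * 0.01 for i in range(0, 5001)]       # premiums x in [0, 50]
x_star = max(grid, key=phi)

print(x_star)                                   # close to p * (Y2 - Y1) = 12.5
print(Y1 - x_star + x_star / p, Y2 - x_star)    # both states end up at the same wealth (87.5)
```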


Care. Consider again the previous problem faced by the decision maker, but leave aside

insurance for the moment. Now suppose that the probability of the bad outcome (state

1) can be reduced by incurring an expense z in advance. Specifically, you could think

of it as exercising more care by yourself to reduce the probability of being ill. In terms

of modelling, we make the probability p a function of z, and the function is decreasing.

Since p is bounded below, it will generally be convex. The objective function is

max_{z≥0} φ(z) ≡ max_{z≥0} p(z)U(Y1 − z) + (1 − p(z))U(Y2 − z)

Then, the derivative of φ(z) gives

φ′(z) = −p′(z)[U(Y2 − z) − U(Y1 − z)] − {p(z)U′(Y1 − z) + (1 − p(z))U′(Y2 − z)},

where the first term is the marginal benefit of care (the reduction in probability, −p′(z), times the utility difference across states) and the term in braces is the marginal cost.

The optimal solution is defined by the first-order condition:^3

φ′(z∗) = 0.

^3 We impose the assumption p′(0) → −∞ (so that φ′(0) → +∞) to ensure an interior solution. Similar assumptions are usually made in economic applications.
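A minimal numerical sketch of this trade-off (the functional forms and numbers below are assumptions chosen only for illustration): with p(z) decreasing and convex, the optimal care balances the marginal benefit against the marginal cost.

```python
import math

Y1, Y2 = 50.0, 100.0                 # assumed state incomes; state 1 is the bad state

def p(z):
    return 0.25 * math.exp(-z)       # assumed decreasing, convex probability of the bad state

def u(c):
    return math.log(c)               # assumed strictly concave utility

def phi(z):
    return p(z) * u(Y1 - z) + (1 - p(z)) * u(Y2 - z)

grid = [i * 0.001 for i in range(0, 10001)]    # z in [0, 10]
z_star = max(grid, key=phi)
print(z_star, phi(z_star) > phi(0.0))          # interior z* > 0: some care is worthwhile
```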

Moral Hazard. Now suppose both insurance and care variables are available. The

interaction between the insurance company and the decision maker could be formulated

as the game below:

1. The insurance company sells actuarially fair insurance at the constant rate p(z̄) per $1 of coverage. That is, if the individual pays a premium of x, the insurance company pays the individual X = x/p(z̄) when the bad outcome (state 1) occurs.

2. The decision maker chooses how much insurance x to purchase.

3. The decision maker chooses the care parameter z.

4. The outcome is realized, and the decision maker is paid by the insurance company if

the realized state is 1.

In equilibrium, the insurance company holds a correct belief about the level of care; that is, z̄ = z∗, where z∗ is the decision maker’s actual choice.


The objective function for the decision maker is

max_{x≥0, z≥0} φ(x, z) ≡ max_{x≥0, z≥0} p(z)U(Y1 − z − x + x/p(z∗)) + (1 − p(z))U(Y2 − z − x)

The partial derivative of φ(x, z) with respect to x gives

φx(x, z) = p(z)U ′(Y1 − z − x+ x/p(z∗))(1/p(z∗)− 1)− (1− p(z))U ′(Y2 − z − x)

Next, we show that the optimal x∗ > 0 must hold.

φx(0, z∗) = p(z∗)U′(Y1 − z∗)(1/p(z∗) − 1) − (1 − p(z∗))U′(Y2 − z∗)

= (1 − p(z∗))[U′(Y1 − z∗) − U′(Y2 − z∗)] > 0,

where the inequality follows from U′′ < 0 and Y1 < Y2.

Since φx(0, z∗) > 0, marginally increasing x increases φ(x, z∗); therefore, x∗ > 0 must hold. The first-order condition on x gives

φx(x∗, z∗) = 0

=⇒ U ′(Y1 − z∗ − x∗ + x∗/p(z∗)) = U ′(Y2 − z∗ − x∗)

=⇒ Y1 − z∗ − x∗ + x∗/p(z∗) = Y2 − z∗ − x∗ (9.5)

Therefore, the optimal choices x∗ and z∗ must satisfy (9.5) above. Let Y1 − z∗ − x∗ + x∗/p(z∗) = Y2 − z∗ − x∗ = Y0. The partial derivative of φ(x, z) with respect to z gives

φz(x, z) = −p′(z)[U(Y2 − z − x) − U(Y1 − z − x + x/p(z∗))]   (marginal benefit)
− [p(z)U′(Y1 − z − x + x/p(z∗)) + (1 − p(z))U′(Y2 − z − x)]   (marginal cost)

Evaluated at the optimal level (x∗, z∗), we have

φz(x∗, z∗) = −p′(z∗)[U(Y2 − z∗ − x∗) − U(Y1 − z∗ − x∗ + x∗/p(z∗))] − [p(z∗)U′(Y1 − z∗ − x∗ + x∗/p(z∗)) + (1 − p(z∗))U′(Y2 − z∗ − x∗)].

By (9.5), the first bracketed term is zero and the second equals U′(Y0), so φz(x∗, z∗) = −U′(Y0) < 0.

Since φz(x∗, z∗) < 0, marginally decreasing z increases φ(x∗, z); therefore, the optimal level of care occurs at the corner z∗ = 0. This is known as “moral hazard”: the availability of full insurance destroys the incentive to exercise costly care.
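The corner result can be illustrated numerically. The sketch below reuses the assumed numbers from the earlier sketches; once the agent holds full coverage priced at the believed rate p(z̄) = p(0), expected utility falls in z, so z = 0 is optimal.

```python
import math

Y1, Y2 = 50.0, 100.0
p_bar = 0.25                          # insurer's belief about care; in equilibrium z_bar = z* = 0

def p(z):
    return 0.25 * math.exp(-z)        # assumed probability of the bad state given care z

def u(c):
    return math.log(c)

x_star = p_bar * (Y2 - Y1)            # premium buying full coverage at the believed fair rate

def phi(z):
    state1 = Y1 - z - x_star + x_star / p_bar   # fully insured: equals Y2 - z - x_star
    state2 = Y2 - z - x_star
    return p(z) * u(state1) + (1 - p(z)) * u(state2)

for z in (0.0, 0.5, 1.0, 2.0):
    print(z, round(phi(z), 4))        # expected utility is highest at z = 0: moral hazard
```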


In the subsections that follow, we will study portfolio choice. Since we will be working with many states, for convenience we introduce a continuous representation. The index i is replaced by a continuous random variable r with support [\underline{r}, \overline{r}]. The expected

utility form in (9.1) is modified by replacing probabilities with densities, and sums with

integrals:

E[U(Y)] = ∫_{\underline{r}}^{\overline{r}} U(Y(r))f(r)dr.

The interpretation of risk-aversion parallels (9.2): a decision maker is risk-averse if

U(E[Y]) > E[U(Y)] ⇐⇒ U(∫_{\underline{r}}^{\overline{r}} Y(r)f(r)dr) > ∫_{\underline{r}}^{\overline{r}} U(Y(r))f(r)dr.

9.B. One Safe and One Risky Asset

A risk-averse investor has initial wealth W0, and has the following two investment options:

(i) A risky asset: investing x gives x(1 + r), where r is a random variable with density f(r) and support [\underline{r}, \overline{r}]. Assume E[r] > 0 and \underline{r} < 0, so that the risky asset does not always generate a positive return, but on average the return is positive.

(ii) A safe asset: investing x gives x.

Therefore, investing x ∈ [0, W0] in the risky asset and the rest in the safe asset generates final wealth

W = x(1 + r) + (W0 − x) = W0 + xr.

The investor’s objective is to maximize the expected utility of final wealth:

max_{x∈[0,W0]} E[U(W)] ≡ max_{x∈[0,W0]} ∫_{\underline{r}}^{\overline{r}} U(W0 + xr)f(r)dr.

Let φ(x) = E[U(W)]. The derivative of φ(x) gives

φ′(x) = ∫_{\underline{r}}^{\overline{r}} rU′(W0 + xr)f(r)dr.

Note when x = 0:

φ′(0) = ∫_{\underline{r}}^{\overline{r}} rU′(W0)f(r)dr = U′(W0) ∫_{\underline{r}}^{\overline{r}} rf(r)dr = U′(W0)E[r] > 0.


So, x = 0 is not optimal. Therefore, the risk-averse investor will buy at least some of the

actuarially good investment.

Typically, the investor will hold some of each asset. The first-order condition is

φ′(x) = ∫_{\underline{r}}^{\overline{r}} rU′(W0 + xr)f(r)dr = 0. (9.6)

If there is an x < W0 satisfying this, then strict concavity of U guarantees that it is the

global maximum:

φ′′(x) = ∫_{\underline{r}}^{\overline{r}} r^2 U′′(W0 + xr)f(r)dr < 0, (9.7)

since U′′(W) < 0 for all W.

Next, assuming that there exists an interior maximum, we consider the comparative

statics of x with respect to W0. That is, whether the investor would invest more or less

in the risky asset when he becomes wealthier. Now, we recognize W0 as a parameter in φ, i.e., φ(x, W0). Then, the first-order condition for an interior solution is φx(x, W0) = 0. Total

differentiation gives

φxx(x,W0)dx+ φxw(x,W0)dW0 = 0 =⇒ dx/dW0 = −φxw(x,W0)/φxx(x,W0).

By the second-order sufficient condition, φxx(x, W0) < 0. Therefore, dx/dW0 has the same sign as

φxw(x, W0) = ∫_{\underline{r}}^{\overline{r}} rU′′(W0 + xr)f(r)dr.

In general, we cannot determine the sign of φxw(x, W0) without further assumptions.

To gain more insight, we introduce a measure of risk-aversion, called absolute risk-

aversion, and denoted by A(W ):

A(W ) = −U ′′(W )/U ′(W ). (9.8)
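For instance (a small illustrative sketch with assumed utility functions, not taken from the notes), A(W) is constant for an exponential utility and decreasing for a logarithmic utility:

```python
def A_log(w):
    # U(W) = log(W): U' = 1/W, U'' = -1/W**2, so A(W) = 1/W, decreasing in W.
    return 1.0 / w

def A_exp(w, a=0.01):
    # U(W) = -exp(-a*W): U' = a*exp(-a*W), U'' = -a**2*exp(-a*W), so A(W) = a, constant.
    return a

for w in (50.0, 100.0, 200.0):
    print(w, A_log(w), A_exp(w))
```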

Experimental and empirical evidence is consistent with A(W) being decreasing in W.^4 If

this is the case, then we would be able to show φxw(x,W0) > 0.

^4 Friend, I., & Blume, M. E. (1975). The Demand for Risky Assets. American Economic Review, 65(5), 900-922.


Here is the detailed reasoning:

(i) For r < 0 (and x > 0), W0 + xr < W0, so decreasing absolute risk-aversion gives

− U′′(W0 + xr)/U′(W0 + xr) > −U′′(W0)/U′(W0) = A(W0)

=⇒ rU ′′(W0 + xr)/U ′(W0 + xr) > −rA(W0)

=⇒ rU ′′(W0 + xr) > −rA(W0)U ′(W0 + xr).

(ii) For r > 0, W0 + xr > W0, so decreasing absolute risk-aversion gives

− U′′(W0 + xr)/U′(W0 + xr) < −U′′(W0)/U′(W0) = A(W0)

=⇒ − rU ′′(W0 + xr)/U ′(W0 + xr) < rA(W0)

=⇒ rU ′′(W0 + xr) > −rA(W0)U ′(W0 + xr).

Then, rU′′(W0 + xr) > −rA(W0)U′(W0 + xr) for all r ≠ 0.

Therefore,

φxw(x, W0) = ∫_{\underline{r}}^{\overline{r}} rU′′(W0 + xr)f(r)dr > ∫_{\underline{r}}^{\overline{r}} −rA(W0)U′(W0 + xr)f(r)dr = −A(W0) ∫_{\underline{r}}^{\overline{r}} rU′(W0 + xr)f(r)dr = 0,

where the last integral equals zero by the first-order condition (9.6).

Thus, φxw(x,W0) > 0, which implies dx/dW0 > 0. That is, the investor would invest

more in the risky asset when he becomes wealthier. Remember that we need absolute risk-aversion A(W) to be decreasing for this result to hold.
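The comparative statics can be illustrated numerically. The sketch below replaces the continuous density f(r) with an assumed two-point return distribution and uses U(W) = √W, which has decreasing absolute risk-aversion (A(W) = 1/(2W)); all numbers are assumptions for illustration only.

```python
import math

returns = [-0.5, 0.6]        # assumed returns: the worst return is negative, E[r] = 0.05 > 0
probs = [0.5, 0.5]

def u(w):
    return math.sqrt(w)      # assumed utility with decreasing absolute risk-aversion

def expected_utility(x, w0):
    return sum(p * u(w0 + x * r) for p, r in zip(probs, returns))

def optimal_risky_investment(w0, steps=10000):
    grid = [w0 * i / steps for i in range(steps + 1)]
    return max(grid, key=lambda x: expected_utility(x, w0))

for w0 in (100.0, 200.0):
    print(w0, optimal_risky_investment(w0))   # the optimal x rises with initial wealth
```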

9.C. Examples

Example 9.2: Managerial Incentives. A risk-neutral owner (she) has to hire a risk-

neutral manager (he) to run a project. If the project succeeds, it will produce value V . If

the project fails, it generates no return. The probability of success depends on the manager’s effort. Given that the manager exerts effort, the project will succeed with probability p. No effort reduces the probability to q < p. The effort cost is e. To make it worthwhile to


exert effort, suppose that exerting effort generates higher total surplus:

pV − e > qV =⇒ (p− q)V > e. (9.9)

Assume that the manager’s outside job pays him w.

What is the optimal compensation scheme when

(i) the owner can observe the manager’s effort?

(ii) the owner cannot observe the manager’s effort?

Solution.

Case I: Observable Effort.

In this case, the owner could compensate effort directly. Since inducing effort generates

higher total surplus, the owner would be willing to do so as long as it is not too expensive

to attract the manager. Let the payment to the manager be W, paid when the manager

exerts effort. The manager is willing to work for the owner and exert effort if

W − e ≥ w =⇒ W ≥ e+ w.

To get more profit, the owner would pay the least amount W = e + w. After paying W

to the manager, the owner gets pV − W = pV − e − w. The owner thus is willing to hire the manager if

w < pV − e. (9.10)

(9.10) is an assumption that we will maintain throughout the analysis, since otherwise,

the manager would not be hired.

Under the assumption, the owner could offer w+ e to the manager, and demand effort in

return. The owner would get pV − e−w and the manager gets w+ e− e = w, the same

as what he would get from the outside job.

Case II: Unobservable Effort.

In this case, compensating effort directly would not work. To see this, suppose that

compensation is still based on (now unobservable) effort. Then the manager could lie

about effort: the manager could promise to exert effort, but shirk instead. Because of

the unobservability of effort and the probabilistic nature of the outcome, the owner would not

catch such a lie.


Therefore, the best the owner can do is to base her payment scheme on what she can observe, i.e., the outcome. Suppose that the owner pays the manager x if

the project succeeds, and y if it fails. Two constraints need to be satisfied:

(i) Given such a payment scheme, the manager would exert effort if

px+ (1− p)y − e ≥ qx+ (1− q)y

=⇒ (p− q)(x− y) ≥ e. (IC)

This is called the incentive compatibility constraint.

(ii) The manager will agree to work if

px+ (1− p)y − e ≥ w

=⇒ y + p(x− y) ≥ w + e. (IR)

This is called the participation constraint, or individual rationality constraint.

Thus, the owner’s problem is to maximize her profit subject to constraints (IC) and (IR).

max_{x,y} pV − [px + (1 − p)y] ≡ max_{x,y} pV − y − p(x − y)

s.t. (IC) & (IR)

(IR) must be binding since otherwise, we could decrease x and y by the same sufficiently

small amount so that (IR) is still satisfied. After the change, (IC) is unaffected, while

the objective function gets larger. So, the payment scheme with non-binding (IR) must

not be optimal, and thus (IR) must be binding.

Solving the problem using Lagrange’s method, we get

y∗ ≤ w − eq/(p − q)   and   x∗ = (w + e − (1 − p)y∗)/p   ( ≥ w + e(1 − q)/(p − q) ).

Therefore, one interpretation is that the manager’s compensation consists of the basic

salary w, plus a reward for success and minus a penalty for failure. The owner’s expected

profit is

π = pV − y∗ − p(x∗ − y∗) = pV − w − e,

the same as when she could observe the manager’s effort directly.


However, one potential problem here is that y∗max = w − eq/(p− q) is not guaranteed to

be positive. y∗ < 0 means that the payment scheme would involve a fine under failure,

which is not always feasible.

Suppose y∗max = w − eq/(p − q) < 0 and fines are not allowed, i.e., y ≥ 0 is required. The solution is to go as far as possible, i.e., y = 0. (IC) and (IR) become

(p − q)(x − 0) ≥ e =⇒ x ≥ e/(p − q); (IC’)

0 + p(x − 0) ≥ w + e =⇒ x ≥ (w + e)/p. (IR’)

The problem becomes:

max_x pV − 0 − p(x − 0) ≡ max_x pV − px

s.t. (IC’) & (IR’)

The owner wants x to be as small as possible.

From y∗max = w − eq/(p− q) < 0, we have

(w + e)/p < e/(p− q). (9.11)

Therefore, the minimum x is x∗∗ = e/(p − q).

And the profit becomes

π = pV − px∗∗ = pV − pe/(p− q).

By (9.11), this profit level is lower than in the previous cases, where π = pV − w − e.

By (9.9), this profit level is still positive.
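The two cases can be checked with a small numerical sketch (V, p, q, e and the two values of w below are assumptions chosen only to satisfy (9.9) and (9.10)):

```python
V, p, q, e = 100.0, 0.8, 0.5, 10.0          # assumed value, success probabilities, effort cost

def contract(w):
    # Unobservable effort: pay x on success, y on failure.
    y_max = w - e * q / (p - q)
    if y_max >= 0:
        y = 0.0                             # any y <= y_max works; take y = 0
        x = (w + e - (1 - p) * y) / p       # binding participation constraint (IR)
    else:
        y = 0.0                             # fines infeasible, so y is pushed to zero
        x = e / (p - q)                     # binding incentive constraint (IC')
    ic = (p - q) * (x - y) >= e - 1e-9
    ir = y + p * (x - y) >= w + e - 1e-9
    profit = p * V - (p * x + (1 - p) * y)
    return x, y, ic, ir, profit, p * V - w - e

print(contract(20.0))   # y_max >= 0: profit equals the first-best pV - w - e = 50
print(contract(10.0))   # y_max < 0: profit 53.33 falls short of the first-best 60
```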

Example 9.3: Cost-Plus Contracts. This example is motivated by the cost-plus

contract. Government expenditures are often made on such a cost-plus basis, that is, the

government reimburses the supplier’s cost plus a normal profit. In this example, we are

concerned with the appropriate amount of reimbursement when the government does not observe the supplier’s cost.

Suppose the true average cost of production can take just two values: c1 and c2 (normal

profit included), with 0 < c1 < c2. We call the supplier with cost ci a Type-i supplier.


The supplier is privately informed of its own type. Before contracting, the government’s

estimate of the probability of the supplier being Type-1 is β1, and Type-2 is β2 = 1− β1.

The problem here is that the low-cost supplier would pretend to have high cost and get more reimbursement from the government. To mitigate the problem, the government

could purchase different amounts and offer distinct payments, depending on the cost

declared by the supplier. More specifically, the government would offer the following menu of contracts: if the supplier claims to have cost ci for i = 1, 2, the government purchases qi units and pays Ri. In game theory, the use of different contracts to separate supplier types is called “screening”.^5

The government gets benefit B(q) from quantity q. B(q) is strictly increasing and strictly concave in q, and

B′(0) > c2 (A)

so that the government would demand a positive quantity from a supplier of either type if

cost can be observed.

What is the optimal menu of contracts (q1, R1) and (q2, R2) when cost is unobservable?

Solution. Before analyzing the problem with unobservable cost, we will first analyze

the problem with observable cost.

Observable Cost. When cost is observable, the government could design the contract based on the supplier’s true type. The government’s problem when facing a Type-i supplier is:

max_{qi,Ri} B(qi) − Ri

s.t. Ri − ciqi ≥ 0. (IR)

qi ≥ 0, Ri ≥ 0.

The government’s objective is to maximize the benefit from purchasing qi minus the reimbursement Ri. The constraint (IR) is the supplier’s participation constraint, or individual rationality constraint. This constraint ensures that the supplier is better off accepting

^5 More generally, “screening models” refers to situations where an agent (the supplier here) has private information about his type before the principal (the government here) makes a contract offer. The principal then offers a menu of contracts in order to separate the different types.


the government’s contract, which pays Ri and incurs the cost of ciqi, compared to not

contracting with the government, which generates 0.

Before solving the problem, we make two observations:

1. (IR) must be binding. Otherwise, the government could lower Ri without violating the constraints, and the objective function would become larger.

2. Ri ≥ 0 is implied by (IR) and qi ≥ 0.

The government’s problem is reduced to

max_{qi} B(qi) − ciqi

s.t. qi ≥ 0.

Next, we solve the problem using Lagrange’s theorem.

• Form the Lagrangian: L(qi) = B(qi) − ciqi.

• The first-order necessary condition is

∂L/∂qi = B′(qi)− ci ≤ 0 and qi ≥ 0 with complementary slackness.

By Assumption (A) and the strict concavity of B(q), we have qi > 0 and

B′(qi) = ci. (9.12)

Thus, the optimal qi is given by (9.12). After obtaining qi, the optimal Ri is given by

binding (IR).

Unobservable Cost. When cost is unobservable, the government cannot offer a contract based on the supplier’s type. As indicated in the question, the government would offer

two contracts and let the supplier choose.

To make the suppliers willing to choose the contract designed for them, we must en-

sure that Type-1 supplier prefers its contract to Type-2’s, and similarly, Type-2 supplier

prefers its contract to Type-1’s:

R1 − c1q1 ≥ R2 − c1q2; (IC1)

R2 − c2q2 ≥ R1 − c2q1. (IC2)

These are the incentive compatibility constraints.


Moreover, we need to ensure that the suppliers would want to participate:

R1 − c1q1 ≥ 0; (IR1)

R2 − c2q2 ≥ 0. (IR2)

These are the participation constraints.

We also need to ensure qi ≥ 0 and Ri ≥ 0. The government’s problem is

max_{q1,q2,R1,R2} β1[B(q1) − R1] + β2[B(q2) − R2]

s.t. (IC1), (IC2), (IR1), (IR2)

q1 ≥ 0, q2 ≥ 0, R1 ≥ 0, R2 ≥ 0.

It is a maximization problem with 4 inequality constraints and 4 non-negative variables, i.e., 8 inequalities in total. These inequalities permit 2^8 = 256 patterns of binding and slack constraints. Solving the problem directly involves a lot of work, so we will first do some preliminary analysis to simplify the problem.

Lemma 1. Ri ≥ 0 is implied by (IR1), (IR2) and qi ≥ 0.

Proof. We take R1 as an example.

R1 ≥ c1q1 ≥ 0,

where the first inequality follows from (IR1) and the second from q1 ≥ 0.

R2 ≥ 0 follows similarly.

So, we could safely ignore the non-negativity constraints on Ri.

Lemma 2. (IR1) is implied by (IC1), (IR2) and q2 ≥ 0.

Proof. We want to show that (IR1) holds, i.e., R1 − c1q1 ≥ 0.

R1 − c1q1 ≥ R2 − c1q2 ≥ c2q2 − c1q2 = (c2 − c1)q2 ≥ 0,

where the first inequality follows from (IC1), the second from (IR2), and the third from q2 ≥ 0 and c1 < c2. Hence (IR1) is implied.

So, we could safely ignore (IR1).


From Lemma 1 and Lemma 2, the government’s problem could be simplified as follows:

max_{q1,q2,R1,R2} β1[B(q1) − R1] + β2[B(q2) − R2]

s.t. R1 − c1q1 ≥ R2 − c1q2; (IC1)

R2 − c2q2 ≥ R1 − c2q1; (IC2)

R2 − c2q2 ≥ 0; (IR2)

q1 ≥ 0, q2 ≥ 0.

Lemma 3. (IR2) must be binding in the optimal scheme, i.e., R2 − c2q2 = 0.

Proof. Suppose not, i.e., R2 − c2q2 > 0. Then q1 and q2 can be raised slightly by the same amount ε > 0 so that (IR2) still holds. After such a change,

(i) All constraints still hold. Such a change will affect the left-hand side and the right-

hand side of (IC1) and (IC2) equally, so the inequalities still hold. The non-

negativity constraints on qi still hold as q1 and q2 become larger.

(ii) The objective function gets larger since B(q) is strictly increasing.

Therefore, a scheme with R2 − c2q2 > 0 cannot be optimal. In other words, (IR2) must

be binding in the optimal scheme.

Lemma 4. (IC1) must be binding in the optimal scheme, i.e., R1 − c1q1 = R2 − c1q2.

Proof. Suppose not, i.e., R1 − c1q1 > R2 − c1q2. Then q1 can be slightly raised by ε so

that (IC1) still holds. After such a change,

(i) All constraints still hold. Such a change will not affect (IR2) and q2 ≥ 0. The non-

negativity constraint on q1 still holds as q1 gets larger. For (IC2), the right-hand

side gets smaller and the inequality still holds.

(ii) The objective function gets larger since B(q1) is strictly increasing.

Therefore, a scheme with R1 − c1q1 > R2 − c1q2 cannot be optimal. In other words,

(IC1) must be binding in the optimal scheme.

By Lemma 3 and Lemma 4, we have

R2 = c2q2 (R2)

R1 = c1q1 + (c2 − c1)q2 (R1)


Plugging (R2) and (R1) into (IC2), we have

R2 − c2q2 ≥ R1 − c2q1 ⇐⇒ (c2 − c1)(q1 − q2) ≥ 0 ⇐⇒ q1 ≥ q2, (IC2’)

where the first equivalence uses (R2) and (R1), and the second uses c1 < c2.

Plugging (R2) and (R1) into the objective function, and keeping the remaining constraints, the maximization problem is simplified as follows:

max_{q1,q2} β1[B(q1) − (c1q1 + (c2 − c1)q2)] + β2[B(q2) − c2q2]

s.t. q1 ≥ q2 (IC2’)

q2 ≥ 0

We could solve the maximization problem in the usual way. See Appendix A.

Here, we introduce another way of solving the problem. We first solve the relaxed problem

without (IC2’), and then show that the solution to the relaxed problem satisfies (IC2’)

and is thus also the solution to the initial problem.

The relaxed problem is as follows:

max_{q1,q2} β1[B(q1) − (c1q1 + (c2 − c1)q2)] + β2[B(q2) − c2q2]

s.t. q2 ≥ 0

1. Form the Lagrangian:

L(q1, q2) = β1 [B(q1)− (c1q1 + (c2 − c1)q2)] + β2 [B(q2)− c2q2] .

2. Apply Lagrange’s theorem to write out the first-order necessary conditions:

∂L/∂q1 = β1[B′(q1)− c1] = 0 (9.13)

∂L/∂q2 = −β1(c2 − c1) + β2[B′(q2) − c2] ≤ 0 and q2 ≥ 0 with complementary slackness.

(9.14)

From (9.13), we have

B′(q1) = c1.

Note that it is the same as the condition for Type-1 when cost is observable, see (9.12).


From (9.14),

(i) Case I: q2 = 0. Then by (9.14), we must have

−β1(c2 − c1) + β2[B′(0) − c2] ≤ 0 =⇒ B′(0) ≤ c2 + (β1/β2)(c2 − c1).

(ii) Case II: q2 > 0. Then by (9.14), we must have

− β1(c2 − c1) + β2[B′(q2) − c2] = 0 =⇒ B′(q2) = c2 + (β1/β2)(c2 − c1). (9.15)

For q2 > 0, by the strict concavity of B(q), we need

B′(0) > c2 + (β1/β2)(c2 − c1).

Therefore, the solution to the relaxed problem is

B′(q1) = c1, q2 = 0 if B′(0) ≤ c2 + (β1/β2)(c2 − c1);

B′(q1) = c1, B′(q2) = c2 + (β1/β2)(c2 − c1) if B′(0) > c2 + (β1/β2)(c2 − c1). (9.16)

Next, we show that (9.16) also solves the initial problem. We need to show that (IC2’),

i.e., q1 ≥ q2 holds in (9.16).

(i) Case I: q2 = 0. By (A), B′(0) > c2 > c1 = B′(q1), so q1 > 0; with q2 = 0, we get q1 > q2.

(ii) Case II: q2 > 0. Since c1 < c2 < c2 + (β1/β2)(c2 − c1), we have B′(q1) < B′(q2). Strict concavity of B(q) implies q1 > q2.

Therefore, the solution (9.16) is the solution to the simplified problem and is thus the

solution to the government’s maximization problem. Besides, the payments R1 and R2

are given by (R1) and (R2).
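A small numerical sketch of the optimal menu (the benefit function B(q) = 10 ln(1 + q), the costs, and the probabilities below are assumptions chosen only for illustration; B′(0) = 10 > c2, so (A) holds):

```python
import math

c1, c2 = 1.0, 2.0                       # assumed low and high average costs

def optimal_menu(beta1):
    beta2 = 1.0 - beta1
    # B(q) = 10*log(1+q), so B'(q) = 10/(1+q) and B'(q) = m  <=>  q = 10/m - 1.
    q1 = 10.0 / c1 - 1.0                # B'(q1) = c1, undistorted, as with observable cost
    cutoff = c2 + (beta1 / beta2) * (c2 - c1)
    q2 = 0.0 if 10.0 <= cutoff else 10.0 / cutoff - 1.0   # the two branches of (9.16)
    R2 = c2 * q2                        # binding (IR2)
    R1 = c1 * q1 + (c2 - c1) * q2       # binding (IC1); (c2 - c1)*q2 is the information rent
    return q1, q2, R1, R2

print(optimal_menu(0.5))   # both types produce; q2 is about 2.33, below the observable-cost level 4
print(optimal_menu(0.9))   # low-cost type very likely: the high-cost type is shut down (q2 = 0)
```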

Remark. The logic is similar to the following: since you are a student in the Economic and Management School of Wuhan University, if you are the best student in Wuhan University, then you are the best student in the Economic and Management School. If we want to find “the best student in the EMS of Wuhan University”, we could relax the problem and search for the best student in the whole university. The relaxed problem is easier to solve, since it involves fewer constraints. However, the solution to the relaxed problem may not be the solution to the initial problem. You must make sure that the conditions left out are indeed satisfied.


There are several points worth noticing.

1. When B′(0) ≤ c2 + (β1/β2)(c2 − c1), the high-cost supplier does not produce. This is likely to happen when

a) β2 is small: the probability that the supplier is a high-cost one is low, and

b) B′(0) is small: the benefit of having a high-cost supplier produce is small.

If this is the case, by setting q2 = 0, the government can effectively eliminate the incentive of a low-cost supplier to pretend to be a high-cost one.

2. When q2 > 0, by strict concavity of B(q), the optimal q2, which is given by (9.15),

is lower than the optimal q2 when cost is observable, which is given by (9.12):

B′(q2) = c2.

Lowering the quantity demanded from the high-cost supplier makes it less tempting for the low-cost supplier to declare high cost.

To elaborate the idea, observe that in equilibrium, the low-cost supplier gets

R1 − c1q1 = [c1q1 + (c2 − c1)q2] − c1q1 = (c2 − c1)q2,

using (R1).

This gain is referred to as the low-cost supplier’s information rent. The rent comes from the low-cost supplier’s ability to mimic the high-cost supplier. Lowering q2 reduces this information rent, and thus reduces the incentive to declare high cost.

For the government, lowering q2 allows it to pay a smaller R1 to the low-cost supplier while still keeping the low-cost supplier indifferent. The optimal q2 thus

balances the tradeoff between improving efficiency (q2 closer to the observable cost

case) and saving information rent (R1 smaller).

We could also illustrate the idea graphically. Figure 9.1 below illustrates the case with observable cost. Figure 9.1a shows the government’s maximization problem when the cost is c1. The orange area is the constraint set. The green area is the upper contour set of the objective function. The maximum is attained when the two curves are tangent. In Figure 9.1b, we put the maximization problem with the Type-2 supplier on top of the previous graph. Since c1 < c2, the line with slope c2 is steeper.


[Figure 9.1: Observable Cost, panels (a) and (b)]

From Figure 9.1b, the contract for the Type-2 supplier, (q∗2, R∗2), lies inside the orange area. Furthermore, the Type-1 supplier prefers this contract to the contract designed for him,

(q∗1, R∗1). Therefore, if the government cannot observe the supplier type, the contracts

in Figure 9.1b will not be incentive compatible.

To make the contract incentive compatible when costs are unobservable, one simple way

is to keep (q∗2, R∗2) unchanged and move the red line upward so that the Type-1 supplier is indifferent between the two contracts. This is shown in Figure 9.2a. Now the Type-1 supplier gets a positive surplus, which is referred to as information rent.

[Figure 9.2: Unobservable Cost, panels (a) and (b)]


The government can do better than this. It is possible to reduce the information rent left to the low-cost supplier. However, there is a price associated with doing so. To keep the contract incentive compatible for the low-cost supplier, (q∗2, R∗2) must move away from the original optimal contract; that is, efficiency is decreased. This tension between efficiency and rent extraction is the key trade-off faced by the government. The optimal solution is the one we calculated before.


Appendix A

Solving the maximization problem using Lagrange’s method.

max_{q1,q2} β1[B(q1) − (c1q1 + (c2 − c1)q2)] + β2[B(q2) − c2q2]

s.t. q1 ≥ q2 (IC2’)

q2 ≥ 0

1. Form the Lagrangian:

L(q1, q2, λ) = β1 [B(q1)− (c1q1 + (c2 − c1)q2)] + β2 [B(q2)− c2q2] + λ(q1 − q2).

2. Apply the Kuhn-Tucker theorem to write out the first-order necessary conditions:

∂L/∂q1 = β1[B′(q1) − c1] + λ = 0 (9.17)

∂L/∂q2 = −β1(c2 − c1) + β2[B′(q2) − c2] − λ ≤ 0 and q2 ≥ 0 with complementary slackness; (9.18)

∂L/∂λ = q1 − q2 ≥ 0 and λ ≥ 0 with complementary slackness. (9.19)

Next, we solve the problem using conditions (9.17), (9.18) and (9.19).

(i) Case I: q2 = 0.

a) Subcase I: q1 = q2 = 0. Then (9.18) becomes

− β1(c2 − c1) + β2[B′(0) − c2] − λ ≤ 0

=⇒ −β1(c2 − c1) + β2[B′(0) − c2] + β1[B′(0) − c1] ≤ 0   (substituting λ from (9.17))

=⇒ B′(0) ≤ c2,

contradicting (A).

b) Subcase II: q1 > q2 = 0. Since q1 > q2, by (9.19), we have λ = 0. Plugging

into (9.17), we have

B′(q1) = c1. (9.20)


We also need to ensure that (9.18) holds:

−β1(c2 − c1) + β2[B′(0) − c2] ≤ 0 =⇒ B′(0) ≤ c2 + (β1/β2)(c2 − c1).

(ii) Case II: q2 > 0. (9.18) becomes

− β1(c2 − c1) + β2[B′(q2) − c2] − λ = 0

=⇒ −β1(c2 − c1) + (1 − β1)[B′(q2) − c2] + β1[B′(q1) − c1] = 0   (substituting λ from (9.17))

=⇒ (1 − β1)B′(q2) + β1B′(q1) − c2 = 0 (9.18’)

a) Subcase I: q1 = q2 > 0. (9.18’) becomes

B′(q1) = c2.

Plugging into (9.17), we have λ = −β1[c2 − c1] < 0, contradicting (9.19).

b) Subcase II: q1 > q2 > 0. By (9.19), λ = 0. Then (9.17) becomes

β1[B′(q1)− c1] = 0 =⇒ B′(q1) = c1.

Plugging into (9.18’), we have

B′(q2) = c2 + β1

β2(c2 − c1).

For q2 > 0, we need

B′(0) > c2 + β1

β2(c2 − c1).

To conclude, the solution is

B′(q1) = c1, q2 = 0 if B′(0) ≤ c2 + (β1/β2)(c2 − c1);

B′(q1) = c1, B′(q2) = c2 + (β1/β2)(c2 − c1) if B′(0) > c2 + (β1/β2)(c2 − c1).
