Electronic copy available at: http://ssrn.com/abstract=789768
Strategic Exploitation of
a Common-Property Resource
under Uncertainty
Elena Antoniadou, Christos Koulovatianos,1 Leonard J. Mirman2,3
May 16, 2009
1Corresponding author: Department of Economics, University of Exeter, Streatham Court, Rennes Drive, Exeter, EX4 4PU, UK, Tel: +44-1392-26-3219, Fax: +44-1392-26-3242.
2Contacts: Elena Antoniadou: Australian National University, [email protected]; Christos Koulovatianos: University of Exeter, [email protected]; Leonard J. Mirman: University of Virginia, [email protected].
3We thank Gerhard Sorger, seminar participants in Vienna, and an anonymous referee for useful comments. Koulovatianos thanks the Austrian Science Fund, project P17886, for financial support.
Abstract
We study the impact of uncertainty on the strategies and dynamics of symmetric non-cooperative games among players who exploit a non-excludable resource that reproduces under uncertainty. We focus on a particular class of games that deliver a Nash equilibrium in linear-symmetric strategies of resource exploitation. We show that, for this class of games, the tragedy of the commons is always present. For various changes in the riskiness of the random primitives of the model we provide general characterizations of features of the model that explain links between the degree of riskiness and strategic exploitation decisions. Finally, we provide a specific example that demonstrates the usefulness of our general results and, within this example, we study cases where increases in risk amplify or mitigate the tragedy of the commons.
Keywords: Renewable resource exploitation, Stochastic non-cooperative dynamic games, Tragedy of the commons
JEL Classification Codes: C73, C72, C61, Q20, O13, D90, D43
1 Introduction
Games of common-property renewable resource exploitation have the feature
that each player chooses their current level of exploitation while (partly) controlling the future evolution of the resource, given the strategies of other players.
Models in which there is a dynamic element and in which resources are shared play an important role in economics, e.g., industrial organization models or models with natural resources. The fundamental, infinite-horizon setup, where all players have full information about the economic environment, has been studied in the economics literature almost exclusively within the deterministic framework. The main finding of this literature is that the equilibrium is characterized by a ‘tragedy of the commons.’ Namely, the higher the number of players, the higher the aggregate exploitation rate, and hence the lower the level of the resource in the long run.1
The tragedy of the commons results after a complex strategic accounting
by players in a perfect-foresight environment. In particular, it is transparent
to each player that overexploitation implies future trajectories of the common
resource that increase their present and future individual costs. Yet, given what
other players do, and what they will be doing throughout the infinite horizon,
it is often individually optimal to support a strategic path of aggregate overexploitation of the resource when the number of players increases. How does such
strategic behavior change if, instead of acting in a deterministic environment,
players act in an environment of uncertainty and rational expectations? Our
study contributes some answers to this general question, focusing on infinite-
horizon games where the law of reproduction of the resource is subject to random
shocks and players have rational expectations.2
The scope and aim of our analysis is to contribute to the understanding of
common-property resource games under uncertainty. Yet, stochastic dynamic games can be particularly complex and very difficult to characterize with general primitives, i.e. with general payoff functions of players, general laws of motion for resource renewal, and general distributions of random disturbances.3 At the
1See, for example, Mirman [16] and Levhari and Mirman [13], Levhari, Michener and Mirman [12], Benhabib and Radner [2], Dockner and Sorger [6], Sorger [18] and [19] and Koulovatianos and Mirman [11].
2Our focus on randomness in resource reproduction is a natural starting point for the study of uncertainty in resource games. In the real world, resources evolve according to stochastic laws of motion. Especially in the context of natural resources, as is the case with biological populations such as forests and fish species, these evolve subject to the existence of predators or climate conditions that are affected by random disturbances. In cases of governmental provision of infrastructure for companies, such as railroads, electricity grids, telecommunication networks, etc., financing and maintenance are also subject to random shocks, such as business cycles or political cycles.
3An example of a study examining the link between extraction decisions and uncertain reproduction outcomes under perfect competition, and also optimal resource preservation policies, is the fishery application of Mirman and Spulber [17]. For a paper studying uncertainty and games see Amir [1]. For studies pointing out technical issues in deterministic differential resource games, such as multiplicity of equilibrium strategies, arising even in setups with some simplifying assumptions on primitives, see Dockner and Sorger [6] and Sorger [18], while for fundamental proofs of equilibrium existence see Sundaram [23] and Dutta and Sundaram [7].
same time, the task of characterizing decisions in the presence of uncertainty in
a general framework can be very demanding even when there is only one player.4
We discuss several technical problems that arise in general dynamic games. In
particular, in resource games, part of the objective function of each player is
the other players’ strategies. When the Markov strategies of other players are
strictly concave, a player’s objective function may lose key properties, such
as concavity, differentiability, and continuity. These technical difficulties are
discussed in Mirman [16].
One special class of dynamic games where such difficulties can be avoided is where there is a symmetric Markov-Nash equilibrium of the game which is in linear Markov strategies.5 A game that falls in this class is the parametric example of Levhari and Mirman [13]. Our paper is devoted to the study of this tractable class of games and equilibria, a natural starting point for understanding players’ strategic behavior under uncertainty. In Section 3 we provide necessary and sufficient conditions, on the payoff functions of players and on the law of resource reproduction, for the existence of an equilibrium in linear-symmetric strategies.6 In Section 8 we provide a parametric example that falls into this class of games.
We deal with two basic issues concerning all models in this special class of dynamic games, where the Nash equilibrium is in linear-symmetric Markov strategies. The first is whether the tragedy of the commons holds in this equilibrium. In Section 4 we show that all games, stochastic or deterministic, that have a Markov-Nash equilibrium in linear-symmetric Markov strategies always imply a tragedy of the commons in this equilibrium. This means that, in this class of games, adding one more player always leads to a higher aggregate exploitation rate in the unique equilibrium. Second, we study the effect of the riskiness of the random primitives of the model on the players’ strategies. In particular, we highlight those features of the model that explain the links between the degree of riskiness and strategic exploitation decisions. In Section 4 we show that uncertainty may increase or decrease the equilibrium aggregate level of exploitation, relative to the deterministic case.
In Section 5 we study the applicability of a recursive procedure for calculating linear-symmetric strategies. Levhari and Mirman [13] used a method for solving dynamic games that starts from the one-period problem and continues recursively to the t-period problem, in order to obtain the infinite-horizon solution asymptotically. We find that, for the class of games that lead to linear-symmetric Markov strategies, the calculation method used by Levhari and Mirman [13] always converges to the infinite-period solution, as long as linear-symmetric solutions for the t-period problem exist.7
4For example, Mirman [15] analyzes uncertainty in a model with a single controller, providing a general result about the role of uncertainty on decisions in two-period models, and discussing issues arising in the infinite-horizon setup.
5We thank an anonymous referee for highlighting the need to clarify this issue. Our analysis does not restrict the search for optimal strategies to the linear class. Rather, it restricts attention to those dynamic games which have a symmetric equilibrium in linear Markov strategies, among all possible Markov strategies.
6This is not a global uniqueness result. However, linear-symmetric equilibrium strategies, where they exist, are unique amongst linear strategies. To the best of our knowledge there are no known sufficient conditions for global uniqueness of a Markov-Nash equilibrium for the general game we study.
We use these results in Section 6 to examine the link between the degree
of riskiness and strategic decisions. For games with the same payoff functions
and the same natural law of resource reproduction, we examine how different
shocks that are linked through second- or first-order stochastic dominance affect
strategies. In all cases we identify simple conditions on the payoff function of
players that explain the direction of the change in exploitation rates as the
stochastic structure of the shocks changes.
We then study specific parametric models in order to demonstrate the usefulness of our results. In Section 7 we present the Levhari-Mirman [13] example in the presence of uncertainty, and in Section 8 we present an extended parametric model that nests the Levhari-Mirman [13] example while maintaining the property of a unique Markov-Nash equilibrium in linear-symmetric strategies, but in which uncertainty plays a crucial role. We show that the Levhari-Mirman [13] model is a knife-edge case where the presence of uncertainty has no impact on the strategies. Our parametric model is a key contribution of this paper, indicating a new benchmark for initiating interesting questions about uncertainty
in games. One question that becomes possible to analyze with this parametric model is the effect of changes in risk on the intensity of the ‘tragedy of the
commons.’ In the context of this model, we show that, indeed, there are cases
where the ‘tragedy of the commons’ is mitigated, and cases where it is amplified,
as the random parameter becomes more risky.
We begin in Section 2 with a description of the general framework for our
analysis.
2 The general framework

Time is discrete and the horizon is infinite, i.e. t = 0, 1, .... Let the state variable, x_t, evolve naturally (when no consumption occurs) according to the law of motion,

x_{t+1} = z_t f(x_t).   (1)

We assume f′ > 0, f″ ≤ 0. The random variable z_t is i.i.d., independent of x_t, and

z_t ~ Θ(z), t = 0, 1, ....

We assume that Θ, the distribution function of z, has support Z ⊆ R_+ and mean E(z) < ∞.
We consider N ≥ 1 identical players. In period t, each player i ∈ {1, ..., N} consumes c_{i,t} ≥ 0 units of the available stock, and then a realization of the
7The standard Inada condition on the payoff (utility) functions and a simple condition on the distribution function of the random production shock are sufficient to guarantee the existence of solutions to the associated t-period problem.
random shock takes place. Next period’s level of x is given by

x_{t+1} = z_t f( x_t − ∑_{i=1}^N c_{i,t} ).   (2)
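As a concrete illustration of the law of motion (2), the sketch below simulates the stock under given linear Markov strategies. The primitives here are illustrative assumptions, not part of the general framework: f(x) = x^α as in the Levhari-Mirman example, and a lognormal i.i.d. shock normalized so that E(z) = 1.

```python
import math
import random

def simulate_resource(x0, gammas, alpha=0.5, sigma=0.1, periods=50, seed=0):
    """Simulate x_{t+1} = z_t * f(x_t - sum_i c_{i,t}) with f(x) = x**alpha,
    linear Markov strategies c_{i,t} = gammas[i] * x_t, and lognormal
    i.i.d. shocks z_t normalized so that E(z) = 1."""
    total_rate = sum(gammas)
    assert 0.0 < total_rate < 1.0, "interior strategies need an aggregate rate in (0,1)"
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(periods):
        z = math.exp(rng.gauss(-0.5 * sigma ** 2, sigma))  # E(z) = 1
        x = z * ((1.0 - total_rate) * x) ** alpha          # law of motion (2)
        path.append(x)
    return path

path = simulate_resource(x0=1.0, gammas=[0.2, 0.2])  # N = 2 symmetric players
```

Because f is strictly positive on positive stocks and the aggregate rate is interior, the simulated stock stays strictly positive along any sample path.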
Each player i ∈ {1, ..., N} maximizes his expected discounted utility over an infinite-period horizon,

E_0 [ ∑_{t=0}^∞ β^t u(c_{i,t}) ].

We assume u : R_+ → R is twice continuously differentiable, with u′ > 0 and u″ < 0. All players have the same stage utility function, u, and discount factor β ∈ (0, 1). We study Markov-Nash equilibria, i.e. equilibria with strategies of the form c_{i,t} = c_i(x_t), i = 1, ..., N. The problem of player i ∈ {1, ..., N} can be expressed in Bellman equation form as,

V_{s,i}(x) = max_{c_i ≥ 0} { u(c_i) + β E[ V_{s,i}( z f( x − c_i − ∑_{j=1, j≠i}^N c_j(x) ) ) ] }.   (3)
We focus on two issues: (a) the strategic behavior of players, in particular, the “tragedy of the commons,” and (b) the effect of uncertainty, or change in risk, on consumption strategies. We study the effects of “changing risk” using the concepts of first- and second-order stochastic dominance. We also compare decisions made in the stochastic model with decisions made in a version of the deterministic model in which the shock is always equal to its mean, E(z) = z̄.8 In the deterministic model, player i’s problem, given the strategies of all other players, is

V_{d,i}(x) = max_{c_i ≥ 0} [ u(c_i) + β V_{d,i}( z̄ f( x − c_i − ∑_{j=1, j≠i}^N c_j(x) ) ) ].   (4)
We distinguish between the stochastic model, carrying the subscript “s,” and the deterministic model, with subscript “d.” Let S_s (resp. S_d) denote the Markov equilibrium strategies of the stochastic (resp. deterministic) game, i.e.

S_s = { {c_i^*(x)}_{i=1}^N | ∀ i ∈ {1, ..., N}, c_i^*(x) solves problem (3), given c_j(x) = c_j^*(x), j ∈ {1, ..., N} with j ≠ i }

and

S_d = { {c_i^*(x)}_{i=1}^N | ∀ i ∈ {1, ..., N}, c_i^*(x) solves problem (4), given c_j(x) = c_j^*(x), j ∈ {1, ..., N} with j ≠ i }.
8 See Hahn [9], Stiglitz [22] and Mirman [15].
With the assumptions imposed so far, it is not possible to analyze the complete equilibrium sets, nor to give conditions for the existence of a globally unique equilibrium in either the stochastic or the deterministic game. The presence of other players’ strategies in problems (3) and (4) gives rise to various potential difficulties.
Suppose, for the moment, that the value function is twice continuously differentiable.9 The first-order condition, from the dynamic program (3), is

u′(c_i) = β f′(y_i) E[ z V_{s,i}′( z f(y_i) ) ], with y_i = x − c_i − ∑_{j=1, j≠i}^N c_j(x).   (5)

Using the envelope theorem on (3) yields,

V_{s,i}′(x) = [ 1 − ∑_{j=1, j≠i}^N c_j′(x) ] β f′(y_i) E[ z V_{s,i}′( z f(y_i) ) ].   (6)

Combining (6) with (5),

V_{s,i}′(x) = u′(c_i) [ 1 − ∑_{j=1, j≠i}^N c_j′(x) ].   (7)
A crucial technical difficulty is revealed in (7). While u(·) is strictly concave, the strategies c_j(·) of the other players need not be convex, and the value function V_{s,i}(·) need not be concave. In fact, the possible non-concavity of the value function is not the only problem that may arise in this setup from the presence of the strategies of other players in a player’s value function. Mirman [16] provides several examples of games using functions u and f, which meet our general assumptions, but lead to value functions that are not concave, not differentiable, not even continuous, and may admit multiple solutions.10 What is revealing about the examples in Mirman [16] is that, if the same functions u and f were used in a single-controller optimization problem, all the resulting value functions would be twice continuously differentiable and strictly concave.
3 Games with a Markov-Nash equilibrium in
linear-symmetric strategies
Our parametric example in Section 8 motivates our analysis. In that example, problems associated with the value function, as identified in Section 2, are
avoided due to the linearity of the optimal symmetric strategies. Therefore, in
line with our parametric example, from this point onward we restrict attention
9This is not a valid assumption for the general problem, as we discuss below.
10Such problems do not preclude the existence of equilibria.
to games that deliver a symmetric Markov-Nash equilibrium in linear strategies, i.e. games with Markov equilibrium strategies of the form

c_i^*(x) = γx, with γ ∈ (0, 1/N), for all x > 0, all i ∈ {1, ..., N}.   (8)

This class of games is non-empty. A well-studied model with a Markov-Nash equilibrium in linear-symmetric strategies is the model of Levhari and Mirman [13], where u(c) = ln(c), f(x) = x^α, and α ∈ (0, 1).11 Our parametric example in Section 8 gives an extended pair of functions u and f that nests the Levhari-Mirman example, and delivers an equilibrium in linear-symmetric Markov strategies in the stochastic and corresponding deterministic game.12
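For the Levhari-Mirman primitives the linear-symmetric equilibrium rate is known in closed form, γ = (1 − αβ)/(N − (N − 1)αβ). As a numerical sanity check (our illustration, substituting the envelope expression for the value function into the first-order condition (10)), the sketch below verifies that the condition holds for every x > 0 and that the realization of the shock cancels out under log utility:

```python
def foc_residual(x, gamma, N, alpha, beta, z):
    """Residual of the first-order condition for u(c) = ln(c), f(x) = x**alpha,
    substituting V'(x) = (1 - (N - 1)*gamma) / (gamma * x) from the envelope
    condition; with log utility the shock z cancels inside E[z V'(z f(y))]."""
    y = (1.0 - N * gamma) * x                                        # stock left to reproduce
    marg_u = 1.0 / (gamma * x)                                       # u'(gamma * x)
    cont = z * (1.0 - (N - 1) * gamma) / (gamma * (z * y ** alpha))  # z * V'(z * f(y))
    return -marg_u + beta * alpha * y ** (alpha - 1.0) * cont

N, alpha, beta = 3, 0.5, 0.9
gamma = (1 - alpha * beta) / (N - (N - 1) * alpha * beta)  # closed-form rate
residuals = [foc_residual(x, gamma, N, alpha, beta, z)
             for x in (0.5, 1.0, 2.0) for z in (0.5, 1.0, 1.5)]
# residuals vanish (up to float error) for every (x, z) pair: the strategy is
# linear in x and unaffected by the shock, the knife-edge property of log utility.
```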
For a game ⟨u, f, Θ⟩ in this class of games,13 strategies (8) solve the dynamic program (3). The linearity of the optimal strategies allows us to avoid problems associated with the non-differentiability of the value function. These strategies satisfy conditions (5) and (6), and therefore condition (7), which becomes,

V_s′(x) = [1 − (N − 1)γ_s] u′(γ_s x).   (9)

Therefore, in this equilibrium V_s′ > 0, while V_s″ exists and is strictly negative. In other words, a Markov-Nash equilibrium with linear symmetric strategies ensures that V_s is twice continuously differentiable, strictly increasing and strictly concave. Notice that, since we focus on symmetric strategies, the index i is dropped from the value function.
Since models that deliver linear-symmetric Markov equilibrium strategies are identified, we proceed with a general analysis for this class of models. First we give conditions for the existence of a linear-symmetric equilibrium.

A game ⟨u, f, Θ⟩ can have at most one linear-symmetric strategy in the equilibrium set S_s, and similarly a game ⟨u, f, z̄⟩ can have at most one linear-symmetric strategy in the equilibrium set S_d. Consider the stochastic case first. Fix x > 0, and consider (5) assuming (interior) linear symmetric strategies c_j(x) = γx. This necessary condition implies14

G_s(γ) = −u′(γx) + β f′((1 − Nγ)x) E[ z V_s′( z f((1 − Nγ)x) ) ] = 0 for all x > 0.   (10)
Similarly, in the case of ⟨u, f, z̄⟩, the necessary condition from (4) implies
11Levhari and Mirman [13] also examine cases of non-symmetric strategies by allowing the discount factors of players to differ.
12We believe that this equilibrium is globally unique but we provide no explicit proof of this result, which in any case is not needed for our analysis.
13Since players are identical and we focus on state-dependent (Markov) strategies, it suffices to denote a game only by ⟨u, f, Θ⟩ for simplicity.
14Notice that since (10) must be met for all x > 0 in the case of linear-symmetric strategies, the function G_s(·) does not depend on x in equilibrium (i.e., when γ = γ_s). Even if the left-hand side of (10) depends on x whenever (10) is not met with equality (i.e., when γ ≠ γ_s), this potential dependence on x does not affect our analysis, so we discard x for the sake of simplicity.
G_d(γ) = −u′(γx) + β z̄ f′((1 − Nγ)x) V_d′( z̄ f((1 − Nγ)x) ) = 0 for all x > 0.   (11)

Notice that

G_s′(γ) > 0 and G_d′(γ) > 0 for all γ ∈ (0, 1/N),   (12)

so there can be at most one solution in each case.

In the games we study, the equilibrium strategies, γ_s and γ_d, of the games ⟨u, f, Θ⟩ and ⟨u, f, z̄⟩ respectively, are the unique solutions to:15

G_s(γ_s) = 0 and G_d(γ_d) = 0.   (13)
While functions G_s and G_d are convenient to work with in order to characterize properties of the Markov-Nash equilibrium in linear strategies, they do not provide the best way to verify the existence of a solution, since the value functions themselves enter the expressions in (10) and (11). Theorem 1 gives necessary and sufficient conditions so that there is precisely one such symmetric equilibrium in linear strategies in the stochastic and the deterministic games respectively, based on the data of the corresponding game.16
Theorem 1 Game ⟨u, f, Θ⟩ has a unique Nash equilibrium in linear symmetric strategies if and only if there exists only one γ_s ∈ (0, 1/N) such that

Φ_s(x, γ_s) = k_s, some k_s ∈ R, for all x > 0,   (14)

where

Φ_s(x, γ) = u(γx) − β [1 − (N − 1)γ]/(1 − Nγ) E[ u( γ z f((1 − Nγ)x) ) ]

and Θ is such that all players’ value functions are well-defined.17 Game ⟨u, f, z̄⟩ has a unique Nash equilibrium in linear symmetric strategies if and only if there exists only one γ_d ∈ (0, 1/N) such that

Φ_d(x, γ_d) = k_d, some k_d ∈ R, for all x > 0,   (15)
15Amir [1] shows that for some games an equilibrium does not exist in the deterministic case, while there is at least one equilibrium for the stochastic version of the same model. In our extended example, presented in Section 8, linear-symmetric equilibrium strategies exist in both the deterministic and the stochastic game.
16For the approach followed in the proof of Theorem 1, see Chang [5], Xie [24], and Shimomura and Xie [20]. Note that this result is not an equilibrium global uniqueness result.
17Whether a player’s optimization problem is well-defined in a stochastic Markovian game can depend on the nature of the shock. For example, if the support of the shock is unbounded, conditions must be placed on the distribution of the shock in order to guarantee that the value functions of players exist. In the context of the general single-controller stochastic growth model (which is the same as the model of Brock and Mirman [4]), Stachurski [21] identifies a simple condition on the mean of some monotonic transformation of the random shock that is sufficient to guarantee a well-behaved optimization problem and a well-defined long-run stationary distribution. Unlike Stachurski [21], we do not provide such a condition for the general game, but we do so in the context of our more specific analysis in Section 8.
where

Φ_d(x, γ) = u(γx) − β [1 − (N − 1)γ]/(1 − Nγ) u( γ z̄ f((1 − Nγ)x) )

and z̄ is such that all players’ value functions are well-defined.
Proof. For the necessity part, in the case of the stochastic game, suppose game ⟨u, f, Θ⟩ has an equilibrium in linear-symmetric strategies, c_i^*(x) = γ_s x for all i, {c_i^*(x)}_{i=1}^N ∈ S_s. Consider the first-order condition (5). By hypothesis this becomes:

u′(γ_s x) − [1 − (N − 1)γ_s] β f′((1 − Nγ_s)x) E[ z u′( γ_s z f((1 − Nγ_s)x) ) ] = 0.   (16)

Condition (16) holds for all x > 0. Therefore, integrating with respect to x yields the expression corresponding to Φ_s(x, γ_s) on the left-hand side. Hence Φ_s(x, γ_s) = k_s for some k_s ∈ R, for all x > 0.

For the sufficiency part, assume that there is only one γ_s ∈ (0, 1/N) such that Φ_s(x, γ_s) = k_s for some k_s ∈ R, for all x > 0, and let Θ be such that the value function of each player is well-defined. Then, differentiating both sides of Φ_s(x, γ_s) = k_s with respect to x leads to (16), which is the necessary condition for a linear-symmetric optimum. Differentiating both sides of (9) with respect to x yields V_s″(x) = [1 − (N − 1)γ_s] γ_s u″(γ_s x) < 0, since γ_s ∈ (0, 1/N). So c_i^*(x) = γ_s x for all i ∈ {1, ..., N} is the only Nash equilibrium in symmetric linear strategies.

For the deterministic case, suppose that game ⟨u, f, z̄⟩ has an equilibrium in linear-symmetric strategies, c_i^*(x) = γ_d x for all i, {c_i^*(x)}_{i=1}^N ∈ S_d. The necessary first-order condition becomes:

u′(γ_d x) = [1 − (N − 1)γ_d] β z̄ f′((1 − Nγ_d)x) u′( γ_d z̄ f((1 − Nγ_d)x) ).   (17)

The remainder of the proof is analogous to that for the stochastic game and is omitted.
In Section 8 we use Theorem 1 in order to identify appropriate functions u, f, Θ, and the level of z̄ in our parametric example. For the general results we concentrate our analysis on the unique equilibrium in linear-symmetric Markov strategies for games which satisfy the conditions of Theorem 1. We use the shorthand notation ⟨u, f, Θ⟩_∞(γ_s) and ⟨u, f, z̄⟩_∞(γ_d) to denote the game and the equilibrium we are studying in the stochastic and deterministic game respectively.
4 The tragedy of the commons and uncertainty: a first look

Theorem 2 demonstrates a global result about the strategic behavior of players. We show that for all games ⟨u, f, Θ⟩_∞(γ_s) and ⟨u, f, z̄⟩_∞(γ_d) the tragedy of the commons always holds.
Theorem 2 Suppose games ⟨u, f, Θ⟩ and ⟨u, f, z̄⟩ satisfy the conditions of Theorem 1 and consider ⟨u, f, Θ⟩_∞(γ_s) and ⟨u, f, z̄⟩_∞(γ_d). As the number of players, N, increases, the aggregate exploitation rates, Ω_s ≡ Nγ_s and Ω_d ≡ Nγ_d, increase.

Proof. Consider ⟨u, f, Θ⟩_∞(γ_s). For any x > 0, a player’s necessary condition (10) can be expressed as a function of the aggregate exploitation rate Ω ≡ Nγ, as

Ψ_s(Ω, N) = −u′(Ωx/N) + h_s(Ω) = 0, where

h_s(Ω) = β f′((1 − Ω)x) E[ z V_s′( z f((1 − Ω)x) ) ].

Given that V_s″ < 0 and f″ ≤ 0, h_s′(Ω) > 0. Applying the implicit function theorem to the equilibrium condition

−u′(Ωx/N) + h_s(Ω) = 0,

we get

∂Ω/∂N = [ −u″(Ωx/N) ] / [ −u″(Ωx/N)(x/N) + h_s′(Ω) ] · (Ωx/N²) > 0,   (18)

which proves the result. The argument for the deterministic case is analogous and is omitted.
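In the Levhari-Mirman parametric case the result of Theorem 2 can be made fully explicit. Using the closed-form equilibrium rate γ(N) = (1 − αβ)/(N − (N − 1)αβ) for that example (an illustration of the theorem, not part of its proof), the aggregate rate Ω(N) = Nγ(N) is strictly increasing in N and approaches 1, i.e., full exploitation of the current stock:

```python
def aggregate_rate(N, alpha=0.5, beta=0.9):
    """Aggregate exploitation rate Omega = N * gamma in the Levhari-Mirman
    example, where gamma = (1 - alpha*beta) / (N - (N - 1)*alpha*beta)."""
    gamma = (1 - alpha * beta) / (N - (N - 1) * alpha * beta)
    return N * gamma

rates = [aggregate_rate(N) for N in range(1, 11)]
# rates[0] = 1 - alpha*beta (single controller); each extra player raises Omega,
# and Omega stays below 1 for any finite N.
```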
Theorem 2 is sharp for the class of games we examine. It indicates that the ‘tragedy of the commons’ is robust for games of joint exploitation.18 However, Theorem 2 does not address the effect of the introduction of uncertainty on the tragedy of the commons. Theorem 3 shows that uncertainty may increase or decrease the equilibrium aggregate level of exploitation, by comparing the equilibria of pairs of games ⟨u, f, Θ⟩_∞(γ_s) and ⟨u, f, z̄⟩_∞(γ_d). Theorem 3 is similar to Theorem 2 in Mirman [15], adjusted to accommodate a symmetric equilibrium in linear strategies.
Theorem 3 Suppose games ⟨u, f, Θ⟩ and ⟨u, f, z̄⟩ satisfy the conditions of Theorem 1 and consider ⟨u, f, Θ⟩_∞(γ_s) and ⟨u, f, z̄⟩_∞(γ_d). Then, for all x > 0,

γ_s ⋛ γ_d ⟺ E[Λ_s(z)] ⋚ Λ_d(z̄),   (19)

where Λ_s(z) ≡ z V_s′(z ŷ), Λ_d(z) ≡ z V_d′(z ŷ), and ŷ ≡ f((1 − Nγ_d)x).
Proof. Fix x > 0. Given that both the deterministic and the stochastic strategies are interior, γ_s, γ_d ∈ (0, 1/N), we have G_s(γ_s) = G_d(γ_d) = 0. Since G_s is strictly increasing on (0, 1/N) (see (12)),

γ_s ⋛ γ_d ⟺ G_s(γ_d) ⋚ 0 ⟺ G_s(γ_d) ⋚ G_d(γ_d).   (20)
18Other studies that examine the issue of the tragedy of the commons include Dutta and
Sundaram [8], Sorger [18], [19], and Dockner and Sorger [6].
Moreover, from (10),

G_s(γ_d) = −u′(γ_d x) + β f′((1 − Nγ_d)x) E[ z V_s′( z f((1 − Nγ_d)x) ) ],

or

G_s(γ_d) = −u′(γ_d x) + β f′((1 − Nγ_d)x) E[Λ_s(z)].   (21)

Similarly, from (11),

G_d(γ_d) = −u′(γ_d x) + β f′((1 − Nγ_d)x) Λ_d(z̄).   (22)

Combining (21) and (22) with (20), (19) holds for all x > 0.
Theorem 3 gives necessary and sufficient conditions for the direction of the impact of uncertainty on consumption. However, these are based on characteristics (relative curvature) of the two value functions and not on the primitives of the games.19 In order to find the link to the primitives of the games, notice that from (9),

V_s′(x) = [1 − (N − 1)γ_s] u′(γ_s x)   (23)

and

V_d′(x) = [1 − (N − 1)γ_d] u′(γ_d x).   (24)

Hence

E[Λ_s(z)] = [1 − (N − 1)γ_s] E[ z u′( γ_s z f((1 − Nγ_d)x) ) ]   (25)

and

Λ_d(z̄) = [1 − (N − 1)γ_d] z̄ u′( γ_d z̄ f((1 − Nγ_d)x) ).   (26)

This connection between the value functions and the primitives of the model proves useful in identifying the role of the model’s primitives in the mechanics behind the effects of uncertainty on the players’ strategies.
Some of our proofs in the remainder of the paper rely on the existence of a
well-behaved recursive mapping for calculating the equilibrium. This calculation
procedure is the solution technique suggested by Levhari and Mirman [13]. We
next present some key results about this procedure.
19Within the class of games we discuss, explicit value functions can be derived, thus making the application of Theorem 3 straightforward. In our parametric examples in Section 8, we provide such explicit formulas for the value functions. The necessity part of Theorem 3 proves useful in characterizing parameter choices that determine the effect of uncertainty on strategies.
5 The Levhari-Mirman recursive procedure

Levhari and Mirman [13] start from the static, one-period, symmetric equilibrium, where the consumption rates of players are equal to 1/N. They use this strategy in order to form the value function, and then they continue with the two-period problem, calculating the symmetric strategies again and generalizing the process to the t-period problem. In general, if a linear symmetric strategy exists for the t-period problem, then a tractable recursive mapping on the consumption rates γ^(t) can be constructed, where γ^(t) denotes the symmetric-equilibrium consumption strategy of the t-period problem. Characterizing the evolution of γ^(t), as t increases, is sufficient to characterize the evolution of the linear-symmetric consumption functions in our class of models.
For t = 2, 3, ..., the t-period problem of player i ∈ {1, ..., N} in Bellman form is

V_{s,i}^(t)(x) = max_{c_i ≥ 0} { u(c_i) + β E[ V_{s,i}^(t−1)( z f( x − c_i − ∑_{j=1, j≠i}^N c_j^(t)(x) ) ) ] },   (27)

for the stochastic case, where V_{s,i}^(t) and V_{s,i}^(t−1) are the t- and (t−1)-period value functions of player i, and c_j^(t) is the t-period strategy of player j. Similarly, in the deterministic case, for t = 2, 3, ..., the t-period problem of player i ∈ {1, ..., N} is

V_{d,i}^(t)(x) = max_{c_i ≥ 0} [ u(c_i) + β V_{d,i}^(t−1)( z̄ f( x − c_i − ∑_{j=1, j≠i}^N c_j^(t)(x) ) ) ].   (28)
Again, we focus on (interior) linear-symmetric Markov-Nash strategies. For the stochastic game, c_i^(t)*(x) = γ_s^(t) x for all x > 0, with γ_s^(t) ∈ (0, 1/N) for all t, and for the deterministic game, c_i^(t)*(x) = γ_d^(t) x for all x > 0, with γ_d^(t) ∈ (0, 1/N) for all t.

Assuming c_j^(t)*(x) = γ_s^(t) x in (27), the necessary first-order condition of the (t+1)-period problem of player i ∈ {1, ..., N} is,

−u′(c_i) + [1 − (N − 1)γ_s^(t)] β f′(y_i) E[ z u′( γ_s^(t) z f(y_i) ) ] = 0,   (29)

where y_i = x − c_i − ∑_{j=1, j≠i}^N c_j^(t+1)(x) and c_j^(t+1) is the strategy of a player j ≠ i.20
20This necessary condition is derived from (27) following the same steps as in the infinite-horizon case, i.e., after applying the envelope theorem to (27). Notice that (29) does not restrict the player to linear strategies in period t + 1.
Therefore, linear symmetric equilibrium strategies, γ_s^(t+1), for the (t+1)-period problem, solve,21

Ψ_s( γ_s^(t+1), γ_s^(t) ) = 0,   (30)

where

Ψ_s( γ, γ_s^(t) ) = g(γ) + m_s( γ, γ_s^(t) ),   (31)

with g(γ) = −u′(γx), and

m_s( γ, γ_s^(t) ) = [1 − (N − 1)γ_s^(t)] β f′((1 − Nγ)x) E[ z u′( γ_s^(t) z f((1 − Nγ)x) ) ].

It follows that, due to the assumptions u″ < 0 and f″ ≤ 0,

Ψ_{s,1}( γ, γ_s^(t) ) > 0 and Ψ_{s,2}( γ, γ_s^(t) ) < 0, ∀ x > 0, γ ∈ (0, 1/N), γ_s^(t) ∈ (0, 1/N].   (32)

Therefore, if γ_s^(t+1), the solution to (30), exists and lies in the open interval (0, 1/N), then, applying the implicit function theorem to (30),

dγ_s^(t+1)/dγ_s^(t) = −Ψ_{s,2}( γ_s^(t+1), γ_s^(t) ) / Ψ_{s,1}( γ_s^(t+1), γ_s^(t) ) > 0, ∀ x > 0, γ_s^(t) ∈ (0, 1/N].   (33)
The same remarks hold for the deterministic game. Given γ_d^(t), γ_d^(t+1) is the solution to

Ψ_d( γ_d^(t+1), γ_d^(t) ) = 0,   (34)

where

Ψ_d( γ, γ_d^(t) ) = g(γ) + m_d( γ, γ_d^(t) ),   (35)

with

m_d( γ, γ_d^(t) ) = [1 − (N − 1)γ_d^(t)] β z̄ f′((1 − Nγ)x) u′( γ_d^(t) z̄ f((1 − Nγ)x) ).

The two conditions, (30) and (34), define two recursive mappings.
Definition 4 Consider ⟨u, f, Θ⟩ and ⟨u, f, z̄⟩. The mapping T_s : [0, 1/N] → [0, 1/N] is given by Ψ_s( T_s(γ), γ ) = 0. The mapping T_d : [0, 1/N] → [0, 1/N] is given by Ψ_d( T_d(γ), γ ) = 0.
21As above, with function G_s(·), (30) must be met for all x > 0 in the case of linear-symmetric strategies, so the function Ψ_s(·) does not depend on x in equilibrium (i.e., when the expression given by (31) is evaluated at ( γ_s^(t+1), γ_s^(t) ), t = 1, 2, ...). Even if the expression given by (31) depends on x whenever (30) is not met with equality (i.e., when this expression is evaluated at some ( γ, γ_s^(t) ) with γ ≠ γ_s^(t+1), t = 1, 2, ...), this potential dependence on x does not affect our analysis, so we discard x for the sake of simplicity.
The following results provide a characterization of these two mappings. In particular, Lemma 5 shows the importance of imposing an Inada condition on the utility function for obtaining interior solutions (this Inada condition is a sufficient condition).

Lemma 5 If lim_{c→0} u′(c) = ∞, then for all γ_s^(1) ∈ (0, 1/N] and all γ_d^(1) ∈ (0, 1/N], the sequences {γ_s^(t)}_{t=2}^∞ generated by the mapping T_s, and {γ_d^(t)}_{t=2}^∞ generated by T_d, are such that γ_s^(t) and γ_d^(t) are unique, with γ_s^(t), γ_d^(t) ∈ (0, 1/N), t = 2, 3, ....
Proof. See Appendix.
Lemma 5 leads to a proof of the following result:

Theorem 6 Consider ⟨u, f, Θ⟩_∞(γ_s) and ⟨u, f, z̄⟩_∞(γ_d) and suppose the one-period games have a symmetric equilibrium strategy, γ_s^(1) = γ_d^(1) = 1/N. If lim_{c→0} u′(c) = ∞, then, starting from γ_s^(1) = γ_d^(1) = 1/N, the recursive mappings T_s and T_d are convergent, with lim_{t→∞} γ_s^(t) = γ_s and lim_{t→∞} γ_d^(t) = γ_d.
Proof. See Appendix.
Theorem 6 shows that the procedure that Levhari and Mirman [13] suggested and implemented in their example leads to recursive computability of the infinite-horizon strategies for a more general class of games, i.e., the class of games ⟨u, f, Θ⟩_∞(γ_s) and ⟨u, f, z̄⟩_∞(γ_d) which, in addition, have a unique symmetric equilibrium in the stage game, and which satisfy lim_{c→0} u′(c) = ∞. While this result is interesting in its own right (as we have identified a reliable calculation procedure), some properties of the mappings T_s and T_d are useful in the analysis of uncertainty. Lemma 7 states these properties of T_s and T_d.
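For the Levhari-Mirman primitives, u(c) = ln(c) and f(x) = x^α, the recursion can be written explicitly: substituting into the (t+1)-period first-order condition (29) (our derivation for this special case, with the shock cancelling under log utility) gives γ^(t+1) = 1/( N + αβ(1/γ^(t) − (N − 1)) ). Iterating from the static rate γ^(1) = 1/N illustrates the convergence asserted by Theorem 6:

```python
def levhari_mirman_iterates(N, alpha, beta, T=200):
    """Iterate the t-period recursion for u(c) = ln(c), f(x) = x**alpha:
    gamma_{t+1} = 1 / (N + alpha*beta*(1/gamma_t - (N - 1))),
    starting from the one-period (static) rate gamma_1 = 1/N."""
    g = 1.0 / N
    iterates = [g]
    for _ in range(T):
        g = 1.0 / (N + alpha * beta * (1.0 / g - (N - 1)))
        iterates.append(g)
    return iterates

N, alpha, beta = 2, 0.5, 0.9
seq = levhari_mirman_iterates(N, alpha, beta)
limit = (1 - alpha * beta) / (N - (N - 1) * alpha * beta)  # infinite-horizon rate
```

In terms of 1/γ the map is an affine contraction with modulus αβ < 1, so the iterates converge monotonically to the infinite-horizon rate, consistent with (33) and Theorem 6.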
Lemma 7 Consider ⟨u, f, Θ⟩_∞(γ_s) and ⟨u, f, z̄⟩_∞(γ_d) and suppose the one-period games have a symmetric equilibrium strategy, γ_s^(1) = γ_d^(1) = 1/N. If lim_{c→0} u′(c) = ∞, then for any interval I = [a, b] ⊆ (0, 1/N],

T_s(a) ≥ a and T_s(b) ≤ b ⟹ γ_s ∈ I (with γ_s ∈ (a, b) if both inequalities are strict),

and

lim_{t→∞} γ_s^(t) = γ_s for all γ_s^(1) ∈ I.

Moreover, for any interval I = [a, b] ⊆ (0, 1/N],

T_d(a) ≥ a and T_d(b) ≤ b ⟹ γ_d ∈ I (with γ_d ∈ (a, b) if both inequalities are strict),

and

lim_{t→∞} γ_d^(t) = γ_d for all γ_d^(1) ∈ I.
Proof. See Appendix.
Lemma 7 is crucial for the comparisons that follow. Specifically, in comparing two distinct models (e.g. the stochastic and the deterministic, or two models with different stochastic structures), we can view the solution to one model as a starting point for calculating the solution to the other model. If this starting point drives the necessary condition of the second model to be positive or negative, then we can identify the direction in which the strategy must be updated. The results in Lemma 7 yield a method for identifying where the fixed point (the infinite-horizon equilibrium) of the second model lies.
6 Uncertainty and strategic decisions
With Lemma 7 we are able to identify the primitive features of the model
that are responsible for the impact of uncertainty on strategies. For games that
satisfy the conditions of the lemma, we show that the result of Theorem 3 hinges
on features of the utility function, u, alone, and not on the production function, f.
Of course, f plays an implicit role, since features of both u and f interact in placing
the game in this class of games. Nevertheless, our results identify simple conditions
on u that lead to specific effects of uncertainty on strategies.
In addition to analyzing the comparison between the stochastic and the
deterministic game, we study the effect of changes in risk under (i) a second-order
stochastic dominance (SOSD) change, and (ii) a first-order stochastic dominance
(FOSD) change in the distribution of the shock. We find that, for the class of
games we study, only the structure of the utility function is needed to explain
the impact of changing risk on strategies. In particular, in the case of first-
order stochastic dominance, it is the coefficient of relative risk aversion that
determines the effect of changes in risk on strategies.
6.1 The tragedy of the commons and uncertainty: a second
look
Theorem 8 Suppose games ⟨u, f, Θ⟩ and ⟨u, f⟩ satisfy the conditions of Theorem 1, and consider ⟨u, f, Θ⟩^∞(γ) and ⟨u, f⟩^∞(γ*). Further suppose the one-period games have a symmetric equilibrium strategy, γ^(1) = γ*^(1) = 1/n. If lim_{c→0} u′(c) = ∞, then: (i) γ < γ* if and only if h(z) is strictly convex, (ii) γ > γ* if and only if h(z) is strictly concave, and (iii) γ = γ* if and only if h(z) is affine, where

h(z) = z u′(z).   (36)

Proof. See Appendix.
Theorem 8 implies that, for the class of games with linear symmetric equi-
librium strategies, the model characteristics behind the result of Theorem 3
are given solely by a condition that pertains to the utility function, u. This does
not mean that f does not play any role in the link between uncertainty and
strategic behavior: together with u, the function f is crucial for placing a game
⟨u, f, Θ⟩ in the class of games with linear strategies. The implication of Theorem 8
is that, within this class, the effect of uncertainty on strategic behavior is
determined by a condition on the utility function alone. Moreover, notice that
Theorem 8 does not require that u(·) be thrice continuously differentiable.²² We
provide two additional characterizations based on comparisons of games
using the notions of second- and first-order stochastic dominance.
6.2 Resource Exploitation and SOSD
We examine two games, ⟨u, f, Θ⟩ and ⟨u, f, Θ̂⟩, that have linear-symmetric equilibrium strategies, γ and γ̂, with shocks denoted by z ∼ Θ(z) and ẑ ∼ Θ̂(ẑ), and such that one shock is riskier than the other, in the sense that one distribution dominates the other with respect to second-order stochastic dominance. Recall:

Definition 9 Let z and ẑ be two random variables defined on a common probability space, with both supports being subsets of Z ⊆ R₊. Then z second-order stochastically dominates ẑ, or ẑ ≼ z, if and only if

E[g(z)] ≥ E[g(ẑ)]

for all concave functions g.
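For intuition, a two-point mean-preserving spread gives a minimal concrete instance of Definition 9. The sketch below (our own illustration, not part of the formal analysis; the numerical values are arbitrary) checks the defining inequality for a few concave test functions:

```python
import math

# Two random variables with the same mean: z takes values {0.8, 1.2} and the
# mean-preserving spread z_hat takes {0.5, 1.5}, each with probability 1/2.
# Then z second-order stochastically dominates z_hat (z_hat is riskier).
z_vals, zhat_vals = [0.8, 1.2], [0.5, 1.5]

def expect(g, vals):
    """Expectation of g under the uniform two-point distribution on vals."""
    return sum(g(v) for v in vals) / len(vals)

# Sample concave test functions g.
concave_tests = [math.log, math.sqrt, lambda t: -(t - 1.0) ** 2]

for g in concave_tests:
    # Definition 9: E[g(z)] >= E[g(z_hat)] for every concave g.
    assert expect(g, z_vals) >= expect(g, zhat_vals)
print("E[g(z)] >= E[g(z_hat)] holds for all concave test functions")
```

Running the loop with convex test functions instead reverses every inequality, which is the mechanism exploited in Theorem 10 below.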
Theorem 10 provides conditions that dictate the effect of increasing risk on
strategic decisions.
Theorem 10 Suppose games ⟨u, f, Θ⟩ and ⟨u, f, Θ̂⟩ satisfy the conditions of Theorem 1, and consider ⟨u, f, Θ⟩^∞(γ) and ⟨u, f, Θ̂⟩^∞(γ̂). Further suppose the one-period games have a symmetric equilibrium strategy, γ^(1) = γ̂^(1) = 1/n, and lim_{c→0} u′(c) = ∞. If ẑ ≼ z, then: (i) if h(z) is strictly convex, then γ̂ < γ, (ii) if h(z) is strictly concave, then γ̂ > γ, and (iii) if h(z) is affine, then γ̂ = γ, where h(z) is as defined in (36).

Proof. See Appendix.
6.3 Resource Exploitation and FOSD
We examine two games, ⟨u, f, Θ⟩ and ⟨u, f, Θ̂⟩, that have linear-symmetric equilibrium strategies, γ and γ̂, with shocks denoted by z ∼ Θ(z) and ẑ ∼ Θ̂(ẑ), and such that one distribution dominates the other with respect to first-order stochastic dominance. Recall:

²²For a discussion of this point see Mirman [15] (page 182), in his analysis of a two-period problem.

Definition 11 Let z and ẑ be two random variables defined on a common probability space, with both supports being subsets of Z ⊆ R₊. Then z first-order stochastically dominates ẑ, or ẑ ≼ z, if and only if²³

E[g(z)] ≥ E[g(ẑ)]

for all non-decreasing functions g.
Theorem 12 Suppose games ⟨u, f, Θ⟩ and ⟨u, f, Θ̂⟩ satisfy the conditions of Theorem 1, and consider ⟨u, f, Θ⟩^∞(γ) and ⟨u, f, Θ̂⟩^∞(γ̂). Further suppose the one-period games have a symmetric equilibrium strategy, γ^(1) = γ̂^(1) = 1/n, and lim_{c→0} u′(c) = ∞. If ẑ ≼ z, then:

h′(z) ⋛ 0 for all z > 0 ⇒ γ ⋚ γ̂.   (37)

Proof. See Appendix.
Remark 13 For all z > 0,

h′(z) ⋛ 0 ⇔ −z u′′(z)/u′(z) ⋚ 1,

i.e., the sufficient condition in (37) is tightly linked to the value of the coefficient of relative risk aversion.
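Remark 13 is easy to check for the constant-relative-risk-aversion case. Assuming, purely for illustration, the parametric marginal utility u′(c) = c^{−ρ} (so relative risk aversion is constant at ρ), we get h(z) = z^{1−ρ}, and the sign of h′ flips exactly at ρ = 1:

```python
# h(z) = z * u'(z) with CRRA marginal utility u'(c) = c**(-rho),
# so h(z) = z**(1 - rho); relative risk aversion is constant at rho.
def h(z, rho):
    return z ** (1.0 - rho)

def numerical_hprime(z, rho, eps=1e-6):
    """Central-difference estimate of h'(z)."""
    return (h(z + eps, rho) - h(z - eps, rho)) / (2 * eps)

for z in [0.5, 1.0, 2.0]:
    assert numerical_hprime(z, rho=0.5) > 0            # RRA < 1  =>  h' > 0
    assert numerical_hprime(z, rho=2.0) < 0            # RRA > 1  =>  h' < 0
    assert abs(numerical_hprime(z, rho=1.0)) < 1e-6    # RRA = 1  =>  h constant
```

The ρ = 1 (log-utility) case, where h is constant, is precisely the knife-edge case revisited in Section 7.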
For the remainder of the paper we study our results in the context of specific
parametric examples. We first review the Levhari-Mirman [13] example in the
presence of uncertainty. Then we present an extended example which shows not
only that the Levhari-Mirman example can be extended, but also that it has a
knife-edge property which does not carry over to the extended example.
7 The stochastic Levhari-Mirman [13] model
Using the Levhari-Mirman [13] functions, namely u(c) = ln(c) and f(x) = x^α,
the value function of each player in the stochastic model is of the form

v(x) = (1/(1−αβ)) ln(x) + A,   (38)

whereas the value function in the deterministic case is

v*(x) = (1/(1−αβ)) ln(x) + A*,   (39)

where A and A* are constants. So,

Λ(x) = Λ*(x) = 1/(1−αβ) for all x > 0.
²³Letting Θ and Θ̂ be the distribution functions of z and ẑ, respectively, ẑ ≼ z if and only if Θ̂(t) ≥ Θ(t) for all t ∈ Z.
Theorem 3 implies that γ = γ*. In fact,

γ = γ* = (1−αβ)/(n(1−αβ)+αβ).   (40)
Therefore, the presence of uncertainty does not alter the rate of consumption
in the Levhari-Mirman [13] model. The difference between the stochastic and
the deterministic model is that in the stochastic case the state variable evolves
randomly and approaches a long-run stationary distribution. Consequently, the
payoffs of players are also random in each period.
With the aid of Theorem 3 we show that the Levhari-Mirman model is a
knife-edge case. We do this by extending the Levhari-Mirman model to a class of
models in which uncertainty affects strategies, and then showing that it is only
in the Levhari-Mirman model itself that strategies are unaffected by the presence of
uncertainty, i.e., the Levhari-Mirman model is a knife edge. This extension to a
more general “Great Fish War” model allows us to study the effect of uncertainty
on strategies. This task is undertaken below.
8 An extended example
Consider the case where

u(c) = (c^{1−1/σ} − 1)/(1 − 1/σ),   (41)

and

f(x) = [α x^{1−1/σ} + (1−α) κ^{1−1/σ}]^{σ/(σ−1)},   (42)

with σ > 0, κ ≥ 0 and α ∈ (0, 1]. Notice that, for σ = 1, u(c) = ln(c) and f(x) = x^α κ^{1−α}, i.e., (for κ = 1) the Levhari-Mirman model. A similar example, applied to problems of Cournot oligopoly, has been presented in Koulovatianos and Mirman [11].²⁴
²⁴For a game with linear symmetric strategies, take (41) and consider a general CES production function,

f(x) = [α x^{1−1/ε} + (1−α) κ^{1−1/ε}]^{ε/(ε−1)}.

From the general first-order conditions of a game with linear strategies,

u′(γx) = β[1−(n−1)γ] f′((1−nγ)x) u′(γ f((1−nγ)x))

(see (5) and (9)). Using (41) and this CES production function in the above condition, the two sides can be proportional for all x only if ε = σ; setting ε = σ is the only way to obtain linear strategies with these functions. The single-controller version of this example with uncertainty (i.e., n = 1) is similar to that presented by Benhabib and Rustichini [3].
8.1 Existence of linear-symmetric equilibrium strategies
As in Stachurski [21], we identify a single sufficient condition on the mean of
the distribution of the transformed random variable, z^{1−1/σ}, in order that the
stochastic equilibrium be well-defined.

Proposition 14 If u and f are given by (41) and (42) respectively, and Θ is such that

E(z^{1−1/σ}) ≡ μ_z^{1−1/σ} < 1/(αβ) and [E(z)]^{1−1/σ} ≡ μ^{1−1/σ} < 1/(αβ),   (43)

then γ satisfying

γ = (1/(αβ μ_z^{1−1/σ})) [(1−nγ)^{1/σ} − αβ μ_z^{1−1/σ} (1−nγ)],   (44)

and γ* satisfying

γ* = (1/(αβ μ^{1−1/σ})) [(1−nγ*)^{1/σ} − αβ μ^{1−1/σ} (1−nγ*)],   (45)

define unique linear-symmetric strategies that are Markov-Nash equilibrium strategies of the infinite-horizon stochastic game and the deterministic game, respectively.
Proof. By Theorem 1, simple substitution shows that Φ(γ, x) takes the form

Φ(γ, x) = [γ^{1−1/σ} (1−nγ)^{−1/σ} x^{1−1/σ}/(1 − 1/σ)] {(1−nγ)^{1/σ} − αβ μ_z^{1−1/σ}[1−(n−1)γ]} + W(γ),

where W(γ) does not depend on x. Therefore, if (1−nγ)^{1/σ} − αβ μ_z^{1−1/σ}[1−(n−1)γ] = 0 for a unique γ ∈ (0, 1), then the sufficiency part of Theorem 1 can be applied. This condition is clearly equivalent to (44). By an analogous argument, in the deterministic case the sufficiency part of Theorem 1 is satisfied if there is a unique γ* ∈ (0, 1) satisfying condition (45). Moreover, it is easy to verify that (44) is equivalent to Ψ(γ, γ) = 0 and that (45) is equivalent to Ψ*(γ*, γ*) = 0.

It remains to show that γ and γ* are unique, and also that the optimization problem of each player is well-defined. From the two value functions,

v(x) = [γ^{1−1/σ}/(1 − αβ μ_z^{1−1/σ}(1−nγ)^{1−1/σ})] (x^{1−1/σ} − 1)/(1 − 1/σ) + B,   (46)

v*(x) = [γ*^{1−1/σ}/(1 − αβ μ^{1−1/σ}(1−nγ*)^{1−1/σ})] (x^{1−1/σ} − 1)/(1 − 1/σ) + B*,   (47)

where B and B* are constants, it is immediate that if nγ = Ω ∈ (0, 1) and nγ* = Ω* ∈ (0, 1), then the problem of each player is well-defined. To show that Ω ∈ (0, 1), we examine two cases.

Case 1: σ > 1.

Focusing on the aggregate consumption rate, Ω ≡ nγ, we express (44) as

n/(αβ μ_z^{1−1/σ}) = Ω/[(1−Ω)^{1/σ} − αβ μ_z^{1−1/σ}(1−Ω)] ≡ M(Ω).   (48)

Here,

M(0) = 0 and M(1) = ∞,

while

M′(Ω) = {[1−(1−1/σ)Ω](1−Ω)^{1/σ−1} − αβ μ_z^{1−1/σ}} / [(1−Ω)^{1/σ} − αβ μ_z^{1−1/σ}(1−Ω)]² > 0, for all Ω ∈ (0, 1).

To see the last statement, notice that 1−(1−1/σ)Ω ≥ (1−Ω)^{1−1/σ} for all Ω ∈ [0, 1), with equality if and only if Ω = 0, so the numerator is bounded below by 1 − αβ μ_z^{1−1/σ} > 0, by (43). So, applying the intermediate-value theorem to (48) shows that there is a unique Ω ∈ (0, 1), i.e., a unique symmetric equilibrium in linear strategies. This case is depicted in Figure 1. For Ω* ∈ (0, 1) and the uniqueness of γ*, replace μ_z with μ in the above argument.
Case 2: σ < 1.

For this case it is useful to express (44) as

Γ(Ω) ≡ n/Ω = αβ μ_z^{1−1/σ}/[(1−Ω)^{1/σ} − αβ μ_z^{1−1/σ}(1−Ω)] ≡ Ξ(Ω).   (49)

Here Γ(0) = ∞, Ξ(0) = αβ μ_z^{1−1/σ}/(1 − αβ μ_z^{1−1/σ}), and

Ξ(Ω) > 0 if Ω ∈ [0, 1 − (αβ μ_z^{1−1/σ})^{σ/(1−σ)}).

Due to (43) this interval is nonempty. Moreover,

Ξ′(Ω) = αβ μ_z^{1−1/σ} [(1/σ)(1−Ω)^{1/σ−1} − αβ μ_z^{1−1/σ}] / [(1−Ω)^{1/σ} − αβ μ_z^{1−1/σ}(1−Ω)]² > 0 for all Ω ∈ [0, 1 − (αβ μ_z^{1−1/σ})^{σ/(1−σ)}),

as 0 < (1−Ω)^{1/σ} − αβ μ_z^{1−1/σ}(1−Ω) and αβ μ_z^{1−1/σ} < (1/σ)(1−Ω)^{1/σ−1} on this interval. Finally,

Ξ(1 − (αβ μ_z^{1−1/σ})^{σ/(1−σ)}) = ∞.

Given that Γ(1 − (αβ μ_z^{1−1/σ})^{σ/(1−σ)}) < ∞ and Γ′(Ω) < 0 for all Ω ∈ (0, 1 − (αβ μ_z^{1−1/σ})^{σ/(1−σ)}], it follows that Ω ∈ (0, 1 − (αβ μ_z^{1−1/σ})^{σ/(1−σ)}) ⊂ (0, 1), and it is unique. This is shown in Figure 2. For Ω* ∈ (0, 1) and the uniqueness of γ*, replace μ_z with μ in the above argument.

For the last case, σ = 1, see Section 7 above to verify that (40) satisfies both (44) and (45), and also that Ω = Ω* ∈ (0, 1).

Note that Proposition 14 shows that the Levhari-Mirman model is indeed a knife-edge case, obtained for σ = 1 in our extended example. According to our analysis in Section 7, in the σ = 1 case of the extended example uncertainty plays no role.²⁵ However, for other values of σ, uncertainty plays a major role.
Clearly our parametric example satisfies the conditions identified in Section
5, i.e., lim_{c→0} u′(c) = ∞, and the one-period games have a unique symmetric
equilibrium strategy, γ^(1) = γ*^(1) = 1/n. Furthermore, substituting (41) and
(42) into the necessary conditions of the finite-horizon problem, given by (30)
and (34), we find

γ^(T+1) = 1/{n + (αβ μ_z^{1−1/σ})^σ [1−(n−1)γ^(T)]^σ/γ^(T)},   (50)

γ*^(T+1) = 1/{n + (αβ μ^{1−1/σ})^σ [1−(n−1)γ*^(T)]^σ/γ*^(T)}.   (51)

It is straightforward to verify from (50) and (51) that γ^(T), γ*^(T) ∈ (0, 1/n] for all T. Therefore, we can apply Theorems 3, 8, 10 and 12 to our model.
8.2 Impact of uncertainty on strategies
We can apply Theorem 3 as follows. Let y = f((1−nγ)x), and recall the
definitions of Λ and Λ* given in Theorem 3. Notice that

E[Λ(yz)] = γ^{1−1/σ} μ_z^{1−1/σ} y^{1−1/σ}/[1 − αβ μ_z^{1−1/σ}(1−nγ)^{1−1/σ}],   (52)

while

Λ*(yμ) = γ*^{1−1/σ} μ^{1−1/σ} y^{1−1/σ}/[1 − αβ μ^{1−1/σ}(1−nγ*)^{1−1/σ}].   (53)

From (44),

γ^{1−1/σ}/[1 − αβ μ_z^{1−1/σ}(1−nγ)^{1−1/σ}] = (1/γ − n)^{1/σ}/(αβ μ_z^{1−1/σ}),

so substituting this expression into (52),

E[Λ(yz)] = (1/(αβ)) (1/γ − n)^{1/σ} y^{1−1/σ}.   (54)

²⁵Notice also that, for σ = 1, (46) and (47) yield (38) and (39).

After using (45), as was done with (44), (53) becomes

Λ*(yμ) = (1/(αβ)) (1/γ* − n)^{1/σ} y^{1−1/σ}.   (55)
Proposition 15 identifies the parameters that are behind the comparison given
in Theorem 3.
Proposition 15 If u and f are given by (41) and (42), and Θ is such that
(43) is met, then

γ ⋛ γ* ⇔ σ ⋛ 1.

Proof. By applying the implicit function theorem to (48) and (49), and with
the aid of Figures 1 and 2, it follows that

γ = G(μ_z^{1−1/σ}) and γ* = G(μ^{1−1/σ}),

where G′(·) < 0 on the domain implied by (43). So,

γ ⋚ γ* ⇔ E(z^{1−1/σ}) ⋛ [E(z)]^{1−1/σ} ⇔ σ ⋚ 1,   (56)

as the relationship between σ and 1 determines whether the function g(z) =
z^{1−1/σ} is concave, convex or affine (in the present setting, when σ = 1, g(z) is
equal to unity for all z > 0). The result follows from Jensen's inequality.
In our example,

h(z) = z u′(z) = z^{1−1/σ} ⇒ { h′(z) ⋛ 0 ⇔ σ ⋛ 1, and h′′(z) ⋚ 0 ⇔ σ ⋛ 1 }.   (57)

Since u is thrice differentiable, the concavity of h can be examined through
the sign of its second derivative. Notice that (57) illustrates the connection
between Theorems 8, 10 and 12 and Theorem 3, as well as with the result of
Proposition 15. Most importantly, it shows how the properties of u can affect
the role of uncertainty on strategies.
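The Jensen's-inequality step behind Proposition 15 and (57) is easy to verify numerically: for a shock with E(z) = 1, E[z^{1−1/σ}] exceeds, equals, or falls short of [E(z)]^{1−1/σ} = 1 according to whether σ is below, equal to, or above one. A minimal sketch with an arbitrary two-point shock (our own illustration):

```python
# Two-point shock with E[z] = 1: z in {0.6, 1.4} with equal probability.
z_vals = [0.6, 1.4]

def moment(sigma):
    """E[z**(1 - 1/sigma)] under the two-point distribution."""
    p = 1.0 - 1.0 / sigma
    return sum(v ** p for v in z_vals) / len(z_vals)

# sigma < 1: z**(1 - 1/sigma) is convex (negative exponent), so E[...] > 1.
assert moment(0.5) > 1.0
# sigma > 1: exponent in (0, 1), the function is concave, so E[...] < 1.
assert moment(2.0) < 1.0
# sigma = 1: the function is identically 1.
assert moment(1.0) == 1.0
```

The three branches correspond exactly to cases (i)-(iii) of Theorem 8, via the convexity of h(z) = z^{1−1/σ} recorded in (57).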
8.3 Changes in risk and the tragedy of the commons
In this subsection we investigate whether increasing risk has an impact on the
rate at which the aggregate exploitation rate increases with the number of players.
This investigation requires a comparison of aggregate exploitation rates along
two dimensions, the number of players and the degree of riskiness. In perform-
ing such a comparison, if we employ two random variables, z and ẑ, with a
discrete difference in the degree of riskiness, the comparison becomes unclear,
as the initial aggregate exploitation rates can also be discretely different before
the number of players increases. For this reason, we employ a concept of chang-
ing risk that allows for marginal increases in riskiness. Consider a lognormally
distributed shock,

ln(z) ∼ N(ln(μ) − s²/2, s²).   (58)
The expectation of z is E(z) = μ, whereas its variance is Var(z) = μ²(e^{s²} − 1),
i.e., the parameter s has an impact only on the variance of z and not on its
mean (while the parameter μ has an impact on both the mean and the variance
of z). So, any two distributions given by (58) with different values of the parameter
s are linked through second-order stochastic dominance (an increase in risk). In
particular, if z ∼ Θ(z) with parameters (μ, s) and ẑ ∼ Θ̂(ẑ) with parameters
(μ, ŝ), then ŝ > s implies that ẑ ≼ z (ẑ represents an increase in risk from z).
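For the parameterization (58), the moments used in the analysis are available in closed form: by the standard lognormal moment formula, E(z^p) = μ^p exp(p(p−1)s²/2) for any power p, here with p = 1−1/σ. The sketch below (our own illustration) checks that s leaves the mean unchanged, and that E(z^{1−1/σ}) rises with s when σ < 1, falls when σ > 1, and is unaffected when σ = 1:

```python
import math

def lognormal_moment(p, mu, s):
    # If ln(z) ~ N(ln(mu) - s**2/2, s**2), then E[z**p] = mu**p * exp(p*(p-1)*s**2/2).
    return mu ** p * math.exp(p * (p - 1.0) * s ** 2 / 2.0)

mu = 1.0
for s in [0.1, 0.5, 1.0]:
    # The mean E[z] is mu for every s: s is a pure (mean-preserving) risk parameter.
    assert abs(lognormal_moment(1.0, mu, s) - mu) < 1e-12

def m(sigma, s, mu=1.0):
    """E[z**(1 - 1/sigma)], the transformed moment driving the equilibrium."""
    return lognormal_moment(1.0 - 1.0 / sigma, mu, s)

# A mean-preserving increase in risk (higher s) moves E[z**(1 - 1/sigma)]
# up for sigma < 1, down for sigma > 1, and not at all for sigma = 1.
assert m(0.5, 0.8) > m(0.5, 0.2)
assert m(2.0, 0.8) < m(2.0, 0.2)
assert m(1.0, 0.8) == m(1.0, 0.2) == 1.0
```

This monotonicity of the transformed moment in s is the channel through which the riskiness parameter reaches the equilibrium strategies in Proposition 16.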
Proposition 16 examines the impact of increases in risk on the intensity of
the tragedy of the commons.
Proposition 16 If u and f are given by (41) and (42), Θ obeys (58), and
condition (43) is met, then: (i) the tragedy of the commons is amplified by an
increase in riskiness if and only if σ < 1, (ii) the tragedy of the commons is
mitigated by an increase in riskiness if and only if σ > 1, and (iii) the tragedy of
the commons is unaffected by an increase in riskiness if and only if σ = 1.

Proof. See Appendix.
Proposition 16 shows that, if σ > 1, the overexploitation tendency is mit-
igated as both riskiness and the number of players increase. Note, however,
that Theorem 10 implies that, when σ > 1, for a fixed number of players, all
players tend to increase their consumption rates as uncertainty increases.
These results indicate the complex, yet interesting, strategic behavior behind
Nash-equilibrium outcomes.
9 Appendix
Proof. [Lemma 5] Fix any x > 0 and any γ^(T) ∈ (0, 1/n]. Then, from (31), Ψ(γ, γ^(T)) can be written as A(γ) + B(γ), where A(γ) < 0 and B(γ) > 0 for all γ ∈ (0, 1/n), and

lim_{γ→0} A(γ) = −∞ and lim_{γ→1/n} B(γ) = ∞,

so Ψ(·, γ^(T)) intersects the zero axis within the interval (0, 1/n) at least once. Yet, the fact that, for all γ ∈ (0, 1/n), A′(γ) > 0 and B′(γ) > 0, implies that Ψ(γ, γ^(T)) = 0 has a unique solution, γ^(T+1) ∈ (0, 1/n). The same argument can be used for the deterministic analogue of the stochastic model.
Proof. [Theorem 6] Fix x > 0 and consider the stochastic game. Using the same argument as in the proof of Lemma 5, starting from γ^(1) there exists a unique γ^(2) ∈ (0, 1/n), i.e., γ^(2) = L_Θ(γ^(1)). Moreover,

γ < γ^(2) < γ^(1) = 1/n.   (59)

In order to show (59), suppose that γ^(2) ≤ γ. Since Ψ₁(γ^(T+1), γ^(T)) > 0, Ψ₂(γ^(T+1), γ^(T)) < 0, and Ψ(γ, γ) = 0,

0 = Ψ(γ, γ) > Ψ(γ, γ^(1)) ≥ Ψ(γ^(2), γ^(1)),

which contradicts Ψ(γ^(2), γ^(1)) = 0. From Lemma 5, γ^(1) = 1/n gives rise to a unique sequence {γ^(T)}_{T=1}^∞ that is generated from (30). Given that γ ∈ (0, 1/n) is unique, and the mapping L_Θ is a continuous function with

L_Θ′(γ^(T)) = −Ψ₂(γ^(T+1), γ^(T))/Ψ₁(γ^(T+1), γ^(T)) > 0 for all T > 0,

(59) and the intermediate value theorem imply that γ < γ^(T+1) < γ^(T), T = 1, 2, …. Therefore the sequence {γ^(T)}_{T=1}^∞ converges, and lim_{T→∞} γ^(T) = γ. The same argument holds for the deterministic analogue of the stochastic model.
Proof. [Lemma 7] Fix any x > 0 and let I = [γ_a, γ_b] ⊆ (0, 1/n] with L_Θ(γ_a) ≥ γ_a and L_Θ(γ_b) ≤ γ_b. Since L_Θ is a continuous function with

L_Θ′(γ^(T)) = −Ψ₂(γ^(T+1), γ^(T))/Ψ₁(γ^(T+1), γ^(T)) > 0 for all T > 0,

the intermediate value theorem implies that γ ∈ I, since γ is by assumption unique. As L_Θ(γ_a) ≥ γ_a, equality holds only if γ = γ_a. If L_Θ(γ_a) > γ_a, then, from Lemma 5, any γ^(1) ∈ (0, 1/n] gives rise to a unique sequence {γ^(T)}_{T=1}^∞, so γ^(T) < γ^(T+1) < γ, T = 1, 2, …, for all γ^(1) ∈ [γ_a, γ). Thus, γ is stable for all γ^(1) ∈ [γ_a, γ). By a similar argument, if L_Θ(γ_b) < γ_b, then γ^(T) > γ^(T+1) > γ, T = 1, 2, …, for all γ^(1) ∈ (γ, γ_b], so γ is stable for all γ^(1) ∈ (γ, γ_b]. Finally, as L_Θ(γ_b) ≤ γ_b, equality holds only if γ = γ_b, completing the proof. The same argument can be used for the deterministic analogue of the stochastic model.
Proof. [Theorem 8] Fix any x > 0. Ψ(γ, γ^(T)) and Ψ*(γ, γ^(T)) can be expressed as

Ψ(γ, γ^(T)) = −u′(γx) + β [1−(n−1)γ^(T)] (f′((1−nγ)x)/f((1−nγ)x)) (1/γ^(T)) E[h(γ^(T) f((1−nγ)x) z)]   (60)

and

Ψ*(γ, γ^(T)) = −u′(γx) + β [1−(n−1)γ^(T)] (f′((1−nγ)x)/f((1−nγ)x)) (1/γ^(T)) h(γ^(T) f((1−nγ)x) E(z)).   (61)

From (60), (61), and Jensen's inequality,

(a) Ψ(γ, γ^(T)) > Ψ*(γ, γ^(T)) for all γ, γ^(T) ∈ (0, 1/n), if and only if h(·) is strictly convex,

(b) Ψ(γ, γ^(T)) < Ψ*(γ, γ^(T)) for all γ, γ^(T) ∈ (0, 1/n), if and only if h(·) is strictly concave,

(c) Ψ(γ, γ^(T)) = Ψ*(γ, γ^(T)) for all γ, γ^(T) ∈ (0, 1/n), if and only if h(·) is affine.

So, in case (a), Ψ(γ*, γ*) > Ψ*(γ*, γ*) = 0, so L_Θ(γ*) < γ*. In the proof of Theorem 6 it is shown that γ^(1) = 1/n > γ. Then, by Lemma 7, γ ∈ (0, γ*), which proves statement (i). In case (b), Ψ(γ*, γ*) < Ψ*(γ*, γ*) = 0, so L_Θ(γ*) > γ*. Since γ^(1) = 1/n > γ (from the proof of Theorem 6), Lemma 7 implies that γ ∈ (γ*, 1/n), which proves statement (ii). Finally, statement (iii) is straightforward.
Proof. [Theorem 10] Fix any x > 0. The necessary conditions of the two problems, Ψ(γ, γ^(T)) and Ψ̂(γ, γ^(T)), can be expressed as

Ψ(γ, γ^(T)) = −u′(γx) + β [1−(n−1)γ^(T)] (f′((1−nγ)x)/f((1−nγ)x)) (1/γ^(T)) E[h(γ^(T) f((1−nγ)x) z)]   (62)

and

Ψ̂(γ, γ^(T)) = −u′(γx) + β [1−(n−1)γ^(T)] (f′((1−nγ)x)/f((1−nγ)x)) (1/γ^(T)) E[h(γ^(T) f((1−nγ)x) ẑ)].   (63)

Using the expressions (62) and (63), and Definition 9,

(a) if h(·) is strictly convex, then Ψ̂(γ, γ^(T)) > Ψ(γ, γ^(T)) for all γ, γ^(T) ∈ (0, 1/n),

(b) if h(·) is strictly concave, then Ψ̂(γ, γ^(T)) < Ψ(γ, γ^(T)) for all γ, γ^(T) ∈ (0, 1/n),

(c) if h(·) is affine, then Ψ̂(γ, γ^(T)) = Ψ(γ, γ^(T)) for all γ, γ^(T) ∈ (0, 1/n).

So, in case (a), Ψ̂(γ, γ) > Ψ(γ, γ) = 0, so L_Θ̂(γ) < γ. In the proof of Theorem 6 it was shown that γ^(1) = 1/n > γ̂. By Lemma 7, γ̂ ∈ (0, γ), which proves statement (i). In case (b), Ψ̂(γ, γ) < Ψ(γ, γ) = 0, so L_Θ̂(γ) > γ. Since γ^(1) = 1/n > γ̂ (see the proof of Theorem 6), Lemma 7 implies that γ̂ ∈ (γ, 1/n), which proves statement (ii). Finally, statement (iii) is straightforward.
Proof. [Theorem 12] Integration by parts yields

E[g(z)] − E[g(ẑ)] = ∫_Z [Θ̂(t) − Θ(t)] g′(t) dt

for all differentiable functions g. So, setting g(z) = h(γ^(T) f((1−nγ)x) z), for any γ^(T) ∈ (0, 1/n] and any γ ∈ (0, 1/n), the fact that ẑ ≼ z implies

h′(z) ⋛ 0 for all z > 0 ⇒ E[h(γ^(T) f((1−nγ)x) z)] ⋛ E[h(γ^(T) f((1−nγ)x) ẑ)].   (64)

Based on (64), (62), and (63),

(a) h′(z) > 0 for all z > 0 ⇒ Ψ(γ̂, γ̂) > Ψ̂(γ̂, γ̂) = 0,

(b) h′(z) < 0 for all z > 0 ⇒ Ψ(γ̂, γ̂) < Ψ̂(γ̂, γ̂) = 0,

(c) h′(z) = 0 for all z > 0 ⇒ Ψ(γ, γ^(T)) = Ψ̂(γ, γ^(T)) for all γ, γ^(T) ∈ (0, 1/n).

The rest of the proof follows exactly as in Theorem 10.
Proof. [Proposition 16] We denote the aggregate exploitation rate by Ω(n) = nγ(n), and define

m(s) ≡ E(z^{1−1/σ}) and q(n) ≡ 1 − (n−1)γ(n).

Implicit differentiation of the equilibrium condition (44), written as (1−Ω(n))^{1/σ} = αβ m(s) q(n), with respect to n yields, after some algebra,

∂Ω(n)/∂n = αβ m(s) Ω(n) / { n² [(1/σ)(1−Ω(n))^{1/σ−1} − ((n−1)/n) αβ m(s)] },   (65)

so that

∂ ln Ω(n)/∂n = αβ m(s) / { n² [(1/σ)(1−Ω(n))^{1/σ−1} − ((n−1)/n) αβ m(s)] }.   (66)

Taking the partial derivative of this last expression with respect to s, and substituting the expression for ∂γ(n)/∂s obtained by applying the implicit function theorem to (44), so that ∂Ω(n)/∂s is expressed through m′(s), all remaining terms are positive except for the factor m′(s); as we have found in Theorem 2, ∂Ω(n)/∂n > 0, which guarantees that the bracketed term in (65) and (66) is positive. Hence,

∂² ln Ω(n)/∂n∂s ⋛ 0 ⇔ m′(s) ⋛ 0 ⇔ σ ⋚ 1,

where the last equivalence follows from (58), under which m(s) = μ^{1−1/σ} exp[(1−1/σ)(−1/σ)s²/2] is strictly increasing in s for σ < 1, strictly decreasing for σ > 1, and constant for σ = 1. This proves the Proposition.
References
[1] Amir, R. (1996): "Continuous Stochastic Games of Capital Accumulation with Convex Transitions," Games and Economic Behavior, 15, 111-131.

[2] Benhabib, J. and R. Radner (1992): "The Joint Exploitation of a Productive Asset: A Game-Theoretic Approach," Economic Theory, 2, 155-190.

[3] Benhabib, J. and A. Rustichini (1994): "A Note on a New Class of Solutions to Dynamic Programming Problems Arising in Economic Growth," Journal of Economic Dynamics and Control, 18, 807-813.

[4] Brock, W. A. and L. J. Mirman (1972): "Optimal Economic Growth and Uncertainty: The Discounted Case," Journal of Economic Theory, 4, 479-513.

[5] Chang, F. R. (1988): "The Inverse Optimal Problem: A Dynamic Programming Approach," Econometrica, 56, 147-172.

[6] Dockner, E. J. and G. Sorger (1996): "Existence and Properties of Equilibria for a Dynamic Game on Productive Assets," Journal of Economic Theory, 71, 209-227.

[7] Dutta, P. K. and R. K. Sundaram (1992): "Markovian Equilibrium in a Class of Stochastic Games: Existence Theorems for Discounted and Undiscounted Models," Economic Theory, 2, 197-214.

[8] Dutta, P. K. and R. K. Sundaram (1993): "How Different Can Strategic Models Be?," Journal of Economic Theory, 60, 42-61.

[9] Hahn, F. H. (1969): "Savings and Uncertainty," Review of Economic Studies, 37(1), 21-24.

[10] Hanoch, G. and H. Levy (1969): "The Efficiency Analysis of Choices Involving Risk," Review of Economic Studies, 36, 335-346.

[11] Koulovatianos, C. and L. J. Mirman (2007): "The Effects of Market Structure on Industry Growth: Rivalrous Non-excludable Capital," Journal of Economic Theory, 133(1), 199-218.

[12] Levhari, D., R. Michener and L. J. Mirman (1981): "Dynamic Programming Models of Fishing: Competition," American Economic Review, 71(4), 649-661.

[13] Levhari, D. and L. J. Mirman (1980): "The Great Fish War: An Example Using a Dynamic Cournot-Nash Solution," Bell Journal of Economics, 11(1), 322-334.

[14] Lippman, S. A. and J. J. McCall (1981): "The Economics of Uncertainty: Selected Topics and Probabilistic Methods," in Handbook of Mathematical Economics, Vol. 1, Ch. 6, 211-284.

[15] Mirman, L. J. (1971): "Uncertainty and Optimal Consumption Decisions," Econometrica, 39, 179-185.

[16] Mirman, L. J. (1979): "Dynamic Models of Fishing: A Heuristic Approach," in Liu and Sutinen (eds.), Control Theory in Mathematical Economics.

[17] Mirman, L. J. and D. F. Spulber (1985): "Fishery Regulation with Harvest Uncertainty," International Economic Review, 26, 731-746.

[18] Sorger, G. (1998): "Markov-Perfect Nash Equilibria in a Class of Resource Games," Economic Theory, 11, 79-100.

[19] Sorger, G. (2005): "A Dynamic Common Property Resource Problem with Amenity Value and Extraction Costs," International Journal of Economic Theory, 3-19.

[20] Shimomura, K. and D. Xie (2008): "Advances on Stackelberg Open-Loop and Feedback Strategies," International Journal of Economic Theory, 4, 115-133.

[21] Stachurski, J. (2002): "Stochastic Optimal Growth with Unbounded Shock," Journal of Economic Theory, 106, 40-65.

[22] Stiglitz, J. E. (1970): "A Consumption-Oriented Theory of the Demand for Financial Assets and the Term Structure of Interest Rates," Review of Economic Studies, 37, 321-351.

[23] Sundaram, R. (1989): "Perfect Equilibrium in a Class of Symmetric Games," Journal of Economic Theory, 47, 153-177.

[24] Xie, D. (2003): "Explicit Transitional Dynamics in Growth Models," in H. Zhang and J. Huang (eds.), Recent Advances in Statistics and Related Topics, 140-155, Singapore: World Scientific.