
Using Survey-Based Risk Tolerance

Miles Kimball, Claudia Sahm and Matthew Shapiro

Global Theme

• Theoretical concepts are often measured imperfectly.

• When using an imperfect measure, it is essential to correct for measurement error. Researchers usually fail to do this and make improper inferences.

• Example from a different research project: controlling for past religiosity when examining whether college major affects religiosity. Simply adding the best available measure of past religiosity to an OLS regression leads to misleading results (exaggerating the effect of college major).

Risk Aversion and Risk Tolerance

• Think in terms of a von Neumann-Morgenstern expected utility function

E[W^(1−γ) / (1−γ)]

where W is wealth, γ is (constant) relative risk aversion (RRA), and its reciprocal θ = 1/γ is relative risk tolerance (RRT).

• Sometimes it is most convenient to think in terms of RRA γ, and sometimes it is most convenient to think in terms of RRT θ (a small sketch of these definitions follows below).
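As a hedged, minimal sketch of these definitions (our own illustration, not code from the paper), CRRA utility and the RRA/RRT relationship can be written as:

```python
import numpy as np

def crra_utility(wealth, gamma):
    """CRRA utility W**(1 - gamma) / (1 - gamma); the gamma = 1 case is log utility."""
    if np.isclose(gamma, 1.0):
        return np.log(wealth)
    return wealth ** (1.0 - gamma) / (1.0 - gamma)

def relative_risk_tolerance(gamma):
    """Relative risk tolerance theta is the reciprocal of relative risk aversion gamma."""
    return 1.0 / gamma

print(relative_risk_tolerance(2.0))  # gamma = 2 implies theta = 0.5
```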

The Need for a Measure of Risk Preferences

• Unobserved heterogeneity in preferences can lead to biased coefficients, destroying identification.

• E.g., think of regressing, with constant δ

E[ΔC] = a + b·r + (1+δ)·E[(ΔC)²]/2 + e

(Dynan, JPE 1993) when the true equation is

E[ΔC] = a + b·r + (1+γ)·E[(ΔC)²]/2 + ε

with γ negatively correlated with E[(ΔC)²]/2.

The Bias

• e = ε + (γ − δ)·E[(ΔC)²]/2.

• Since relative risk aversion γ is negatively correlated with E[(ΔC)²]/2, e will be negatively correlated with E[(ΔC)²]/2.

• Therefore, the estimate of δ in the first equation will be biased downward (a small simulation below illustrates the mechanism).

• Indeed, Dynan (1993) found an estimate implying a zero coefficient for E[(ΔC)²]/2.
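The following is a hedged Monte Carlo sketch (our own illustration; the data-generating process and every parameter value are assumptions, not the paper's) showing how heterogeneous risk aversion that is negatively correlated with E[(ΔC)²]/2 pushes the pooled OLS coefficient downward:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Heterogeneous relative risk aversion across households (assumed distribution).
gamma = rng.lognormal(mean=0.5, sigma=0.4, size=n)

# E[(dC)^2]/2, built to be NEGATIVELY correlated with gamma: more risk-averse
# households choose smoother consumption paths (assumed relationship).
ec2_half = 0.5 * np.exp(-0.8 * np.log(gamma) + rng.normal(0.0, 0.3, n))

r = rng.normal(0.02, 0.01, n)                          # interest rate variation
eps = rng.normal(0.0, 0.01, n)                         # structural error
y = 0.01 + 0.5 * r + (1.0 + gamma) * ec2_half + eps    # true coefficient on ec2_half is 1 + gamma

# Pooled OLS of y on [1, r, ec2_half], forcing a single coefficient (1 + delta).
X = np.column_stack([np.ones(n), r, ec2_half])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]

print("average household-level coefficient, 1 + E[gamma]:", 1.0 + gamma.mean())
print("pooled OLS coefficient on E[(dC)^2]/2:            ", beta_hat[2])
```

Under these assumed values, the pooled estimate comes out well below the average household-level coefficient 1 + E[γ], in the direction of the near-zero coefficient Dynan reports.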

Evidence that the Bias is Large

• With no direct control for risk preferences, “Precautionary Saving and Self-Selection: Evidence from the German Reunification ‘Experiment’” by Matthias Schuendeln and Nicola Fuchs-Schuendeln (QJE 2005) finds that former East Germans, whose risk preferences would have had less effect on their occupational choices, exhibited a strong precautionary saving effect, while those who grew up in West Germany did not.

Controlling for Risk Preferences

• One of the big motivations for measuring RRT is to “control for risk tolerance” in estimation.

• Clearly, one could control for RRT if one had a perfect measure of θ. Not a likely event!

• We show how to control for risk tolerance and obtain consistent estimates even with an imperfect measure.

We also show how to estimate:

• average risk tolerance.

• the difference in average risk tolerance between groups.

• the overall variance in risk tolerance across the population.

• the effects of risk tolerance on behavior.

Obstacles to be Confronted

• Data that only give a range for risk tolerance.

• Response error.

• Data that limit the number of respondents with multiple observations on risk tolerance.
  - HRS Wave I: 100% sample
  - HRS Wave II: 20% sample
  - PSID: one observation per respondent.

The HRS Wave I and II Risk Tolerance Question

• “Suppose that you are the only income earner in the family, and you have a good job guaranteed to give you your current (family) income every year for life. You are given the opportunity to take a new and equally good job, with a 50-50 chance it will double your (family) income and a 50-50 chance it will cut your (family) income by a third. Would you take the new job?”

Follow-up Questions

• If the respondent would take the new job: “Suppose the chances were 50-50 that it would double your (family) income and 50-50 that it would cut it in half. Would you still take the new job?”

• If the respondent would not take the new job: “Suppose the chances were 50-50 that it would double your (family) income and 50-50 that it would cut it by 20 percent. Would you then take the new job?”

Advantages of a Quantitative Measure

• Makes it possible to construct a sensible one-dimensional cardinal proxy.

• Linked to economic theory (a numerical sketch follows below):

Share_risky = θ · E[Ret_risky − r] / Var(Ret_risky).

• Enables us to be serious about measurement error.
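As a hedged numerical sketch (our own illustration; the 4% equity premium and 18% return standard deviation are assumed values, not the paper's), the formula above maps a risk tolerance estimate into a portfolio share:

```python
def risky_share(theta, expected_excess_return, return_variance):
    """Risky-asset share implied by Share_risky = theta * E[Ret_risky - r] / Var(Ret_risky)."""
    return theta * expected_excess_return / return_variance

theta = 0.25               # relative risk tolerance (relative risk aversion of 4)
premium = 0.04             # assumed expected excess return on the risky asset
variance = 0.18 ** 2       # assumed variance of the risky return
print(risky_share(theta, premium, variance))   # about 0.31
```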

Contrast with the Qualitative Questions on the SCF

“Which of the statements comes closest to the amount of financial risk that you and your (spouse/partner) are willing to take when you save or make investments?”

1. take substantial financial risks expecting to earn substantial returns.

2. take above average financial risks expecting to earn above average returns.

3. take average financial risks expecting to earn average returns.

4. not willing to take any financial risk.

Problems with the SCF Questions

1. Because they are not quantitative, it is hard to know how they map into the economic theory.

2. Different respondents may interpret words like “substantial” and “not willing to take any” differently.

3. It is clear these questions have measurement error, but it is not easy to tell how much.

A More Subtle Problem with the SCF Risk Preference Questions

4. They are awfully close to asking people what kind of portfolio they actually have.

• Thus, there is a serious danger the answers could be influenced by the respondent’s actual portfolio, including those aspects of the actual portfolio that are a historical accident rather than a function of risk preferences.

• Formally, the measurement error for these questions may be correlated with the behavior we want to explain.

Constructing a Cardinal Proxy for Risk Tolerance

• Find the range of risk tolerance values consistent with a set of responses in the absence of response error.

• Use Maximum Likelihood Estimation (MLE) to estimate a lognormal distribution for risk tolerance in the absence of response error to develop intuition.

• Use repeated observations to estimate the variance of response error and to adjust other estimates for response error.

• Construct a cardinal proxy as E(θ|c), where c denotes the respondent's answers to the hypothetical questions above.

Bounding Risk Tolerance in the Absence of Response Error

• Cutpoint for “double or cut by a third”:

0.5·(2W)^(1−γ)/(1−γ) + 0.5·((2/3)·W)^(1−γ)/(1−γ) = W^(1−γ)/(1−γ)

• Dividing through by W^(1−γ)/(1−γ):

0.5·[2^(1−γ) + (2/3)^(1−γ)] = 1.

• γ = 2 solves the equation: 0.5·[0.5 + 1.5] = 1.

• Therefore, in the absence of response error, taking the risky job means γ ≤ 2 and θ ≥ 0.5; staying with the safe job means γ ≥ 2 and θ ≤ 0.5 (the sketch below checks these cutpoints numerically).
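As a hedged numerical check (our own sketch, not the authors' code), the cutpoint for each downside can be found by solving 0.5·[2^(1−γ) + (1 − cut)^(1−γ)] = 1 for γ:

```python
from scipy.optimize import brentq

def gamma_cutpoint(downside_cut):
    """Relative risk aversion at which a respondent is indifferent between keeping the
    safe job and a 50-50 gamble that doubles income or cuts it by `downside_cut`."""
    f = lambda g: 0.5 * (2.0 ** (1.0 - g) + (1.0 - downside_cut) ** (1.0 - g)) - 1.0
    # g = 1 is a spurious root created by dividing through by W**(1-g)/(1-g),
    # so bracket the search strictly above 1.
    return brentq(f, 1.01, 50.0)

for cut, label in [(1 / 3, "cut by one third"), (1 / 5, "cut by 20 percent")]:
    g = gamma_cutpoint(cut)
    print(f"{label}: gamma = {g:.2f}, theta = {1 / g:.2f}")

# For the "cut in half" gamble the cutpoint is exactly gamma = 1 (log utility),
# since 0.5*ln(2W) + 0.5*ln(0.5W) = ln(W).
```

With these inputs the solver returns roughly γ = 2 (θ = 0.5) for the one-third cut and γ ≈ 3.76 (θ ≈ 0.27) for the 20 percent cut.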

Measurement Error

• High cognitive demands lead to survey response error.

• The fact that we only have ranges generates additional measurement error.

• Uncorrected, this leads to errors-in-variables problems:
  - the coefficient on RRT is likely to be too small
  - the coefficients of other variables are biased as they pick up some of the effects of RRT.

Modeling Response Error

• x = ln(θ) ~ N(μ, σ_x²)

• x_w = x + ε_w ~ N(μ, σ_x² + σ_ε²) ~ N(μ, σ²)

• ε_w ~ N(0, σ_ε²) is the transitory response error
  - transitory fluctuation in the perception of one's own risk tolerance
  - transitory error in the calculation of the bounds

• ε is assumed not to affect real-life decisions.
  - This assumption (made below) could be contested.
  - It would matter if exp(x+ε) (instead of exp(x) = θ) governed behavior, since even if an entirely different realization of ε governs actual behavior, the effect of ε on exp(x+ε) is nonlinear and raises exp(x+ε) on average.

Probabilities of Falling into a Category Given Response Error

Likelihood Function

E(θ|c)
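The formulas on the three slides above are not reproduced in this transcript. As a hedged sketch under the stated model (x = ln θ ~ N(μ, σ_x²), with the observed category determined by x + ε_w crossing the log cutpoints), the category probabilities, the single-response likelihood, and the proxy E(θ|c) can be computed roughly as follows; the cutpoint values and parameter numbers are assumptions for illustration, and this is our own code, not the authors'.

```python
import numpy as np
from scipy.stats import norm

# Assumed log risk-tolerance cutpoints implied by the three gambles (theta = 0.27, 0.5, 1.0).
CUTS = np.array([-np.inf, np.log(0.27), np.log(0.5), np.log(1.0), np.inf])

def category_probs(mu, sigma_x, sigma_eps):
    """P(category | mu, sigma_x, sigma_eps), since x + eps_w ~ N(mu, sigma_x^2 + sigma_eps^2)."""
    s = np.sqrt(sigma_x ** 2 + sigma_eps ** 2)
    return np.diff(norm.cdf((CUTS - mu) / s))

def log_likelihood(params, category_counts):
    """Log likelihood of the category counts for respondents with a single response."""
    mu, sigma_x, sigma_eps = params
    return np.sum(category_counts * np.log(category_probs(mu, sigma_x, sigma_eps)))

def expected_theta_given_category(cat, mu, sigma_x, sigma_eps, n_grid=2001):
    """E(theta | c) = E[exp(x) | x + eps_w falls in category cat], by numerical integration."""
    lo, hi = CUTS[cat], CUTS[cat + 1]
    x = np.linspace(mu - 8 * sigma_x, mu + 8 * sigma_x, n_grid)   # grid over latent x
    fx = norm.pdf(x, mu, sigma_x)                                 # density of x
    p_cat = norm.cdf((hi - x) / sigma_eps) - norm.cdf((lo - x) / sigma_eps)
    return np.sum(np.exp(x) * p_cat * fx) / np.sum(p_cat * fx)

# Illustrative parameter values in the spirit of the estimates reported below.
mu, sigma_x, sigma_eps = -1.96, 1.0, 1.4
print(category_probs(mu, sigma_x, sigma_eps))
print([round(expected_theta_given_category(c, mu, sigma_x, sigma_eps), 3) for c in range(4)])
```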

Advantages of the Cardinal Proxy h=E(θ|c)

• One-dimensional, quantitative measure that flexibly summarizes the details of the information we have about risk tolerance:
  - ranges
  - response error
  - multiple observations for some respondents, single observations for others

• Univariate OLS regressions with the cardinal proxy as an independent variable are unbiased.

Limitations of the Cardinal Proxy h=E(θ|c)

• Captures only about 20% of the total variance of underlying risk tolerance.

λ = Var(θ)/Var(h) ≈ 5

• OLS regressions of h on demographic variables underestimate group differences by roughly this factor of 5.

• Multivariate OLS regressions with h as an independent variable are biased and underestimate the contribution of θ to R².

Because λ is known, we can still get consistent estimates.

• People often talk as if having an imperfect measure inevitably leads to inconsistent estimates.

• But it is possible to correct for measurement error, if it has known characteristics.

A Nonstandard Errors-in-Variables Problem

• θ = h+u = E(θ|c)+u

• h ⊥ u [E(u|h) = 0]

• Contrast with the classical errors-in-variables problem:
  - Note that h = θ − u.
  - In a classical errors-in-variables problem, θ ⊥ −u, or equivalently θ ⊥ u.
  - But in this case, Cov(θ,u) = Var(u) > 0.

The Underlying Structural Model

• y = θ·δ_θ + z·δ_z + ν (everything demeaned)

• E(ν|θ,z,ε) = 0

• y = risky asset share

• z = sex, education, age, race, log(income), log(net worth)

• Note the assumption that the vector of response errors ε is ignorable (redundant) in the underlying structural model.

Substituting h for θ: z is correlated with the unobserved part of RRT

• y = h·δ_θ + z·δ_z + η

• η = (θ − h)·δ_θ + ν = u·δ_θ + ν

• E(η|h) = 0

• But E(η|z) = δ_θ · E[(θ − h)|z] ≠ 0

Assumptions Behind the Measurement Error Correction

1. z = θβ + ζ
   • z is linear in its relationship to true risk tolerance
   • more generally, z could be a linear combination of a small set of specific functions of risk tolerance (fewer than the number of categories)

2. E(ζ|θ,ε) = 0
   • ζ is uncorrelated with the response error
   • E(ζ|h) = E(ζ|f(θ,ε)) = 0

Adjusting Covariances for Measurement Error

• Since E(ζ|θ)=0 and E(ζ|h)=0, β in the equation z = θβ + ζ is both the population OLS estimate and the population IV estimate using h as an instrument:

β = Cov(θ,z)/Var(θ) = Cov(h,z)/Cov(h,θ)

• Cov(θ,z) = [Var(θ)/Cov(h,θ)] · Cov(h,z)

• Var(θ) = λ · Var(h)

• Cov(h,θ) = Cov(h, h+u) = Var(h)

• Therefore Cov(θ,z) = λ · Cov(h,z) (λ ≈ 5)

Method of Moments

• E(hη)=0

• E(z’ω)=0

• η = y − h·δ_θ − z·δ_z

• ω = y − λ·h·δ_θ − z·δ_z (a method-of-moments sketch based on these conditions follows below)
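As a hedged sketch (our own code, under the assumptions above; the function and variable names are ours), the system E(hη) = 0, E(z′ω) = 0 is linear in (δ_θ, δ_z) and just identified, so its sample analogue can be solved directly:

```python
import numpy as np

def method_of_moments(y, h, Z, lam=5.0):
    """Solve the sample analogues of E(h*eta) = 0 and E(Z'*omega) = 0, where
    eta = y - h*d_theta - Z@d_z and omega = y - lam*h*d_theta - Z@d_z.
    Assumes y, h, and the columns of Z are already demeaned. Returns [d_theta, d_z...]."""
    h = h.reshape(-1, 1)
    top = np.hstack([h.T @ h, h.T @ Z])              # from E(h * eta) = 0
    bottom = np.hstack([lam * (Z.T @ h), Z.T @ Z])   # from E(Z' * omega) = 0
    A = np.vstack([top, bottom])
    b = np.concatenate([h.T @ y, Z.T @ y])
    return np.linalg.solve(A, b)
```

The λ factor in the ω moment builds in the covariance adjustment Cov(θ,z) = λ·Cov(h,z) from the previous slide, so the z coefficients are purged of the part of risk tolerance that h misses.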

Implied R2 if θ were observed

Persistent response error: x_w = x + κ + ε_w; τ = σ_x² / [σ_x² + σ_κ²] = 0.5

Looking for the Effect of z on θ

• Suppose that, if we had a perfect measure of θ, we would want to know the OLS estimate of b in the equation θ = z·b + ξ.

• If h is the closest we can get to θ, one might be tempted to estimate h = z·d + υ by OLS.

• Under the assumptions above, b = Cov(z,θ)/Var(z) = λ·Cov(z,h)/Var(z) = λ·d (a small simulation below illustrates this correction).
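As a hedged simulation sketch (our own illustration; the variances, β, and sample size are arbitrary assumptions), scaling the proxy-based OLS slope by λ recovers the slope one would get with a perfect measure of θ:

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam, beta = 200_000, 5.0, 0.3

# Proxy h and residual u = theta - h, with h independent of u and Var(theta) = lam * Var(h).
h = rng.normal(0.0, 0.2, n)
u = rng.normal(0.0, 0.2 * np.sqrt(lam - 1.0), n)
theta = h + u

# A demographic variable generated as z = theta*beta + zeta, zeta independent of (theta, h).
z = theta * beta + rng.normal(0.0, 0.5, n)

def ols_slope(y, x):
    """Univariate OLS slope of y on x (after demeaning)."""
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / (x @ x)

b_perfect = ols_slope(theta, z)   # slope with a perfect measure of theta
d_proxy = ols_slope(h, z)         # slope with the cardinal proxy h
print(b_perfect, d_proxy, lam * d_proxy)   # lam * d_proxy is close to b_perfect
```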

Is there really that big a difference between men and women in RRT?

            Men       Women
μ        −1.971      −1.948
          (.039)      (.032)
σ_x       1.007       1.047
          (.119)      (.079)
σ_ε       1.517       1.292
          (.080)      (.063)

(standard errors in parentheses)

Results of the Unrestricted MLE

• Men and women look alike in the mean and variance of their true risk tolerance.

• However, the difference in σε is significant (t=2.2). Women have less response error. (They may answer the questions more carefully than men.)

• The higher response error generates more apparently risk-tolerant responses for men.

What went wrong in Table 13?

• We assumed z unrelated to ε. (E(ζ|θ,ε)=0, where z = θβ + ζ)

• This assumption is violated: if men have larger response errors, ε² is correlated with being male.

• This variation can be handled by:
  1. Defining h = E(θ|c,s), where s = sex.
  2. Giving a special role to s in the second-step corrections.

What if a separate transitory error ψ affects behavior?

• Suppose y = exp(x+ψ)·δ_θ + z·δ_z + ν

• ψ ⊥ ε, since ψ is a different transitory error.

• Then the appropriate cardinal proxy is

h* = E(exp(x+ψ)|c) = E(e^ψ)·E(e^x|c) = Ψ·E(θ|c) = Ψ·h,

a constant Ψ = E(e^ψ) times h.

• Also, use λ* in place of λ, where

λ* = λ·[exp(σ_x² + σ_ψ²) − 1] / [exp(σ_x²) − 1].

Otherwise, the procedure is the same (a small numerical check follows below).
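As a hedged numerical sketch (our own; the value of σ_ψ is an arbitrary assumption), the adjusted factor λ* follows directly from the formula above:

```python
import numpy as np

def lambda_star(lam, sigma_x, sigma_psi):
    """lambda* = lambda * [exp(sigma_x^2 + sigma_psi^2) - 1] / [exp(sigma_x^2) - 1]."""
    return lam * (np.exp(sigma_x ** 2 + sigma_psi ** 2) - 1.0) / (np.exp(sigma_x ** 2) - 1.0)

# lambda ≈ 5 and sigma_x ≈ 1 as in the estimates above; sigma_psi = 0.5 is assumed.
print(lambda_star(5.0, 1.0, 0.5))   # about 7.2
```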

What if a permanent error (distinct from κ) affects behavior?

• Note that we implicitly assumed that κ did not affect behavior.

• If there is a piece of the permanent error that does affect behavior, it is observationally indistinguishable from any other component of θ.

• In other words, if it acts like a component of true risk aversion in all respects, then it might as well be a part of risk aversion.

• This logic makes the effective permanent error variance smaller since it is limited to the permanent errors that affect only survey responses and not real-life behavior.

Controlling for Risk Preferences

• One of the big motivations for measuring RRT is to “control for risk tolerance” in estimation.

• Clearly, one could control for RRT if one had a perfect measure of θ. Not a likely event!

• We show how to control for risk tolerance and obtain consistent estimates even with an imperfect measure.

Global Theme

• Theoretical concepts are often measured imperfectly.

• When using an imperfect measure, it is essential to correct for measurement error. Researchers usually fail to do this and make improper inferences. But it is often possible to correct for these problems by taking enough care.