Regression with a Binary Dependent Variable


Page 1:

Regression with a Binary Dependent Variable

Page 2:

Outline
- Linear Probability Model
- Probit and Logit Regression
  - Probit Model
  - Logit Regression
- Estimation and Inference
  - Nonlinear Least Squares
  - Maximum Likelihood
  - Marginal Effect
- Application
- Misspecification

Page 3:

So far the dependent variable (Y) has been continuous:
- district-wide average test score
- traffic fatality rate

What if Y is binary?
- Y = get into college; X = father's years of education
- Y = person smokes, or not; X = income
- Y = mortgage application is accepted, or not; X = income, house characteristics, marital status, race

Page 4:

Example: Mortgage denial and race

The Boston Fed HMDA data set:
- Individual applications for single-family mortgages made in 1990 in the greater Boston area.
- 2380 observations, collected under the Home Mortgage Disclosure Act (HMDA).

Variables:
- Dependent variable: is the mortgage denied or accepted?
- Independent variables: income, wealth, employment status; other loan and property characteristics; race of applicant.

Page 5:

The Linear Probability Model

A natural starting point is the linear regression model with a single regressor:

Yi = β0 + β1Xi + ui

But when Y is binary:
- What does β1 mean? Is β1 = ΔY/ΔX?
- What does the line β0 + β1X mean?
- What does the predicted value Ŷ mean? For example, what does Ŷ = 0.26 mean?

Page 6:

Recall assumption #1: E(ui|Xi) = 0, so

E(Yi|Xi) = β0 + β1Xi

When Y is binary,

E(Y) = 1 × Pr(Y = 1) + 0 × Pr(Y = 0) = Pr(Y = 1)

so

E(Y|X) = Pr(Y = 1|X)

Page 7:

When Y is binary, the linear regression model

Yi = β0 + β1Xi + ui

is called the linear probability model. The predicted value is a probability:

- E(Y|X = x) = Pr(Y = 1|X = x) = the probability that Y = 1 given X = x
- Ŷ = the predicted probability that Yi = 1, given X
- β1 = the change in the probability that Y = 1 for a given change in x:

β1 = [Pr(Y = 1|X = x + Δx) − Pr(Y = 1|X = x)] / Δx

Page 8:

Example: linear probability model, HMDA data

Mortgage denial v. ratio of debt payments to income (P/I ratio) in the HMDA data set

Page 9:

Linear probability model: HMDA data

Predicted Pr(deny = 1) = −.080 + .604 × (P/I ratio)    (n = 2380)

What is the predicted value for P/I ratio = .3?

Predicted Pr(deny = 1 | P/I = .3) = −.080 + .604 × .3 = .101

Calculating "effects": increase the P/I ratio from .3 to .4:

Predicted Pr(deny = 1 | P/I = .4) = −.080 + .604 × .4 = .161

The effect on the probability of denial of an increase in the P/I ratio from .3 to .4 is to increase the probability by .604 × .1 = .060, that is, by about 6 percentage points.
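A quick arithmetic check of these numbers - a minimal Python sketch, where the coefficient values are simply the estimates quoted above:

```python
# LPM arithmetic check, using the estimates quoted above:
# predicted Pr(deny = 1) = b0 + b1 * (P/I ratio)
b0, b1 = -0.080, 0.604

p_low = b0 + b1 * 0.3    # predicted probability at P/I = .3  -> ~ .101
p_high = b0 + b1 * 0.4   # predicted probability at P/I = .4  -> ~ .161
print(p_low, p_high, p_high - p_low)  # effect = b1 * 0.1 ~ .060
```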

Page 10:

Next include black as a regressor:

Predicted Pr(deny = 1) = −.091 + .559 × (P/I ratio) + .177 × black

Predicted probability of denial:
- for a black applicant with P/I ratio = .3: −.091 + .559 × .3 + .177 = .254
- for a white applicant with P/I ratio = .3: −.091 + .559 × .3 = .077

Difference = .177 = 17.7 percentage points. The coefficient on black is significant at the 5% level.

Page 11:

The linear probability model: Summary

Models Pr(Y = 1|X) as a linear function of X.

Advantages:
- simple to estimate and to interpret
- inference is the same as for multiple regression (need heteroskedasticity-robust standard errors; see the estimation sketch after this list)

Disadvantages:
- Does it make sense that the probability should be linear in X?
- Predicted probabilities can be < 0 or > 1!

These disadvantages can be addressed by using a nonlinear probability model: probit and logit regression.
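Before moving on, here is the estimation sketch referenced in the advantages list: a minimal LPM fit in Python with statsmodels, on simulated stand-in data (all variable names and numbers here are illustrative, not the actual HMDA file):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in for the HMDA data (illustrative only).
rng = np.random.default_rng(0)
df = pd.DataFrame({"pi_ratio": rng.uniform(0.0, 0.8, size=500)})
df["deny"] = (rng.uniform(size=500) < -0.08 + 0.6 * df["pi_ratio"]).astype(int)

# The LPM is just OLS of the binary Y on X, with heteroskedasticity-robust SEs.
X = sm.add_constant(df[["pi_ratio"]])
lpm = sm.OLS(df["deny"], X).fit(cov_type="HC1")  # White (HC1) robust SEs
print(lpm.params, lpm.bse)

# Predicted "probabilities" from an LPM can fall outside [0, 1].
print(lpm.predict([[1.0, 0.3], [1.0, 0.4]]))
```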

Page 12:

Probit and Logit Regression

The problem with the linear probability model is that it models the probability of Y = 1 as linear:

Pr(Y = 1|X) = β0 + β1X

Instead, we want:
- 0 ≤ Pr(Y = 1|X) ≤ 1 for all X
- Pr(Y = 1|X) to be increasing in X (for β1 > 0)

This requires a nonlinear functional form for the probability. How about an "S-curve"?

Page 13:

The probit model satisfies these conditions:
- 0 ≤ Pr(Y = 1|X) ≤ 1 for all X
- Pr(Y = 1|X) is increasing in X (for β1 > 0)

Page 14:

Probit regression models the probability that Y = 1 using the cumulative standard normal distribution function, evaluated at z = β0 + β1X:

Pr(Y = 1|X) = Φ(β0 + β1X)

- Φ is the cumulative normal distribution function.
- z = β0 + β1X is the "z-value" or "z-index" of the probit model.

Example: Suppose β0 = −2, β1 = 3, and X = .4, so

Pr(Y = 1|X = .4) = Φ(−2 + 3 × .4) = Φ(−.8)

= the area under the standard normal density to the left of z = −.8, which is...

Page 15:

Pr(Z ≤ −0.8) = .2119
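This is just a normal-CDF lookup; a one-line check in Python (scipy assumed available):

```python
from scipy.stats import norm

# Area under the standard normal density to the left of z = -0.8
print(norm.cdf(-0.8))  # 0.2118... ~= .2119
```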

Page 16:

Why use the cumulative normal probability distribution?

- The "S-shape" gives us what we want: 0 ≤ Pr(Y = 1|X) ≤ 1 for all X, and Pr(Y = 1|X) increasing in X (for β1 > 0).
- Easy to use - the probabilities are tabulated in the cumulative normal tables.
- Relatively straightforward interpretation:
  - z = β0 + β1X is the predicted z-value, given X
  - β1 is the change in the z-value for a unit change in X

Page 17:

Another way to see the probit model is through the interpretation of a latent variable.

Suppose there exists a latent variable

Yi* = β0 + β1Xi + ui,    ui|Xi ~ N(0, σ²),

where Yi* is unobserved. The observed Yi is 1 if Yi* ≥ 0, and is 0 if Yi* < 0. Note that Var(ui|Xi) = σ² implies homoscedasticity.

Page 18:

In other words,

Pr(Yi = 1|Xi) = Pr(Yi* ≥ 0|Xi) = Pr(ui ≥ −(β0 + β1Xi)) = Pr(ui/σ ≤ (β0 + β1Xi)/σ) = Φ((β0 + β1Xi)/σ)

(using the symmetry of the normal distribution). Similarly,

Pr(Yi = 0|Xi) = 1 − Φ((β0 + β1Xi)/σ)

Page 19:

Furthermore, since we can only estimate the ratios β0/σ and β1/σ, not β0, β1, and σ separately, it is assumed that σ = 1. Therefore,

Pr(Yi = 1|Xi) = Φ(β0 + β1Xi)

Page 20: [STATA probit output - not captured in the transcript.]

Page 21:

STATA Example: HMDA data, ctd.

Estimated probit: Pr(deny = 1|P/I ratio) = Φ(−2.19 + 2.97 × (P/I ratio))

Positive coefficient: does this make sense? Standard errors have the usual interpretation.

Predicted probabilities:

Pr(deny = 1|P/I = .3) = Φ(−2.19 + 2.97 × .3) = Φ(−1.30) = .097

Effect of a change in P/I ratio from .3 to .4:

Pr(deny = 1|P/I = .4) = Φ(−2.19 + 2.97 × .4) = Φ(−1.00) = .159

The predicted probability of denial rises from .097 to .159.
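The same probit arithmetic in Python, plugging in the estimates quoted above (a sketch, not the actual STATA run):

```python
from scipy.stats import norm

# Probit predicted probabilities, using the estimates quoted above.
b0, b1 = -2.19, 2.97

p_03 = norm.cdf(b0 + b1 * 0.3)  # ~ .097
p_04 = norm.cdf(b0 + b1 * 0.4)  # ~ .159
print(p_03, p_04, p_04 - p_03)  # effect of raising the P/I ratio from .3 to .4
```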

Page 22:

Probit regression with multiple regressors

Pr(Y = 1|X1, X2) = Φ(β0 + β1X1 + β2X2)

- Φ is the cumulative normal distribution function.
- z = β0 + β1X1 + β2X2 is the "z-value" or "z-index" of the probit model.
- β1 is the effect on the z-score of a unit change in X1, holding constant X2.

Pages 23-24: [STATA probit output - not captured in the transcript.]

Page 25:

STATA Example: HMDA data, ctd.

Estimated probit: Pr(deny = 1|P/I ratio, black) = Φ(−2.26 + 2.74 × (P/I ratio) + .71 × black)

Is the coefficient on black statistically significant?

Estimated effect of race for P/I ratio = .3:

Pr(deny = 1|P/I = .3, black = 1) = Φ(−2.26 + 2.74 × .3 + .71) = .233
Pr(deny = 1|P/I = .3, black = 0) = Φ(−2.26 + 2.74 × .3) = .075

Difference in rejection probabilities = .158 (15.8 percentage points)

Page 26:

Logit regression

Logit regression models the probability of Y = 1 as the cumulative standard logistic distribution function, evaluated at z = β0 + β1X:

Pr(Y = 1|X) = F(β0 + β1X)

F is the cumulative logistic distribution function:

F(β0 + β1X) = 1 / (1 + e^(−(β0 + β1X)))
Page 27:

where z = β0 + β1X.

Example: β0 = −3, β1 = 2, X = .4, so

Pr(Y = 1|X = .4) = 1 / (1 + e^(−(−3 + 2 × .4))) = 1 / (1 + e^(2.2)) = .0998

Why bother with logit if we have probit?
- Historically, logit was more convenient to compute.
- In practice, the two are very similar.
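A short sketch that reproduces the logit example above and illustrates why logit and probit give similar probabilities (the 1.6 rescaling is a standard rule-of-thumb approximation, not something from the slides):

```python
import numpy as np
from scipy.stats import logistic, norm

# The logit example above: Pr(Y = 1|X = .4) with b0 = -3, b1 = 2.
z = -3 + 2 * 0.4
print(1 / (1 + np.exp(-z)), logistic.cdf(z))  # both ~ .0998

# The logistic CDF is close to a rescaled normal CDF, which is why
# fitted logit and probit probabilities usually nearly agree.
grid = np.linspace(-3.0, 3.0, 7)
print(logistic.cdf(grid) - norm.cdf(grid / 1.6))  # small differences
```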

Page 28: [Slide content not captured in the transcript.]

Page 29:

Predicted probabilities from estimated probit and logit models usually are very close.

Page 30:

Estimation and Inference

Probit model: Pr(Y = 1|X) = Φ(β0 + β1X)

Estimation and inference:
- How do we estimate β0 and β1?
- What is the sampling distribution of the estimators?
- Why can we use the usual methods of inference?

First we discuss nonlinear least squares (easier to explain). Then we discuss maximum likelihood estimation (what is actually done in practice).

Page 31:

Probit estimation by nonlinear least squares

Recall OLS:

min over b0, b1 of  Σi [Yi − (b0 + b1Xi)]²

The result is the OLS estimators β̂0 and β̂1. In probit, we have a different regression function - the nonlinear probit model. So, we could estimate β0 and β1 by nonlinear least squares:

min over b0, b1 of  Σi [Yi − Φ(b0 + b1Xi)]²

Solving this yields the nonlinear least squares estimator of the probit coefficients.
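A minimal sketch of this minimization in Python with scipy's general-purpose optimizer, on simulated data (all values illustrative):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Simulate data from a probit model with true (b0, b1) = (-1.0, 2.0).
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = (rng.uniform(size=1000) < norm.cdf(-1.0 + 2.0 * x)).astype(float)

# Nonlinear least squares objective: sum of [Yi - Phi(b0 + b1*Xi)]^2.
def ssr(b):
    return np.sum((y - norm.cdf(b[0] + b[1] * x)) ** 2)

res = minimize(ssr, x0=[0.0, 0.0])  # numerical minimization - no closed form
print(res.x)  # NLS estimates, roughly (-1, 2)
```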

Page 32:

How to solve this minimization problem?
- Calculus doesn't give an explicit solution.
- It must be solved numerically using the computer, e.g. by the "trial and error" method of trying one set of values for (b0, b1), then trying another, and another, ...
- Better idea: use specialized minimization algorithms.

In practice, nonlinear least squares isn't used because it isn't efficient - an estimator with a smaller variance exists: the maximum likelihood estimator.

Page 33:

Probit estimation by maximum likelihood

The likelihood function is the conditional density of Y1, …, Yn given X1, …, Xn, treated as a function of the unknown parameters β0 and β1.

The maximum likelihood estimator (MLE) is the value of (β0, β1) that maximizes the likelihood function.

The MLE is the value of (β0, β1) that best describes the full distribution of the data.

In large samples, the MLE is:
- consistent
- normally distributed
- efficient (has the smallest variance of all estimators)

Page 34:

Special case: the probit MLE with no X

Y = 1 with probability p, Y = 0 with probability 1 − p (Bernoulli distribution)

Data: Y1, …, Yn, i.i.d.

Derivation of the likelihood starts with the density of Y1:

Pr(Y1 = 1) = p and Pr(Y1 = 0) = 1 − p

so

Pr(Y1 = y1) = p^(y1) × (1 − p)^(1 − y1)

Page 35:

Joint density of (Y1, Y2): because Y1 and Y2 are independent,

Pr(Y1 = y1, Y2 = y2) = Pr(Y1 = y1) × Pr(Y2 = y2) = [p^(y1) (1 − p)^(1 − y1)] × [p^(y2) (1 − p)^(1 − y2)]

Joint density of (Y1, …, Yn):

Pr(Y1 = y1, …, Yn = yn) = p^(Σyi) × (1 − p)^(n − Σyi)

Page 36:

The likelihood is the joint density, treated as a function of the unknown parameter p:

f(p; Y1, …, Yn) = p^(ΣYi) × (1 − p)^(n − ΣYi)

The MLE maximizes the likelihood. It is standard to work with the log likelihood, ln f(p; Y1, …, Yn):

ln f(p; Y1, …, Yn) = (ΣYi) ln p + (n − ΣYi) ln(1 − p)

Page 37:

Setting the derivative of the log likelihood to zero:

d ln f / dp = (ΣYi)/p − (n − ΣYi)/(1 − p) = 0

Solving for p yields the MLE. That is, the MLE p̂ satisfies

(ΣYi)/p̂ = (n − ΣYi)/(1 − p̂),  so  p̂ = (1/n) ΣYi = Ȳ

Page 38:

The MLE in the "no-X" case (Bernoulli distribution):

p̂ = Ȳ = the fraction of 1's

For Yi i.i.d. Bernoulli, the MLE is the "natural" estimator of p, the fraction of 1's, which is Ȳ.

We already know the essentials of inference:
- In large samples, the sampling distribution of p̂ = Ȳ is normally distributed.
- Thus inference is "as usual": hypothesis testing via the t-statistic, confidence interval as p̂ ± 1.96 SE.
- STATA note: to emphasize the requirement of large n, the printout calls the t-statistic the z-statistic.
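A quick numerical illustration of the no-X case on simulated Bernoulli draws (all numbers illustrative):

```python
import numpy as np

# Simulate i.i.d. Bernoulli(p = 0.3) data; the MLE of p is the fraction of 1's.
rng = np.random.default_rng(0)
y = rng.binomial(1, 0.3, size=1000)

p_hat = y.mean()                             # MLE: Y-bar
se = np.sqrt(p_hat * (1 - p_hat) / y.size)   # large-sample standard error
z = (p_hat - 0.3) / se                       # z-statistic for H0: p = .3
print(p_hat, (p_hat - 1.96 * se, p_hat + 1.96 * se), z)
```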

Page 39:

The probit likelihood with one X

The derivation starts with the density of Y1, given X1:

Pr(Y1 = 1|X1) = Φ(β0 + β1X1)
Pr(Y1 = 0|X1) = 1 − Φ(β0 + β1X1)

so

Pr(Y1 = y1|X1) = [Φ(β0 + β1X1)]^(y1) × [1 − Φ(β0 + β1X1)]^(1 − y1)

Page 40:

The probit likelihood function is the joint density of Y1, …, Yn given X1, …, Xn, treated as a function of β0 and β1:

f(β0, β1; Y1, …, Yn|X1, …, Xn) = Πi [Φ(β0 + β1Xi)]^(Yi) × [1 − Φ(β0 + β1Xi)]^(1 − Yi)

Page 41:

The probit likelihood function:
- We can't solve for the maximum explicitly.
- It must be maximized using numerical methods.
- As in the case of no X, in large samples the MLEs β̂0 and β̂1 are consistent and normally distributed, their standard errors can be computed, and testing and confidence intervals proceed as usual.

For multiple X's, see SW App. 11.2.

Page 42:

The logit likelihood with one X

- The only difference between probit and logit is the functional form used for the probability: Φ is replaced by the cumulative logistic distribution function F.
- Otherwise, the likelihood is similar; for details see SW App. 11.2.
- As with probit, the MLEs β̂0 and β̂1 are consistent and normally distributed, their standard errors can be computed, and testing and confidence intervals proceed as usual.

Page 43:

Measures of fit

The R² and adjusted R² don't make sense here (why?). So, two other specialized measures are used (a sketch of both appears below):

- The fraction correctly predicted = the fraction of Yi's for which the predicted probability is > 50% (if Yi = 1) or < 50% (if Yi = 0).
- The pseudo-R² measures the fit using the likelihood function: it measures the improvement in the value of the log likelihood, relative to having no X's (see SW App. 9.2). It simplifies to the R² in the linear model with normally distributed errors.
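As referenced above, a sketch of both fit measures, continuing from the simulated probit fit in the earlier sketch (the names `probit_fit`, `y`, `X` are from that sketch, not the slides):

```python
# Continuing from `probit_fit`, `y`, and `X` in the earlier probit sketch.
p_pred = probit_fit.predict(X)

# Fraction correctly predicted: predict Y = 1 when the fitted probability > 1/2.
frac_correct = ((p_pred > 0.5) == (y == 1)).mean()

# McFadden pseudo-R^2: 1 - ln L(fitted model) / ln L(intercept-only model).
pseudo_r2 = 1 - probit_fit.llf / probit_fit.llnull
print(frac_correct, pseudo_r2, probit_fit.prsquared)  # last two should agree
```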

Page 44:

Marginal Effect

However, what we really care about is not β1 itself. We want to know how a change in X will affect the probability that Y = 1. For the probit model,

∂Pr(Y = 1|X)/∂X = φ(β0 + β1X) × β1

where φ is the pdf of the standard normal distribution.

Page 45:

The effect of a change in X on Pr(Y = 1|X) depends on the value of X. In practice, we usually evaluate the marginal effect at the sample average X̄, i.e. the marginal effect is

φ(β̂0 + β̂1X̄) × β̂1
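A sketch of the marginal effect at the mean, computed by hand and via statsmodels (continuing from the earlier simulated probit fit; names illustrative):

```python
from scipy.stats import norm

# Marginal effect at the sample average: phi(b0 + b1 * x_bar) * b1,
# continuing from `probit_fit` and `x` in the earlier probit sketch.
b0_hat, b1_hat = probit_fit.params
print(norm.pdf(b0_hat + b1_hat * x.mean()) * b1_hat)

# statsmodels computes the same quantity directly:
print(probit_fit.get_margeff(at="mean").summary())
```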

Page 46:

- When X is binary, it is not clear what the sample average means. The marginal effect is then measured as the difference in probability between X = 1 and X = 0.
- In STATA, the command dprobit reports the marginal effects instead of the coefficients β̂0 and β̂1.

Page 47:

Application to the Boston HMDA Data

Mortgages (home loans) are an essential part of buying a home.

Is there differential access to home loans by race?

If two otherwise identical individuals, one white and one black, applied for a home loan, is there a difference in the probability of denial?

Page 48:

The HMDA Data Set

Data on individual characteristics, property characteristics, and loan denial/acceptance.

The mortgage application process in 1990-1991:
- Go to a bank or mortgage company.
- Fill out an application (personal + financial info).
- Meet with the loan officer.
- The loan officer decides - by law, in a race-blind way. Presumably, the bank wants to make profitable loans, and the loan officer doesn't want to originate defaults.

Page 49:

The loan officer's decision

The loan officer uses key financial variables:
- P/I ratio
- housing expense-to-income ratio
- loan-to-value ratio
- personal credit history

The decision rule is nonlinear:
- loan-to-value ratio > 80%
- loan-to-value ratio > 95% (what happens in default?)
- credit score

Page 50:

Regression specifications

Pr(deny = 1|black, other X's) was modeled with the:
- linear probability model
- probit

Main problem with the regressions so far: potential omitted variable bias. All of the following enter the loan officer's decision function, and are or could be correlated with race:
- wealth, type of employment
- credit history
- family status

Variables in the HMDA data set...

Pages 51-55: [Regression results - not captured in the transcript.]

Page 56:

Summary of Empirical Results

- Coefficients on the financial variables make sense.
- Black is statistically significant in all specifications.
- Race-financial variable interactions aren't significant.
- Including the covariates sharply reduces the effect of race on the denial probability.
- LPM, probit, logit: similar estimates of the effect of race on the probability of denial.
- Estimated effects are large in a "real world" sense.

Page 57:

Remaining threats to internal, external validity

Internal validity:
- omitted variable bias: what else is learned in the in-person interviews?
- functional form misspecification (no...)
- measurement error (originally, yes; now, no...)
- selection: a random sample of loan applications; define the population to be loan applicants
- simultaneous causality (no)

External validity:
- This is for Boston in 1990-91. What about today?

Page 58:

Misspecification

Misspecification is a big problem in maximum likelihood estimation. Here we only consider the problem of heteroscedasticity.

By assuming σ = 1 in the probit model, we only estimate β0 and β1 in the likelihood function. If ui is heteroscedastic, so that Var(ui) = σi², then we need to estimate σi² in addition to β0 and β1.

Page 59:

But the problem can be more than an increasing number of parameters to be estimated. Suppose the heteroscedasticity is of the form Var(ui|Xi) = σi²; then

Pr(Yi = 1|Xi) = Φ((β0 + β1Xi)/σi)

The presence of heteroscedasticity causes inconsistency because the assumption of a constant σ is what allows us to identify β0 and β1.

Page 60:

To take a very particular but informative case, suppose that the heteroscedasticity takes the form Var(ui|Xi) = (σXi)²; then

Pr(Yi = 1|Xi) = Φ((β0 + β1Xi)/(σXi)) = Φ((β0/σ)(1/Xi) + β1/σ)

It is clear that our estimates will then be inconsistent for β0 and β1, but consistent for the ratios β0/σ and β1/σ.

The problem of misspecification such as heteroscedasticity is an argument for the linear probability model: OLS with White's heteroscedasticity-consistent covariance matrix is not efficient, but it is at least consistent.

Page 61:

Summary

- If Yi is binary, then E(Y|X) = Pr(Y = 1|X).
- Three models:
  - linear probability model (linear multiple regression)
  - probit (cumulative standard normal distribution)
  - logit (cumulative standard logistic distribution)
- LPM, probit, and logit all produce predicted probabilities.
- The effect of ΔX is the change in the conditional probability that Y = 1. For logit and probit, this depends on the initial X.
- Probit and logit are estimated via maximum likelihood.
- Coefficients are normally distributed for large n.
- Large-n hypothesis testing and confidence intervals are as usual.