TRANSCRIPT
-
Empirical Methods in Trade: Analyzing Trade Costs and Trade Facilitation
June 2015
Bangkok, Thailand
Cosimo Beverelli (ERSD/WTO) and Simon Neumueller (ERSD/WTO)
1
-
Content (I)
a. Classical regression model
b. Introduction to panel data analysis
c. Basic regression in Stata (see do file "ols.do")
d. Panel data regressions in Stata (see do file "panel.do")
e. Binary dependent variable models in cross-section
f. Binary dependent variable models with panel data
g. Binary dependent variable models: Examples of firm-level analysis
h. Binary dependent variable models in Stata
i. Count models
j. Count models in Stata
2
-
Content (II)
k. Censoring and truncation
l. Tobit (censored regression) model
m. Alternative estimators for censored regression models
n. Endogeneity
o. Instrumental variables
p. Instrumental variables in practice
q. Endogeneity: example with firm-level analysis
r. Instrumental variables models in Stata
s. Sample selection models
t. Sample selection: An example with firm-level analysis
u. Sample selection models in Stata
3
-
Content (I)
a. Classical regression model
4
-
a. Classical regression model
• Linear prediction
• Ordinary least squares (OLS) estimator
• Interpretation of coefficients
• Variance of the OLS estimator
• Hypothesis testing
• Example
5
-
6
Linear prediction
1. Starting from an economic model and/or an economic intuition, the purpose of regression is to test a theory and/or to estimate a relationship
2. Regression analysis studies the conditional prediction of a dependent (or endogenous) variable y given a vector of regressors (or predictors or covariates) x, E[y|x]
3. The classical regression model is:
• A stochastic model: y = E[y|x] + ε, where ε is an error (or disturbance) term
• A parametric model: E[y|x] = g(x, β), where g(·) is a specified function and β a vector of parameters to be estimated
• A linear model in parameters: g(·) is a linear function, so: E[y|x] = x'β
-
7
Ordinary least squares (OLS) estimator
• With a sample of N observations (i = 1, …, N) on y and x, the linear regression model is:
y_i = x_i'β + ε_i
where x_i is a K × 1 regressor vector and β is a K × 1 parameter vector (the first element of x_i is a 1 for all i)
• In matrix notation, this is written as y = Xβ + ε
• The OLS estimator of β minimizes the sum of squared errors:
Σ_{i=1}^N ε_i² = ε'ε = (y − Xβ)'(y − Xβ)
which (provided that X is of full column rank K) yields:
β̂_OLS = (X'X)⁻¹X'y = (Σ_i x_i x_i')⁻¹ Σ_i x_i y_i
• This is the best linear predictor of y given x if a squared error loss function L(e) = e² is used (where e ≡ y − ŷ is the prediction error)
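As an illustration outside the original slides, the closed-form solution β̂_OLS = (X'X)⁻¹X'y can be computed directly with NumPy on simulated data (all numbers below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
# design matrix: a constant plus two regressors (K = 3)
X = np.column_stack([np.ones(N), rng.normal(size=(N, 2))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(size=N)

# beta_hat = (X'X)^{-1} X'y, solved without forming the inverse explicitly
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)
```

By the first-order conditions, the residuals from this fit are orthogonal to every column of X.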
-
8
Interpretation of coefficients
• Economists are generally interested in marginal effects and elasticities
• Consider the model:
y = βx + ε
• β = ∂y/∂x gives the marginal effect of x on y
• If there is a dummy variable D, the model is:
y = βx + δD + ε
• δ = ∂y/∂D gives the difference in y between the observations for which D = 1 and the observations for which D = 0
• Example: if y is firm size and D = 1 if the firm exports (and zero otherwise), the estimated coefficient on D is the difference in size between exporters and non-exporters
-
9
Interpretation of coefficients (cont'd)
• Often, the baseline model is not a linear one, but is based on an exponential mean:
y = exp(βx)ε
• This implies a log-linear model of the form: ln y = βx + ln(ε)
• 100·β is the semi-elasticity of y with respect to x (percentage change in y following a marginal change in x)
• If the log-linear model contains a dummy variable: ln y = βx + δD + ln(ε)
• The percentage change (p) in y from switching on the dummy is equal to exp(δ̂) − 1
• You can do better and estimate p̂ = exp(δ̂)/exp(½ var(δ̂)) − 1, which is consistent and (almost) unbiased
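With hypothetical values δ̂ = 0.2 and var(δ̂) = 0.01 (made up for illustration, not from the slides), the naive and bias-corrected percentage effects compare as follows:

```python
import math

delta_hat = 0.20   # hypothetical estimated dummy coefficient
var_delta = 0.01   # hypothetical estimated variance of delta_hat

# naive percentage effect of switching on the dummy
naive = math.exp(delta_hat) - 1
# bias-corrected version: exp(delta)/exp(0.5*var(delta)) - 1
corrected = math.exp(delta_hat) / math.exp(0.5 * var_delta) - 1

print(f"naive:     {naive:.4f}")
print(f"corrected: {corrected:.4f}")  # slightly smaller after the correction
```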
-
10
Interpretation of coefficients (cont'd)
• In many applications, the estimated equation is log-log:
ln y = β ln x + ε
• β is the elasticity of y with respect to x (the percentage change in y following a one-percent increase in x)
• Notice that dummies enter linearly in a log-log model, so their interpretation is the one given on the previous slide
-
11
Variance of the OLS estimator
V(β̂) = (X'X)⁻¹X'V(y|X)X(X'X)⁻¹   (1)
• Assuming that X is non-stochastic, V(y) = V(ε) = Ω, so (1) becomes:
V(β̂) = (X'X)⁻¹X'ΩX(X'X)⁻¹   (2)
• Notice that we always assume independence (Cov(ε_i ε_j | x_i, x_j) = 0 for i ≠ j, i.e. conditionally uncorrelated observations), therefore Ω is a diagonal matrix
-
12
Variance of the OLS estimator (cont'd)
Case 1: Homoskedasticity
• ε_i is i.i.d. (0, σ²) for all i: Ω = σ²I, where I is the identity matrix of dimension N
• V(β̂) = σ²(X'X)⁻¹
• A consistent estimator of σ² is σ̂² = ε̂'ε̂/(N − K), where ε̂ ≡ y − Xβ̂
• Standard error of β̂_k: s.e.(β̂_k) = √(σ̂² [(X'X)⁻¹]_kk)
• See do file "ols.do"
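A minimal NumPy sketch of the homoskedastic variance formula σ̂²(X'X)⁻¹ (simulated data, not from the course's do files):

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 200, 2
X = np.column_stack([np.ones(N), rng.normal(size=N)])
y = X @ np.array([0.5, 1.5]) + rng.normal(size=N)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat

s2 = resid @ resid / (N - K)   # consistent estimator of sigma^2
V = s2 * XtX_inv               # homoskedastic VCV of beta_hat
se = np.sqrt(np.diag(V))       # standard errors of the coefficients
print(se)
```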
-
13
Variance of the OLS estimator (cont'd)
Case 2: Heteroskedasticity
• ε_i ~ (0, σ_i²)
• In this case, we need to estimate Ω in sandwich formula (2)
• Huber-White "robust" (i.e., heteroskedasticity-consistent) standard errors use Ω̂ = Diag(ε̂_i²), where ε̂_i ≡ y_i − x_i'β̂
• Stata computes (N/(N − K))(X'X)⁻¹X'Ω̂X(X'X)⁻¹, so that in case of homoskedastic errors the usual OLS standard errors would be obtained
• See do file "ols.do"
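The sandwich formula, including the N/(N − K) small-sample scaling mentioned above, can be sketched in NumPy as follows (simulated data, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 500
x = rng.normal(size=N)
X = np.column_stack([np.ones(N), x])
y = 1.0 + 2.0 * x + rng.normal(size=N) * (1 + np.abs(x))  # heteroskedastic errors

K = X.shape[1]
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
e = y - X @ beta_hat

# (N/(N-K)) * (X'X)^{-1} X' Diag(e_i^2) X (X'X)^{-1}
meat = (X * (e ** 2)[:, None]).T @ X
V_robust = (N / (N - K)) * XtX_inv @ meat @ XtX_inv
se_robust = np.sqrt(np.diag(V_robust))
print(se_robust)
```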
-
14
Hypothesis testing
• If we assume that ε|X ~ N(0, Ω), then β̂ ~ N(β, V(β̂))
• Hypothesis testing is based on the Normal, t and F distributions
• The simplest test is whether a regression coefficient is statistically different from zero: H₀: β_k = 0
• Under the null hypothesis (H₀):
β̂_k ~ N(0, [(X'X)⁻¹X'Ω̂X(X'X)⁻¹]_kk)
-
15
Hypothesis testing (cont'd)
• The test statistic is:
t_k ≡ (β̂_k − 0)/s.e.(β̂_k) ~ t_{N−K}
where t_{N−K} is the Student's t-distribution with N − K degrees of freedom
• Large values of |t_k| lead to rejection of the null hypothesis. In other words, if |t_k| is large enough, β̂_k is statistically different from zero
• Typically, a t-statistic above 2 or below −2 is considered significant at the 5% level (±1.96 if N is large)
• The p-value gives the probability, under the null hypothesis, of observing a test statistic at least as extreme as t_k. If β̂_k is significant at the 5% (1%) level, then the p-value is less than 0.05 (0.01)
-
16
Hypothesis testing (cont'd)
• Tests of multiple hypotheses of the form Rβ = α, where R is an h × K matrix (h is the number of restrictions tested), can easily be constructed
• Notable example: the global F-test for the joint significance of the complete set of regressors:
F = [ESS/(K − 1)] / [RSS/(N − K)] ~ F(K − 1, N − K)
• It is easy to show that:
F = [R²/(K − 1)] / [(1 − R²)/(N − K)] ~ F(K − 1, N − K)
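The second formula can be checked against the wage-equation example reported later in these slides (N = 428, K = 4 including the constant, R² = 0.1485); this Python sketch is illustrative and not part of the original material:

```python
# global F-test from R-squared: F = [R2/(K-1)] / [(1-R2)/(N-K)]
N, K = 428, 4   # 3 regressors plus a constant
R2 = 0.1485

F = (R2 / (K - 1)) / ((1 - R2) / (N - K))
print(round(F, 2))  # matches Stata's reported F(3, 424) = 24.65
```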
-
17
Example: Wage equation for married working women
• regress lwage educ exper age /* see do file "ols.do" */
Dep var: Ln(Wage)   Coeff.      Std. Err.   t       P>|t|   [95% Conf. Interval]
Education            .1092758    .0142011    7.69    0.000    .0813625   .1371891
Experience           .0163246    .0045966    3.55    0.000    .0072897   .0253595
Age                 -.0014064    .0048019   -0.29    0.770   -.0108448   .0080321
Constant            -.3469375    .2633613   -1.32    0.188   -.0108448   .0080321
Number of obs = 428
F(3, 424) = 24.65
Prob > F = 0.0000
R2 = 0.1485
Adj R2 = 0.1425
Root MSE = .66969
Notes:
• The t-values test the null hypothesis that each coefficient equals 0. To reject it at the 5% significance level, you need a t-value greater than 1.96 in absolute value. You can obtain the t-values by dividing each coefficient by its standard error
• Two-tail p-values test the same null hypothesis: to reject it at the 5% significance level, the p-value has to be lower than 0.05. In this case, only education and experience are significant
• Prob > F is the p-value of the global F-test; a value below 0.05 indicates a statistically significant relationship between the regressors and the dependent variable
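As a quick arithmetic check (not part of the original slides), the reported t-statistics can be reproduced by dividing each coefficient by its standard error:

```python
# (coefficient, standard error, reported t) from the regression table above
rows = {
    "educ":  (0.1092758, 0.0142011, 7.69),
    "exper": (0.0163246, 0.0045966, 3.55),
    "age":   (-0.0014064, 0.0048019, -0.29),
}
for name, (coef, se, t_reported) in rows.items():
    t = coef / se
    print(name, round(t, 2))  # matches the reported t column
```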
-
Content (I)
b. Introduction to panel data analysis
18
-
b. Introduction to panel data analysis
• Definition and advantages
• Panel data models and estimation
• Fixed effects model
• Alternatives to the fixed effects estimator
• Random effects model
• Hausman test and test of overidentifying restrictions
19
-
20
Definition and advantages
• Panel data are repeated observations on the same cross section
• Example: a cross-section of N firms observed over T time periods
• There are three advantages of panel data:
1. Increased precision in the estimation
2. Possibility to address omitted variable problems
3. Possibility of learning more about the dynamics of individual behavior
• Example: in a cross-section of firms, one may determine that 20% are exporting, but panel data are needed to determine whether the same 20% export each year
-
21
Panel data models and estimation
• The general linear panel data model permits the intercept and the slope coefficients to vary across individuals and over time:
y_it = α_it + x_it'β_it + ε_it,  i = 1, …, N,  t = 1, …, T
• The number of parameters to be estimated is larger than the number of observations, NT
• Restrictions on how α_it and β_it vary and on the behavior of the error term are needed
• In this context, we mainly discuss a specification of the general linear panel data model with individual-specific effects, the so-called fixed effects model
-
22
Fixed effects model
• The fixed effects model is an individual-specific effects model
1. It allows each individual to have a specific intercept (individual effect), while the slope parameters are the same:
y_it = α_i + x_it'β + ε_it   (3)
2. The individual-specific effects α_i are random variables that capture unobserved heterogeneity
• Example: α_i captures firm-specific (and not time-varying) characteristics that are not observable to the researcher (say, access to credit) and affect how much the firm exports (y_it)
3. Individual effects are potentially correlated with the observed regressors x_it
• Example: access to credit is potentially correlated with observable firm characteristics, such as size
-
23
Fixed effects estimator
• Take the model: y_it = α_i + x_it'β + ε_it
• Take the individual average over time:
ȳ_i = α_i + x̄_i'β + ε̄_i
• Subtracting the two equations, we obtain: y_it − ȳ_i = (x_it − x̄_i)'β + (ε_it − ε̄_i)
• OLS estimation of this equation gives the within estimator (also called the fixed effects estimator) β̂_FE
• β̂_FE measures the association between individual-specific deviations of regressors from their individual-specific time averages and individual-specific deviations of the dependent variable from its individual-specific time average
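To make the within transformation concrete, here is a small NumPy simulation (not from the course materials): even though the regressor is built to be correlated with the individual effect, demeaning by individual recovers the slope.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 50, 5
ids = np.repeat(np.arange(N), T)
alpha = rng.normal(size=N)                # individual effects
x = rng.normal(size=N * T) + alpha[ids]   # regressor correlated with alpha
y = alpha[ids] + 2.0 * x + rng.normal(size=N * T)

# within transformation: subtract each individual's time average
def demean(v, ids, N):
    means = np.bincount(ids, weights=v, minlength=N) / np.bincount(ids, minlength=N)
    return v - means[ids]

y_w, x_w = demean(y, ids, N), demean(x, ids, N)
beta_fe = (x_w @ y_w) / (x_w @ x_w)   # one-regressor within estimator
print(beta_fe)                        # should be close to the true beta of 2.0
```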
-
24
Fixed effects estimator (cont'd)
• There are two potential problems for statistical inference: heteroskedasticity and autocorrelation
• Correct statistical inference must be based on panel-robust sandwich standard errors
• Stata command: vce(cluster id) or robust cluster(id), where id is your panel variable
• For instance, if you observe firms over time, your id variable is the firm identifier
• You can also use panel bootstrap standard errors, because under the key assumption that observations are independent over i, the bootstrap procedure of re-sampling with replacement over i is justified
• Stata command: vce(bootstrap, reps(#)), where # is the number of pseudo-samples you want to use
• See do file "panel.do"
-
25
Fixed effects estimator (cont'd)
• Applying the within transformation seen above, we do not have to worry about the potential correlation between α_i and x_it
• As long as E[ε_it | x_i1, …, x_iT] = 0 (strict exogeneity) holds, β̂_FE is consistent
• Note: strict exogeneity implies that the error term has zero mean conditional on past, present and future values of the regressors
• In words, fixed effects gives consistent estimates in all cases in which we suspect that individual-specific unobserved variables are correlated with the observed ones (and this is normally the case…)
• The drawback of fixed effects estimation is that it does not allow one to identify the coefficients of time-invariant regressors (because if x_it = x_i, then x_it − x̄_i = 0)
• Example: it is not possible to identify the effect of foreign ownership on export values if ownership does not vary over time
-
26
Alternatives to the fixed effects estimator: LSDV and brute force OLS
• The least-squares dummy variable (LSDV) estimator estimates the model without the within transformation and with the inclusion of N individual dummy variables
• It is exactly equal to the within estimator…
• …but the cluster-robust standard errors differ, and if you have a "small panel" (large N, small T) you should prefer the ones from within estimation
• One can also apply OLS to model (3) by brute force; however, this implies inversion of an (N + K) × (N + K) matrix…
• See do file "panel.do"
-
27
Random effects model
• If you believe that there is no correlation between the unobserved individual effects and the regressors, the random effects model is appropriate
• The random effects estimator applies GLS (generalized least squares) to the model:
y_it = x_it'β + (ε_it + α_i) = x_it'β + u_it
• This model assumes ε_it ~ i.i.d.(0, σ_ε²) and α_i ~ i.i.d.(0, σ_α²), so u_it is equicorrelated
• GLS is more efficient than OLS because V(u_it) ≠ σ²I, and since a structure can be imposed on the error variance, GLS is feasible
• If there is no correlation between unobserved individual effects and the regressors, β̂_RE is efficient and consistent
• If this does not hold, β̂_RE is not consistent, because the error term u_it is correlated with the regressors
-
28
Hausman test and test of overidentifying restrictions
• To decide whether to use fixed effects or random effects, you need to test whether or not the errors are correlated with the exogenous variables
• The standard test is the Hausman test: the null hypothesis is that the errors are not correlated with the regressors, so under H₀ the preferred model is random effects
• Rejection of H₀ implies that you should use the fixed effects model
• A serious shortcoming of the Hausman test (as implemented in Stata) is that it cannot be performed after robust (or bootstrap) VCV estimation
• Fortunately, you can use a test of overidentifying restrictions instead (Stata command: xtoverid after the RE estimation)
• Unlike the Hausman version, the test reported by xtoverid extends straightforwardly to heteroskedastic- and cluster-robust versions, and is guaranteed always to generate a nonnegative test statistic; again, rejection of H₀ implies that you should use the fixed effects model
• See do file "panel.do"
-
Content (I)
c. Basic regression in Stata (see do file "ols.do")
29
-
c. Basic regression in Stata
• Stata's regress command runs a simple OLS regression
• regress depvar indepvar1 indepvar2 …, options
• Always use the option robust to ensure that the covariance estimator can handle heteroskedasticity of unknown form
• Usually apply the cluster option and specify an appropriate level of clustering to account for correlation within groups
• Rule of thumb: cluster at the most aggregated level of variables in the model
• Example: in a model with data by city, state, and country, cluster by country
30
-
Content (I)
d. Panel data regressions in Stata (see do file "panel.do")
31
-
d. Panel data regressions in Stata
• Fixed effects (within) estimation
• Brute force OLS
• LSDV
• Random effects
• Testing for fixed vs. random effects
32
-
33
Fixed effects (within) estimation
• A variety of commands are available for estimating fixed effects regressions
• The most efficient method is the fixed effects regression (within estimation), xtreg
• Stata's xtreg command is purpose-built for panel data regressions
• Use the fe option to specify fixed effects
• Make sure to set the panel dimension before using the xtreg command, using xtset
• For example:
• xtset countries sets up the panel dimension as countries
• xtreg depvar indepvar1 indepvar2 …, fe runs a regression with fixed effects by country
• Hint: xtset cannot work with string variables, so use (e.g.) egen countries = group(country) to convert string categories to numbers
-
34
Fixed effects (within) estimation (cont'd)
• As with regress, always specify the robust option with xtreg
• xtreg, robust will automatically correct for clustering at the level of the panel variable (firms in the previous example)
• Note that xtreg can only include fixed effects in one dimension. For additional dimensions, enter the dummies manually (see slide 8)
-
35
Brute force OLS
• The fixed effects can enter as dummies in a standard regression (brute force OLS)
• regress depvar indepvar1 indepvar2 … dum1 dum2 …, options
• Specify dum* to include all dummy variables with the same stem
• Stata automatically excludes one dummy if a constant is retained in the model
• With the same clustering specification, results should be identical between regress with dummy variables and xtreg, fe
-
36
Brute force OLS (cont'd)
• To create dummy variables based on the categories of another variable, use the tabulate command with the gen() option
• For example:
• quietly tabulate country, gen(ctry_dum_)
• will produce ctry_dum_1, ctry_dum_2, etc. automatically
• Then regress depvar indepvar1 indepvar2 … ctry_dum_*, robust cluster()
• Or you can use factor-variable notation, i.varname, to create the dummies
• regress depvar indepvar1 indepvar2 … i.country, robust cluster()
-
37
LSDV
• The least-squares dummy variable (LSDV) estimator estimates the model without the within transformation and with the inclusion of N individual dummy variables
• areg depvar indepvar1 indepvar2 …, absorb(varname) robust cluster()
• where varname is the categorical variable to be absorbed
-
38
Random effects estimation
• By specifying the re option, xtreg can also estimate random effects models
• xtreg depvar indepvar1 indepvar2 …, re vce(robust)
• As for the fixed effects model, you need to specify xtset first
• xtset countries
• xtreg depvar indepvar1 indepvar2 …, robust re
• runs a regression with random effects by country
• Fixed and random effects can be included in the same model by including dummy variables
• An alternative that can also be used for multiple dimensions of random effects is xtmixed (outside our scope)
-
39
Testing for fixed vs. random effects
• The fixed effects model always gives consistent estimates whether the data generating process is fixed or random effects, but random effects is more efficient in the latter case
• The random effects model only gives consistent estimates if the data generating process is random effects
• Intuitively, if the random effects estimates are very close to the fixed effects estimates, then using random effects is probably an appropriate simplification
• If the estimates are very different, then fixed effects should be used
-
40
Testing for fixed vs. random effects (cont'd)
• The Hausman test exploits this intuition
• To run it:
• xtreg …, fe
• estimates store fixed
• xtreg …, re
• estimates store random
• hausman fixed random
• If the test statistic is large, reject the null hypothesis that random effects is an appropriate simplification
• Caution: the Hausman test has poor properties empirically, and you can only run it on fixed and random effects estimates that do not include the robust option
• The xtoverid test (after xtreg, re) should always be preferred to the Hausman test because it allows for cluster-robust standard errors
-
Content (I)
e. Binary dependent variable models in cross-section
41
-
e. Binary dependent variable models in cross-section
• Binary outcome
• Latent variable
• Linear probability model (LPM)
• Probit model
• Logit model
• Marginal effects
• Odds ratio in logit model
• Maximum likelihood (ML) estimation
• Rules of thumb
42
-
43
Binary outcome
• In many applications the dependent variable is not continuous but qualitative, discrete or mixed:
• Qualitative: car ownership (Y/N)
• Discrete: education degree (Ph.D., university degree, …, no education)
• Mixed: hours worked per day
• Here we focus on the case of a binary dependent variable
• Example with firm-level data: exporter status (Y/N)
-
44
Binary outcome (cont'd)
• Let y be a binary dependent variable:
y = 1 with probability p, 0 with probability 1 − p
• A regression model is formed by parametrizing the probability p to depend on a vector of explanatory variables x and a K × 1 parameter vector β
• Commonly, we estimate a conditional probability:
p_i = Pr[y_i = 1 | x] = F(x_i'β)   (1)
where F(·) is a specified function
-
45
Intuition for F(·): latent variable
• Imagine we wanted to estimate the effect of x on a continuous variable y*
• The "index function" model we would like to estimate is:
y_i* = x_i'β − ε_i
• However, we do not observe y* but only the binary variable y:
y = 1 if y* > 0, 0 otherwise
-
46
Intuition for F(·): latent variable (cont'd)
• There are two ways of interpreting y_i*:
1. Utility interpretation: y_i* is the additional utility that individual i would get by choosing y_i = 1 rather than y_i = 0
2. Threshold interpretation: ε_i is a threshold such that if x_i'β > ε_i, then y_i = 1
• The parametrization of p_i is:
p_i = Pr[y = 1 | x] = Pr[y* > 0 | x] = Pr[x'β − ε > 0 | x] = Pr[ε < x'β] = F[x'β]
where F(·) is the CDF of ε
-
47
Linear probability model (LPM)
• The LPM does not use a CDF, but rather a linear function for F(·)
• Therefore, equation (1) becomes:
p_i = Pr[y_i = 1 | x] = x_i'β
• The model is estimated by OLS with error term ε_i
• From basic probability theory, it should be the case that 0 ≤ p_i ≤ 1
• This is not necessarily the case in the LPM, because F(·) is not a CDF (which is bounded between 0 and 1)
• Therefore, one could estimate predicted probabilities p̂_i = x_i'β̂ that are negative or exceed 1
• Moreover, V(ε_i) = x_i'β(1 − x_i'β) depends on x_i
• Therefore, there is heteroskedasticity (standard errors need to be robust)
• However, the LPM provides a good guide to which variables are statistically significant
-
48
Probit model
• The probit model arises if F(·) is the CDF of the standard normal distribution, Φ(·)
• So Φ(x'β) = ∫_{−∞}^{x'β} φ(z) dz, where φ(·) ≡ Φ'(·) is the standard normal pdf
-
49
Logit model
• The logit model arises if F(·) is the CDF of the logistic distribution, Λ(·)
• So Λ(x'β) = e^{x'β} / (1 + e^{x'β})
-
50
Marginal effects
• For the model p_i = Pr[y_i = 1 | x] = F(x_i'β), the interest lies in estimating the marginal effect of the k-th regressor on p_i:
∂p_i/∂x_ki = F'(x_i'β)β_k
• In the LPM, ∂p_i/∂x_ki = β_k
• In the probit model, ∂p_i/∂x_ki = φ(x_i'β)β_k
• In the logit model, ∂p_i/∂x_ki = Λ(x_i'β)[1 − Λ(x_i'β)]β_k
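The probit and logit marginal-effect formulas can be evaluated numerically; the index value x'β = 0.5 and coefficient β_k = 0.8 below are hypothetical, chosen only for illustration:

```python
import math

xb, beta_k = 0.5, 0.8  # hypothetical index value and coefficient

# probit: phi(x'b) * beta_k, with phi the standard normal pdf
phi = math.exp(-xb ** 2 / 2) / math.sqrt(2 * math.pi)
me_probit = phi * beta_k

# logit: Lambda(x'b) * (1 - Lambda(x'b)) * beta_k
Lam = math.exp(xb) / (1 + math.exp(xb))
me_logit = Lam * (1 - Lam) * beta_k

print(round(me_probit, 4), round(me_logit, 4))
```

Note that both marginal effects depend on x'β, so unlike in the LPM they vary across observations.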
-
51
Odds ratio in logit model
• The odds ratio OR ≡ p/(1 − p) is the probability that y = 1 relative to the probability that y = 0
• An odds ratio of 2 indicates, for instance, that the probability that y = 1 is twice the probability that y = 0
• For the logit model:
p = e^{x'β}/(1 + e^{x'β})
OR = p/(1 − p) = e^{x'β}
ln(OR) = x'β
(the log-odds ratio is linear in the regressors)
• β_k is a semi-elasticity
• If β_k = 0.1, a one-unit increase in regressor k multiplies the odds ratio by exp(0.1) ≈ 1.105, i.e. it increases the odds ratio by roughly 10%
• See also here:
http://www.ats.ucla.edu/stat/stata/library/sg124.pdf
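This multiplicative interpretation is easy to verify (β_k = 0.1 and the starting index are hypothetical values for illustration):

```python
import math

beta_k = 0.1
xb_before = 1.0                 # hypothetical index x'beta
xb_after = xb_before + beta_k   # index after a one-unit increase in x_k

# in the logit model the odds ratio equals exp(x'beta)
odds_before = math.exp(xb_before)
odds_after = math.exp(xb_after)
ratio = odds_after / odds_before
print(round(ratio, 4))  # exp(0.1) ≈ 1.1052, a ~10% increase in the odds
```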
-
52
Maximum likelihood (ML) estimation
• Since y_i is Bernoulli distributed (y_i = 0, 1), the density (pmf) is:
f(y_i | x_i) = p_i^{y_i} (1 − p_i)^{1−y_i}
where p_i = F(x_i'β)
• Given independence over i, the log-likelihood is:
ln L(β) = Σ_{i=1}^N [ y_i ln F(x_i'β) + (1 − y_i) ln(1 − F(x_i'β)) ]
• There is no explicit solution for β̂_MLE, but if the log-likelihood is concave (as in probit and logit) the iterative procedure usually converges quickly
• There is no advantage in using the robust sandwich form of the VCV matrix unless F(·) is mis-specified
• If there is cluster sampling, standard errors should be clustered
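The log-likelihood formula is straightforward to evaluate by hand for the logit case; the tiny dataset below is made up for illustration and shows that the likelihood is higher at a slope near the data-generating pattern than at β = 0:

```python
import math

# hypothetical data: one regressor, no constant
xs = [-2.0, -1.0, 0.5, 1.0, 2.0]
ys = [0, 0, 1, 1, 1]

def loglik(beta):
    """Bernoulli log-likelihood with logit probabilities F(x*beta)."""
    ll = 0.0
    for x, y in zip(xs, ys):
        p = 1 / (1 + math.exp(-beta * x))
        ll += y * math.log(p) + (1 - y) * math.log(1 - p)
    return ll

# the log-likelihood at beta = 1 exceeds the one at beta = 0 for these data
print(loglik(1.0), loglik(0.0))
```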
-
53
Rules of thumb
• The different models yield different estimates β̂
• This is just an artifact of using different formulas for the probabilities
• It is meaningful to compare the marginal effects, not the coefficients
• At any event, the following rules of thumb apply:
β̂_Logit ≈ 4 β̂_LPM
β̂_Probit ≈ 2.5 β̂_LPM
β̂_Logit ≈ 1.6 β̂_Probit (or β̂_Logit ≈ (π/√3) β̂_Probit)
• The differences between probit and logit are negligible if the interest lies in the marginal effects averaged over the sample
-
Content (I)
f. Binary dependent variable models with panel data
54
-
f. Binary dependent variable models with panel data
• Individual-specific effects binary models
• Fixed effects logit
55
-
56
Individual-specific effects binary models
• With panel data (each individual i is observed T times), the natural extension of the cross-section binary models is:
p_it = Pr[y_it = 1 | x_it, β, α_i] = F(α_i + x_it'β), with
Λ(α_i + x_it'β) for the logit model
Φ(α_i + x_it'β) for the probit model
• Random effects estimation assumes that α_i ~ N(0, σ_α²)
-
57
Individual-specific effects binary models (cont'd)
• Fixed effects estimation is not possible for the probit model because there is an incidental parameters problem
• Estimating α_i (N of them) along with β leads to inconsistent estimates of the coefficients if T is finite and N → ∞ (this problem disappears as T → ∞)
• Unconditional fixed-effects probit models may be fit with the probit command with indicator variables for the panels. However, unconditional fixed-effects estimates are biased
• However, fixed effects estimation is possible with logit, using a conditional MLE based on a conditional density (which describes a subset of the sample, namely individuals that "change state")
-
58
Fixed effects logit
• A conditional ML can be constructed by conditioning on Σ_t y_it = c, where 0 < c < T
• The functional form of Λ(·) allows one to eliminate the individual effects and to obtain consistent estimates of β
• Notice that it is not possible to condition on Σ_t y_it = 0 or on Σ_t y_it = T
• Observations for which Σ_t y_it = 0 or Σ_t y_it = T are dropped from the likelihood function
• That is, only the individuals that "change state" at least once are included in the likelihood function
Example
• T = 3
• We can condition on Σ_t y_it = 1 (possible sequences {0,0,1}, {0,1,0} and {1,0,0}) or on Σ_t y_it = 2 (possible sequences {0,1,1}, {1,0,1} and {1,1,0})
• All individuals with sequences {0,0,0} and {1,1,1} are not considered
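The T = 3 enumeration above can be reproduced mechanically (a sketch, not from the slides), splitting the 2³ = 8 possible outcome sequences into those that enter the conditional likelihood and those that are dropped:

```python
from itertools import product

T = 3
sequences = list(product([0, 1], repeat=T))

# individuals who never "change state" contribute nothing to the conditional likelihood
dropped = [s for s in sequences if sum(s) in (0, T)]
# the remaining sequences, grouped by their total sum(y_it) = c with 0 < c < T
kept_by_total = {c: [s for s in sequences if sum(s) == c] for c in range(1, T)}

print(dropped)        # [(0, 0, 0), (1, 1, 1)]
print(kept_by_total)  # three sequences each for c = 1 and c = 2
```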
-
Content (I)
g. Binary dependent variable models: Examples of firm-level analysis
59
-
g. Binary dependent variable models: Examples of firm-level analysis
• Wakelin (1998)
• Aitken et al. (1997)
• Tomiura (2007)
60
-
61
Wakelin (1998)
• She uses a probit model to estimate the effects of size, average capital intensity, average wages, unit labour costs and innovation variables (exogenous variables) on the probability of exporting (dependent variable) of 320 UK manufacturing firms between 1988 and 1992
• Innovation variables include an innovating-firm dummy, the number of the firm's past innovations and the number of innovations used in the sector
• Non-innovative firms are found to be more likely to export than innovative firms of the same size…
• …However, the number of past innovations has a positive impact on the probability of an innovative firm exporting
-
62
Aitken et al. (1997)
• From a simple model of export behavior, they derive a probit specification for the probability that a firm exports
• The paper focuses on 2104 Mexican manufacturing firms between 1986 and 1990
• They find that locating near MNEs increases the probability of exporting
• Proximity to MNEs increases the export probability of domestic firms regardless of whether the MNEs serve local or export markets
• Region-specific factors, such as access to skilled labour, technology, and capital inputs, may also affect the probability of exporting
• The export probability is positively correlated with the capital-labor ratio in the region
-
63
Tomiura (2007)
• How are internal R&D intensity and external networking related to the firm's export decision?
• Data from 118,300 Japanese manufacturing firms in 1998
• Logit model for the probability of direct export
• The export decision is defined as a function of R&D intensity and networking characteristics, while also controlling for capital intensity, firm size, subcontracting status, and industrial dummies
• 4 measures of networking status: computer networking, subsidiary networking, joint business operation, and participation in a business association
-
Content (I)
h. Binary dependent variable models in Stata
64
-
h. Binary dependent variable models in Stata
• Limited dependent variable models in cross section
• Panel data applications
65
-
66
Limited dependent variable models in cross section
• Stata has two built-in models for dealing with binary dependent variables:
• probit depvar indepvar1 indepvar2 …, options
• logit depvar indepvar1 indepvar2 …, options
• Generally speaking, results from these two models are quite close. Except in special cases, there is no general rule to prefer one over the other
• Example: health insurance coverage
• See "lim_dep_var.do" and explanations therein
-
67
Panel data applications
• Probit and logit can both be estimated with random effects:
• To obtain probit and logit results with random effects by "id":
• xtset id
• xtprobit depvar indepvar1 indepvar2 …, re
• xtlogit depvar indepvar1 indepvar2 …, re
• Logit models can be consistently estimated with fixed effects, and should be preferred to probit in panel data settings
• To obtain logit results with fixed effects by "id":
• xtset id
• xtlogit depvar indepvar1 indepvar2 …, fe
• The "conditional logit" (clogit) estimation should be preferred, however, because it allows for cluster-robust standard errors
• Example: co-insurance rate and health services
• See "lim_dep_var_panel.do" and explanations therein
-
Content (I)
i. Count models
68
-
i. Count Models
• When are count models used?
• Poisson
• The first two moments of the Poisson distribution
• Poisson likelihood function
• Interpretation of coefficients
• Pseudo-Poisson ML
• Overdispersion in Poisson
• Negative Binomial (NB)
• NB: mixture density
• NB and overdispersion
69
-
70
When are count models used?
• Count data models are used to model the number of occurrences of an event in a given time period. Here y takes only nonnegative integer values 0, 1, 2, ...
• For example, count models can be used to model:
• The number of visits to a doctor a person makes in a year
• The number of patent applications by a firm in a year
-
71
Poisson
• The natural stochastic model for counts is a Poisson point process for the occurrence of the event of interest
• This implies a Poisson distribution for the number of occurrences of the event, with the following probability mass function:
Pr(yᵢ = y | xᵢ) = e^(−λᵢ) λᵢ^y / y!, with y = 0, 1, 2, …
• The standard assumption is that λᵢ = exp(xᵢ′β)
-
72
The first two moments of the Poisson distribution
• The first two moments of the distribution are
E[y] = λ
V[y] = λ
• This shows the equidispersion property (equality of mean and variance) of the Poisson distribution
• Because V[yᵢ|xᵢ] = exp(xᵢ′β), the Poisson regression is intrinsically heteroskedastic
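The equidispersion property can be verified numerically. A small Python sketch (illustrative, not part of the course material) computes the mean and variance directly from the pmf for an arbitrary λ:

```python
import math

def poisson_pmf(y, lam):
    """Pr(y_i = y | x_i) = e^(-lambda_i) * lambda_i^y / y!"""
    return math.exp(-lam) * lam ** y / math.factorial(y)

lam = 2.5  # stands in for exp(x_i'beta); the value is arbitrary
support = range(60)  # truncating the infinite support far in the tail
mean = sum(y * poisson_pmf(y, lam) for y in support)
var = sum((y - mean) ** 2 * poisson_pmf(y, lam) for y in support)
# Both moments equal lambda: equidispersion
print(round(mean, 6), round(var, 6))
```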
-
73
Poisson likelihood function
• The likelihood function is expressed as:
L(β) = ∏ᵢ₌₁ᴺ [ e^(−exp(xᵢ′β)) exp(xᵢ′β)^yᵢ / yᵢ! ]
• So the Poisson ML estimator β̂ maximises the following log-likelihood function:
ln L(β) = Σᵢ₌₁ᴺ { yᵢ xᵢ′β − exp(xᵢ′β) − ln yᵢ! }
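The log-likelihood above is straightforward to code. A Python sketch (illustrative; the sample and coefficient values are made up):

```python
import math

def poisson_loglik(beta, data):
    """ln L(beta) = sum_i { y_i x_i'beta - exp(x_i'beta) - ln y_i! }"""
    ll = 0.0
    for y, x in data:
        xb = sum(b * xk for b, xk in zip(beta, x))
        ll += y * xb - math.exp(xb) - math.log(math.factorial(y))
    return ll

# Tiny made-up sample of (y_i, x_i) pairs; x_i = (constant, regressor)
data = [(0, (1.0, 0.1)), (2, (1.0, 0.5)), (1, (1.0, 0.9))]
print(poisson_loglik((0.2, 0.3), data))
```

In practice the maximisation over β is left to the software (poisson in Stata); this function only evaluates the objective at a given β.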
-
74
Interpretation of the coefficients
• Marginal effects:
∂E[y|x]/∂xⱼ = βⱼ exp(x′β)
• If xⱼ is measured on a logarithmic scale, βⱼ is an elasticity
• Moreover, if βⱼ is twice as large as βₖ, then the effect of changing the jth regressor by one unit is twice that of changing the kth regressor by one unit
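A Python sketch of the marginal effect formula, with made-up coefficients, confirming that the ratio of two marginal effects equals the ratio of the corresponding coefficients:

```python
import math

def poisson_me(beta, x, j):
    """dE[y|x]/dx_j = beta_j * exp(x'beta)"""
    xb = sum(b * xk for b, xk in zip(beta, x))
    return beta[j] * math.exp(xb)

beta = (0.5, 0.2, 0.4)  # made-up coefficients: (constant, x1, x2)
x = (1.0, 2.0, 1.0)     # evaluation point
me1 = poisson_me(beta, x, 1)
me2 = poisson_me(beta, x, 2)
print(me2 / me1)  # ratio of marginal effects equals beta_2 / beta_1
```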
-
75
Pseudo-Poisson ML
• In the econometrics literature, pseudo-ML estimation refers to estimating by ML under possible misspecification of the density
• When doubt exists about the form of the variance function, the use of the pseudo-Poisson ML estimator is recommended
• Computationally this is essentially the same as Poisson ML, with the qualification that the variance matrix must be recomputed
-
76
Overdispersion in Poisson
• The Poisson regression model is usually too restrictive for count data. One of the most obvious problems is that the variance usually exceeds the mean, a feature called overdispersion. This has two consequences:
1. Large overdispersion leads to grossly deflated standard errors and thus grossly inflated t-statistics; hence it is important to use a robust variance estimator
2. In more complicated settings, such as with truncation and censoring, overdispersion leads to the more fundamental problem of inconsistency
• In practice, there is often overdispersion. One way of dealing with this issue is to use a Negative Binomial model
-
77
Negative Binomial (NB)
• A way to relax the equidispersion restriction is to allow for unexplained randomness:
μᵢ = λᵢνᵢ with νᵢ > 0, i.i.d. with density g(ν|α)
• The distribution of yᵢ conditional on xᵢ and νᵢ remains Poisson:
Pr(yᵢ = y | xᵢ, νᵢ) = e^(−μᵢ) μᵢ^y / y! = e^(−λᵢνᵢ) (λᵢνᵢ)^y / y!
-
78
NB: mixture density
• The marginal density of y, unconditional on ν but conditional on λ and α, is obtained by integrating out ν. This yields:
h(y|λ, α) = ∫ f(y|λ, ν) g(ν|α) dν
• There is a closed-form solution if:
1. f(y|λν) is the Poisson density
2. g(ν) = ν^(δ−1) e^(−νδ) δ^δ / Γ(δ), with δ > 0 and Γ(⋅) the gamma function
• With E[ν] = 1 and V[ν] = 1/δ, and after some calculations (setting α = 1/δ), we obtain the negative binomial as a mixture density:
h(y|λ, α) = [Γ(α⁻¹ + y) / (Γ(α⁻¹) Γ(y + 1))] (α⁻¹/(α⁻¹ + λ))^(α⁻¹) (λ/(α⁻¹ + λ))^y
-
79
NB and overdispersion
• The first two moments of the negative binomial distribution are
E[y|λ, α] = λ
V[y|λ, α] = λ(1 + αλ)
• Here the variance exceeds the mean, since α > 0 and λ > 0. This model therefore allows for overdispersion
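These two moments can be checked against the mixture density itself. A Python sketch (illustrative values of λ and α) sums the NB2 pmf over its support:

```python
import math

def nb_pmf(y, lam, alpha):
    """NB2 mixture density h(y | lambda, alpha), computed in logs for stability."""
    a_inv = 1.0 / alpha
    logp = (math.lgamma(a_inv + y) - math.lgamma(a_inv) - math.lgamma(y + 1)
            + a_inv * math.log(a_inv / (a_inv + lam))
            + y * math.log(lam / (a_inv + lam)))
    return math.exp(logp)

lam, alpha = 2.0, 0.5  # made-up parameter values
support = range(200)   # truncating the infinite support far in the tail
total = sum(nb_pmf(y, lam, alpha) for y in support)
mean = sum(y * nb_pmf(y, lam, alpha) for y in support)
var = sum((y - mean) ** 2 * nb_pmf(y, lam, alpha) for y in support)
# mean ~ lam = 2.0; var ~ lam*(1 + alpha*lam) = 4.0 > mean: overdispersion
print(round(mean, 4), round(var, 4))
```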
-
Content (I)
j. Count models in Stata
80
-
j. Count models in Stata
• In Stata, use the command poisson to run a Poisson regression and xtpoisson when using Poisson with panel data for which you want to apply fixed effects, random effects, etc.
• Always use the option vce(robust) to obtain robust standard errors
• Use the command nbreg to run a Negative Binomial regression and xtnbreg when using the Negative Binomial with panel data for which you want to apply fixed effects, random effects, etc.
• Again, use the option vce(robust) or vce(cluster clustvar) for nbreg…
• …while for xtnbreg you can only use bootstrap standard errors if you do not want the default standard errors
81
-
Content (II)
k. Censoring and truncation
l. Tobit (censored regression) model
m. Alternative estimators for censored regression models
n. Endogeneity
o. Instrumental variables
p. Instrumental variables in practice
q. Endogeneity: example with firm-level analysis
r. Instrumental variables models in Stata
s. Sample selection models
t. Sample selection: An example with firm-level analysis
u. Sample selection models in Stata
82
-
Content (II)
k. Censoring and truncation
82
-
k. Censoring and truncation
• Censoring
• Truncation
83
-
84
Censoring
• We want to estimate the effect of x on a continuous variable y* (latent dependent variable)
• We always observe x but we observe the dependent variable only above a lower threshold L (censoring from below) or below an upper threshold U (censoring from above)
• Censoring from below (or left):
y = y* if y* > L; y = L if y* ≤ L
• Example: exports by firm i are equal to the export value if the export value exceeds L, or equal to L if the export value is lower than L
• Censoring from above (or right):
y = y* if y* < U; y = U if y* ≥ U
• Example: recorded exports are top-coded at U. Exports by firm i are equal to the export value if the export value is below U, or equal to U if the export value is above U
-
85
Truncation
• We want to estimate the effect of x on a continuous variable y* (latent dependent variable)
• Truncation from below (or left):
y = y* if y* > L
• All information below L is lost
• Example: exports by firm i are reported only if the export value is larger than L
• Truncation from above (or right):
y = y* if y* < U
• All information above U is lost
• Example: in a consumer survey, only low-income individuals are sampled
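The difference between the two sampling schemes can be illustrated with a toy example in Python (hypothetical numbers): censoring keeps every observation but clips it at the threshold, while truncation drops it altogether:

```python
L = 0.0  # lower threshold

latent = [-1.2, -0.3, 0.4, 1.5, 2.1]  # hypothetical draws of y*

# Censoring from below: y = y* if y* > L, y = L if y* <= L (all obs kept)
censored = [y if y > L else L for y in latent]

# Truncation from below: y = y* if y* > L (obs at or below L are lost)
truncated = [y for y in latent if y > L]

print(censored)   # [0.0, 0.0, 0.4, 1.5, 2.1]
print(truncated)  # [0.4, 1.5, 2.1]
```

With censoring the sample size is preserved and the pile-up at L is visible in the data; with truncation the information below L is simply gone.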
-
Content (II)
l. Tobit (censored regression) model
86
-
l. Tobit (censored regression) model
• Assumptions and estimation
• Why OLS estimation is inconsistent
• Marginal effects (ME) in Tobit
• Problems with Tobit
• Tobit model with panel data
• Example: academic attitude
87
-
88
Assumptions and estimation
y* = x′β + ε
where ε ∼ N(0, σ²)
• This implies that the latent variable is also normally distributed: y* ∼ N(x′β, σ²)
• We observe:
y = y* if y* > 0; y = 0 if y* ≤ 0
• The Tobit estimator is an MLE, where the log-likelihood function is detailed, for instance, in Cameron and Trivedi (2005)
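The Tobit log-likelihood referred to above has a simple closed form. A Python sketch (illustrative, written from the standard censored-at-zero formula rather than from the course do-files; the data and coefficients are made up):

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def tobit_loglik(beta, sigma, data):
    """Censored-at-zero Tobit log-likelihood:
    y_i > 0 contributes ln[(1/sigma) * phi((y_i - x_i'beta)/sigma)],
    y_i = 0 contributes ln[Phi(-x_i'beta/sigma)]."""
    ll = 0.0
    for y, x in data:
        xb = sum(b * xk for b, xk in zip(beta, x))
        if y > 0:
            ll += math.log(norm_pdf((y - xb) / sigma) / sigma)
        else:
            ll += math.log(norm_cdf(-xb / sigma))
    return ll

# Toy sample of (y_i, x_i) pairs; x_i = (constant, regressor)
data = [(0.0, (1.0, -1.0)), (0.7, (1.0, 0.5)), (1.9, (1.0, 1.2))]
print(tobit_loglik((0.1, 1.0), 1.0, data))
```

Stata's tobit command maximises this objective over β and σ.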
-
89
Why OLS estimation is inconsistent
1. OLS estimation on the sample of positive observations:
E[y|x] = E[y*|x, y* > 0] = x′β + E[ε|x, ε > −x′β]
• Under the normality assumption ε|x ∼ N(0, σ²), the second term becomes σλ(x′β/σ), where λ(z) ≡ φ(z)/Φ(z) is the inverse Mills ratio
• If we run an OLS regression on the sample of positive observations, then we should also include in the regression the term λ(x′β/σ)
• A failure to do so will result in an inconsistent estimate of β due to omitted variable bias (λ(⋅) and x are correlated in the selected sub-population)
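A Python sketch of the inverse Mills ratio term (the values of σ and x′β are made up for illustration):

```python
import math

def inv_mills(z):
    """lambda(z) = phi(z) / Phi(z)"""
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return phi / Phi

# Omitted term in the truncated mean: E[eps | eps > -x'beta] = sigma * lambda(x'beta / sigma)
sigma, xb = 2.0, 1.0  # made-up values
print(sigma * inv_mills(xb / sigma))
```

λ(z) is strictly positive and decreasing in z, so the omitted term shrinks as x′β grows: the selection correction matters most for observations close to the censoring point.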
-
90
Why OLS estimation is inconsistent (ct'd)
2. OLS estimation on the censored sample (zero and positive observations):
E[y|x] = Pr(y* > 0) × E[y*|x, y* > 0] = Pr(ε > −x′β) {x′β + E[ε|ε > −x′β]}
• Under the normality assumption ε ∼ N(0, σ²), the first term is Φ(x′β/σ) and the term in curly brackets is the same as in the previous slide
• There is no way to consistently estimate β in a linear regression
-
91
Marginal effects (ME) in Tobit
• For the latent variable:
∂E[y*|x]/∂xⱼ = βⱼ (1)
• This is the marginal effect of interest if censoring is just an artifact of data collection (for instance, a top- or bottom-coded dependent variable)
• In a model of hours worked, (1) is the effect on the desired hours of work
• Two other marginal effects can be of interest:
1. ME on actual hours of work for workers: ∂E[y|y > 0, x]/∂xⱼ
2. ME on actual hours of work for workers and non-workers: ∂E[y|x]/∂xⱼ
• The latter is equal to Φ(x′β/σ)βⱼ and can be decomposed in two parts:
• Effect on the conditional mean in the uncensored part of the distribution
• Effect on the probability that an observation will be positive (not censored)
-
92
Problems with Tobit
• Consistency crucially depends on normality and homoskedasticity of the errors (and of the latent variable)
• The structure is too restrictive: exactly the same variables affecting the probability of a non-zero observation determine the level of a positive observation and, moreover, with the same sign
• There are many examples in economics where this implication does not hold
• For instance, the intensive and extensive margins of exporting may be affected by different variables
-
93
Tobit model with panel data
• With panel data (each individual i is observed t times), the natural extension of the Tobit model is:
y*ᵢₜ = αᵢ + x′ᵢₜβ + εᵢₜ
where εᵢₜ ∼ N(0, σ²) and we observe:
yᵢₜ = y*ᵢₜ if y*ᵢₜ > 0; yᵢₜ = 0 if y*ᵢₜ ≤ 0
• Due to the incidental parameters problem, fixed effects estimation of β is inconsistent, and there is no simple differencing or conditioning method
• Honoré's semiparametric (trimmed LAD) estimator (pantob in Stata)
• Random effects estimation assumes that αᵢ ∼ N(0, σ²_α) (xttobit, re in Stata)
http://ideas.repec.org/a/ecm/emetrp/v60y1992i3p533-65.html
-
94
Example: academic attitude
• Hypothetical data file, with 200 observations
• The academic aptitude variable is apt; the reading and math test scores are read and math respectively
• The variable prog is the type of program the student is in; it is a categorical (nominal) variable that takes on three values: academic (prog = 1), general (prog = 2), and vocational (prog = 3)
• apt is right-censored:
• summarize apt, detail
• histogram apt, discrete freq
• Tobit model with right-censoring at 800:
• tobit apt read math i.prog, ul(800) vce(robust)
http://www.ats.ucla.edu/stat/stata/dae/tobit.htm
-
Content (II)
m. Alternative estimators for censored regression models
95
-
m. Alternative estimators for censored regression models
• Two semi-parametric methods:
1. Censored least absolute deviations (CLAD)
• Based on the conditional median (clad in Stata)
2. Symmetrically censored least squares (SCLS)
• Based on the symmetrically trimmed mean (scls in Stata)
96
-
Content (II)
n. Endogeneity
97
-
n. Endogeneity
• Definition and sources of endogeneity
• Inconsistency of OLS
• Example with omitted variable bias
98
-
99
Definition and sources of endogeneity
• A regressor is endogenous when it is correlated with the error term
• Leading examples of endogeneity:
a) Reverse causality
b) Omitted variable bias
c) Measurement error bias
d) Sample selection bias
• In case a), there is a two-way causal effect between y and x. Since x depends on y, x is correlated with the error term (endogenous)
• In case b), the omitted variable is included in the error term. If x is correlated with the omitted variable, it is correlated with the error term (endogenous)
• In case c), under the classical errors-in-variables (CEV) assumption (measurement error uncorrelated with the unobserved variable but correlated with the observed-with-error one), the observed-with-error variable is correlated with the error term (endogenous)
-
100
Inconsistency of OLS
• In the model:
y = Xβ + u (1)
the OLS estimator of β is consistent if the true model is (1) and if plim N⁻¹X′u = 0
• Then:
plim β̂ = β + plim(N⁻¹X′X)⁻¹ plim(N⁻¹X′u) = β
• If, however, plim N⁻¹X′u ≠ 0 (endogeneity), the OLS estimator of β is inconsistent
• The direction of the bias depends on whether the correlation between X and u is positive (upward bias, plim β̂ > β) or negative (plim β̂ < β)
-
101
Example with omitted variable bias
• The true model is: y = x′β + zα + ε
• The estimated model is: y = x′β + v, where v = zα + ε
• From OLS estimation:
plim β̂ = β + δα
where δ = plim[(N⁻¹X′X)⁻¹ (N⁻¹X′z)]
• If δ ≠ 0 (the omitted variable is correlated with the included regressors), the basic OLS assumption that the error term and the regressors are uncorrelated is violated, and the OLS estimator of β will be inconsistent (omitted variable bias)
-
102
Example with omitted variable bias (ct'd)
• The direction of the omitted variable bias can be established, knowing which variable is being omitted, how it is correlated with the included regressor, and how it may affect the LHS variable
• If the correlation between the omitted variable and the included regressor is positive (δ > 0) and the effect of the omitted variable on y (α) is supposedly positive, then δα > 0 and the bias is positive
• β is overestimated
• The same is true if both δ and α are negative
• If δ and α have opposite signs, the bias is negative
• β is underestimated
-
103
Example with omitted variable bias (ct'd)
• Standard textbook example: returns to schooling
• We want to estimate the effect of schooling on earnings
• We omit the variable "ability", on which we do not have information…
• …But ability is positively correlated with schooling
• OLS regression will yield inconsistent parameter estimates
• Since ability should positively affect earnings, the omitted variable bias is positive
• OLS of earnings on schooling will overstate the effect of education on earnings
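The direction of the bias can be illustrated by simulation. A Python sketch with made-up parameter values (δ ≈ 0.6 and α = 0.8, so the probability limit is β + δα = 1.48 rather than the true β = 1):

```python
import random

random.seed(42)

# True model: y = beta*x + alpha*z + eps, with omitted z correlated with x
beta_true, alpha, delta, n = 1.0, 0.8, 0.6, 20000
x = [random.gauss(0, 1) for _ in range(n)]
z = [delta * xi + random.gauss(0, 1) for xi in x]
y = [beta_true * xi + alpha * zi + random.gauss(0, 1) for xi, zi in zip(x, z)]

# OLS of y on x alone (z omitted): slope = sample cov(x, y) / var(x)
mx, my = sum(x) / n, sum(y) / n
beta_hat = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))

# plim beta_hat = beta + delta*alpha = 1.48 > 1: upward omitted variable bias
print(round(beta_hat, 2))
```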
-
Content (II)
o. Instrumental variables
104
-
o. Instrumental variables
• Visual representation
• Definition of an instrument
• Examples of instrumental variables
• Example with market demand and supply
• Instrumental variables in multiple regression
• Identification issues
• The instrumental variable (IV) estimator
• IV estimator as two-stage least squares (2SLS)
105
-
106
Visual representation
• OLS is consistent if the regressor x is uncorrelated with the error term
• OLS is inconsistent if the regressor x is correlated with the error term
• We need a method to generate exogenous variation in x
• A randomized experiment is the first best…
• …In the absence of a randomized experiment, we can use an instrument z that has the property that changes in z are associated with changes in x but do not lead to changes in y (other than through x)
-
107
Definition of an instrument
• A variable z is called an instrument (or instrumental variable) for regressor x in the scalar regression model y = xβ + u if:
a) z is uncorrelated with u
b) z is correlated with x
• Condition a) excludes z from directly affecting y
• If this were not the case, z would be in the error term in a regression of y on x, and therefore z would be correlated with the error term
• The instrument should not affect y directly, but only indirectly, through its effect on x
-
108
Examples of instrumental variables
• In the returns to schooling example, good candidates for z (uncorrelated with ability, not directly affecting earnings, and correlated with schooling) are proximity to college and month of birth
• In a gravity estimation of the effect of trading time on trade, an instrument is needed that is correlated with trading time and that affects trade only indirectly, through its impact on trading time
• Number of administrative formalities (documents)
• Trading times in neighboring countries
• In a gravity estimation of the effect of contract enforcement on trade, an instrument is needed that is correlated with contract enforcement and that affects trade only indirectly, through its impact on contract enforcement
• Settlers' mortality
-
109
Example with market demand and supply
• The IV method was originally developed to estimate the demand elasticity for agricultural goods, for example milk:
ln Qₜ = β₀ + β₁ ln Pₜ + uₜ
• OLS regression of ln Qₜ on ln Pₜ suffers from endogeneity bias
• Price and quantity are simultaneously determined by the interaction of demand and supply
-
110
Example with market demand and supply (ctβd)
• The interaction between demand and supply produces a cloud of equilibrium price-quantity points that is not useful for the purpose of estimating the price elasticity of demand
-
111
Example with market demand and supply (ctβd)
• But, what if only supply shifts?
• The instrument z is a variable that affects supply but not demand
• The IV method estimates the elasticity of the demand curve by isolating shifts in price and quantity that arise from shifts in supply
-
112
Example with market demand and supply (ctβd)
• An ideal candidate for z is rainfall in dairy-producing regions:
a) We can reasonably assume that rainfall in dairy-producing regions does not directly affect the demand for milk (exogeneity condition)
b) We can reasonably assume that insufficient rainfall lowers the food available to cows, and milk production as a consequence (relevance condition)
-
113
Instrumental variables in multiple regression
• Consider the general regression model:
y = x′β + u
where x is K × 1
• Some components of x are endogenous (endogenous regressors): x₁
• Some components of x are exogenous (exogenous regressors): x₂
• Partition x as [x₁′ x₂′]′
• Instruments are needed for the endogenous regressors (in x₁), while exogenous regressors (in x₂) can be instruments for themselves
• Assume we have a vector of instruments z₁ that satisfies the conditions for being a good instrument
• We can then use the R × 1 vector z = [z₁′ x₂′]′ as an instrument for x = [x₁′ x₂′]′
-
114
Identification issues
• Identification requires R ≥ K (the number of instruments must be at least equal to the number of endogenous regressors)
• If R = K, the model is just-identified
• For instance, there are two endogenous variables and two instruments
• If R > K, the model is overidentified
• For instance, there is one endogenous variable and two instruments
• Overidentification is desirable because only if the model is overidentified can one test for the instruments' exogeneity and excludability
• This is Hansen's J test (see below)
-
115
The instrumental variable (IV) estimator
• For the general model y = Xβ + u, where X contains endogenous regressors, construct the matrix of instruments Z
• For Z to be valid, it must be that:
a) plim N⁻¹Z′X = Σ_ZX, a finite matrix of full rank
b) plim N⁻¹Z′u = 0
• Premultiply by Z′ and apply GLS to obtain:
β̂_IV = (X′P_Z X)⁻¹ X′P_Z y (2)
where P_Z = Z(Z′Z)⁻¹Z′ is Z's projection matrix
• In the just-identified case, β̂_IV = (Z′X)⁻¹ Z′y (3)
-
116
The instrumental variable (IV) estimator (ct'd)
• The IV estimator is consistent
• Take the just-identified case:
β̂_IV = (Z′X)⁻¹ Z′(Xβ + u) = β + (N⁻¹Z′X)⁻¹ (N⁻¹Z′u)
• Under assumptions a) and b) in the previous slide, the IV estimator is consistent
• The asymptotic VCV matrix is given in Cameron and Trivedi (2005), in expression 4.55 (p. 102) for the estimator in (2) and in expression 4.52 (p. 101) for the estimator in (3)
-
117
IV estimator as two-stage least squares (2SLS)
• The IV estimator in (2) can be seen as the result of a double application of least squares:
1. Regress each of the variables in the X matrix on Z, and obtain a matrix of fitted values X̂:
X̂ = P_Z X
2. Regress y on X̂ to obtain:
β̂_2SLS = (X̂′X̂)⁻¹ X̂′y = (X′P_Z X)⁻¹ X′P_Z y = β̂_IV
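The algebraic equivalence of IV and 2SLS in the just-identified case can be verified on toy data. A Python sketch (made-up numbers, scalar case):

```python
def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

# Made-up data: one endogenous regressor x, one instrument z
z = [0.0, 1.0, 2.0, 3.0, 4.0]
x = [0.5, 1.4, 2.2, 3.1, 4.3]
y = [1.0, 2.1, 2.9, 4.2, 5.1]

# Direct IV estimator (just-identified): beta_IV = cov(z, y) / cov(z, x)
beta_iv = cov(z, y) / cov(z, x)

# 2SLS: first stage of x on z, then regress y on the fitted values x_hat
pi_hat = cov(z, x) / cov(z, z)
x_hat = [mean(x) + pi_hat * (zi - mean(z)) for zi in z]
beta_2sls = cov(x_hat, y) / cov(x_hat, x_hat)

print(beta_iv, beta_2sls)  # the two slopes coincide
```

Because the fitted value x_hat is an exact linear function of z, the second-stage slope collapses to cov(z, y)/cov(z, x), which is the direct IV formula.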
-
118
IV estimator as two-stage least squares (2SLS) (ct'd)
• Intuitively, the first stage "cleanses" the endogeneity from the variables we are worried about. By using predicted values based on genuinely exogenous variables only, we obtain the exogenous part of their variation
• Consider the example of milk demand:
1. The predicted value of an OLS regression of milk price on rainfall is the milk price that isolates changes in price due to the supply side of the economy (partially, at least)
2. OLS regression of milk quantity on the predicted milk price is the regression counterpart of using shifts in the supply curve to identify the demand curve
• In practice, avoid doing the two stages manually
• You will get incorrect standard errors (too small), and you might mistakenly exclude exogenous variables from the main model
• The IV estimator can also be derived using GMM (one-step, not optimal GMM)
-
Content (II)
p. Instrumental variables in practice
119
-
p. Instrumental variables in practice
• Overview
• Weak instruments
• Specification tests
• Testing for endogeneity
• Testing for overidentifying restrictions
• Summing up
120
-
121
Overview
• IV estimators are less efficient than the OLS estimator
• They are biased in finite samples, even if asymptotically consistent
• This finite-sample bias is present even in relatively large samples
• Most importantly, in the presence of weak instruments, IV estimation can actually produce worse (more inconsistent) results than simple OLS, even in large samples
• So the first step in testing must be to ensure that the instruments are strongly enough correlated with the potentially endogenous variables
• Specification tests for endogeneity and overidentifying restrictions exist, but they have limitations
• In particular, the test of overidentifying restrictions cannot be carried out in a just-identified regression
• If instruments are weak, these tests can produce misleading results
-
122
Weak instruments
• The weak-instruments problem arises when the correlations between the endogenous regressors and the excluded instruments are non-zero but small
• The weak-instruments problem can arise even when the correlations between x and z are significant at conventional levels (5% or 1%) and the researcher is using a large sample
• Under weak instruments, even mild endogeneity of the instrument can lead to IV parameter estimates that are much more inconsistent than OLS
• Example with one endogenous regressor (y = xβ + u), one instrument (z) and iid errors:
[plim β̂_IV − β] / [plim β̂_OLS − β] = [corr(z, u) / corr(x, u)] × [1 / corr(z, x)]
• Thus, with an invalid instrument (corr(z, u) ≠ 0) and low corr(z, x), the IV estimator can be even more inconsistent than OLS
-
123
Weak instruments (ctβd)
• Informal rules of thumb exist to detect weak-instruments problems
• Partial R²
• Partial F statistic (F test of the excluded instruments in the corresponding first-stage regression): Staiger and Stock's rule of thumb is that it should be greater than 10
• More formal criteria: Stock-Yogo weak instruments tests (H₀: instruments are weak)
• Comparison of the bias of OLS and the bias of IV
• Wald test
• Anderson-Rubin and Stock-Wright tests
• Null hypothesis that the coefficients of the endogenous regressors in the structural equation are jointly equal to zero (and that the overidentifying restrictions are valid)
• These tests are robust to the presence of weak instruments
-
124
Low precision
• Although IV estimation can lead to consistent estimation when OLS is inconsistent, it also leads to a loss in precision
• Example with one endogenous regressor, one instrument and iid errors:
V[β̂_IV] = V[β̂_OLS] / r²_xz
where r²_xz is the squared sample correlation between x and z
• The IV estimator has a larger variance unless corr(x, z) = 1
• If the squared sample correlation coefficient between z and x is 0.1, the IV variance is 10 times that of OLS
• Therefore, weak instruments exacerbate the loss in precision
-
125
Testing for endogeneity
• Endogeneity test: is there evidence that the correlation between the potentially endogenous variables and the error term is strong enough to result in substantively biased OLS estimates?
• We can test for the endogeneity of suspect independent variables using a Hausman test
• Consider the model
y = x₁′β₁ + x₂′β₂ + u (4)
where x₁ is potentially endogenous and x₂ is exogenous
• The Hausman test of endogeneity can be calculated by testing γ = 0 in the augmented OLS regression
y = x₁′β₁ + x₂′β₂ + x̂₁′γ + u (5)
or (equivalently) in the augmented OLS regression
y = x₁′β₁ + x₂′β₂ + v̂₁′γ + u (6)
-
126
Testing for endogeneity (ctβd)
• In equation (5), x̂₁ is the predicted value of the endogenous regressors x₁ from an OLS regression of x₁ on the instruments z
• In equation (6), v̂₁ is the residual from an OLS regression of x₁ on the instruments z
• Intuitively, if the error term u in equation (4) is uncorrelated with x₁ and x₂, then γ = 0
• If, instead, the error term u in equation (4) is correlated with x₁, this will be picked up by the significance of additional transformations of x₁, such as x̂₁ (equation 5) or v̂₁ (equation 6)
• Rejection of the null hypothesis H₀: γ = 0 indicates endogeneity
-
127
Testing for overidentifying restrictions
• The instruments must be exogenous for the IV estimator to be consistent. For overidentified models, a test of the instruments' exogeneity is possible
• This is Hansen's J test of overidentifying restrictions
• Derivations are based on GMM theory and can be found in Baum et al. (2003), pp. 16-18
• Intuitively, in the model y = x′β + u, instruments z are valid if E[u|z] = 0 or if E[zu] = 0
• A test of H₀: E[zu] = 0 is naturally based on departures of N⁻¹ Σᵢ zᵢûᵢ from zero
• In the just-identified case, IV solves N⁻¹ Σᵢ zᵢûᵢ = 0, so this test is not useful
• In the overidentified case, Hansen's J test is: J = û′Z Ŝ⁻¹ Z′û
where û comes from optimal GMM estimation and Ŝ is a weighting matrix
-
128
Testing for overidentifying restrictions (ctβd)
β’ This is an extension of the Sargan test that is robust to heteroskedasticity and clustering
β’ Large J leads to rejection of the null hypothesis that the instruments satisfy orthogonality conditions
β’ This may be due to:
β’ Endogeneity: instruments are correlated with the main equation errors because there is feedback running from the dependent variable to the instruments; and/or
β’ Non-excludability: the instruments should appear in the main regression, and the test is effectively picking up an omitted variables problem
β’ Hansenβs J should not reject the null for instruments to be exogenous and excludable
β’ An important limitation of the J test is that it requires that the investigator believes that at least some instruments are valid
β’ You can also test for subsets of overidentifying restrictions
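The mechanics of the J statistic can be sketched with simulated data. Again this is an illustrative Python/numpy version (invented names and parameters), not the Stata implementation used in the course:

```python
import numpy as np

# Hansen's J test with one endogenous regressor and two instruments
# (overidentified by one). J = N * gbar' S^-1 gbar, with gbar = Z'u_hat / N
# and S a heteroskedasticity-robust estimate of the moment covariance.
rng = np.random.default_rng(2)
n = 2000
z1, z2 = rng.normal(size=n), rng.normal(size=n)
e = rng.normal(size=n)
x = 0.5 * z1 + 0.5 * z2 + e                # endogenous regressor
u = 0.3 * e + rng.normal(size=n)           # error correlated with x
y = 1.0 * x + u                            # both instruments valid here

def hansen_j(y, X, Z):
    """Two-step GMM J statistic for the model y = X b + u with instruments Z."""
    n = len(y)
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)            # projection on instruments
    b1 = np.linalg.solve(X.T @ Pz @ X, X.T @ Pz @ y)  # 2SLS (first step)
    u1 = y - X @ b1
    S = (Z * (u1**2)[:, None]).T @ Z / n              # robust moment covariance
    b2 = np.linalg.solve(X.T @ Z @ np.linalg.solve(S, Z.T @ X),
                         X.T @ Z @ np.linalg.solve(S, Z.T @ y))  # efficient GMM
    gbar = Z.T @ (y - X @ b2) / n
    return n * gbar @ np.linalg.solve(S, gbar)

Z = np.column_stack([np.ones(n), z1, z2])
X = np.column_stack([np.ones(n), x])
J = hansen_j(y, X, Z)
print(J)   # compare with a chi2(1) critical value (df = 3 instruments - 2 regressors)
```

If one instrument is not excludable (say z2 also enters y directly), J grows with the sample size: this is exactly the omitted-variables channel described above.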
-
129
Summing up
β’ Always test for instrument relevance first
β’ If instruments are weak, the cure (IV) can be much worse than the disease (inconsistency of OLS)
β’ Use the Hausman test to assess the extent to which endogeneity is really a problem
β’ If at all possible, ensure the model is overidentified, and test exogeneity and excludability via Hansenβs J
β’ If the model passes all of these tests, then it should provide a reasonable guide for causal inference
-
Content (II)
k. Censoring and truncation
l. Tobit (censored regression) model
m. Alternative estimators for censored regression models
n. Endogeneity
o. Instrumental variables
p. Instrumental variables in practice
q. Endogeneity: example with firm-level analysis
r. Instrumental variables models in Stata
s. Sample selection models
t. Sample selection: An example with firm-level analysis
u. Sample selection models in Stata
130
-
Content (II)
k. Endogeneity: example with firm-level analysis
130
-
q. Endogeneity: example with firm-level analysis
β’ Topalova and Khandelwal (2011)
131
-
132
The problem
β’ The authors are interested in the effect of trade reform on firm-level productivity in a sample of Indian firms
β’ Endogeneity concerns for the productivity effect of trade policy:
β’ Governments may reduce tariffs only after domestic firms have improved productivity, which would result in a spurious relationship between trade and productivity
β’ Selective protection of industries (tariffs may be adjusted in response to industry productivity levels)
β’ If policy decisions on tariff changes across industries were indeed based on expected future productivity or on industry lobbying, isolating the impact of the tariff changes would be difficult. Simply comparing productivity in liberalized industries to productivity in non-liberalized industries would possibly give a spurious correlation between total factor productivity (TFP) growth and trade policies
-
133
The solution
β’ Since 1991, over a short period of time, India drastically reduced tariffs and narrowed the dispersion in tariffs across sectors
• Since the reform was rapid, comprehensive, and externally imposed (by the IMF), it is reasonable to assume that the changes in the level of protection were unrelated to firm- and industry-level productivity
• However, at the time the government announced the export-import policy in the Ninth Plan (1997-2002), the sweeping reforms outlined in the previous plan had been undertaken and pressure for further reforms from external sources had abated
β’ More difficult to isolate the causal impact of tariff changes
-
134
The solution (ctβd)
β’ The authors address the concern of possible endogeneity of trade policy in 3 ways:
1. Examining the extent to which tariffs moved together
β’ Tariff movements were uniform until 1997 and less uniform afterwards, indicating a more pronounced problem of endogenous trade protection in the second period
2. Testing whether protection correlates with industry characteristics (employment, output, average wage, concentration etc.)
β’ No statistical correlation (indication of exogeneity)
β¦
-
135
The solution (ctβd)
β¦
3. Investigating whether policymakers adjusted tariffs in response to industry's productivity levels
• The correlation between future trade protection and current productivity is indistinguishable from zero for the 1989-96 period
• The pattern, however, is quite different for the 1997-2001 period. Here, the coefficient on current productivity is negative and significant, suggesting that trade policy may have been adjusted to reflect industries' relative performance
-
136
The solution (ctβd)
• These tests lead to the conclusion that trade policy was not endogenously determined during the first period
• The 1991 liberalization episode in India therefore provides a good setting for examining the causal effects of trade reform on firm-level productivity
-
137
Results
• The main result: a 10% reduction in tariffs leads to about a 0.5% increase in firm TFP. Decreasing trade protection in the form of lower tariffs raises productivity at the firm level
β’ There are two forces driving this finding
1. Increases in competition resulting from lower output tariffs caused firms to increase their efficiency
2. The trade reform lowered tariffs on inputs, which led to an increase in the number and volume of imported inputs
β’ The larger impact appears to have come from increased access to foreign inputs. Thus, Indiaβs break from import substitution policies not only exposed these firms to competitive pressures, but more importantly, relaxed the technological constraint on production
-
138
Results (ctβd)
• Melitz (2003) has shown that trade liberalization may result in a reallocation from low- to high-productivity firms, which would increase average productivity because of selection
β’ Re-estimating the equation only for the set of companies in operation in 1996, the positive impact of tariff reductions on productivity levels is virtually unchanged
β’ This constitutes some mild evidence against the selection channel
β’ While the exit of less efficient companies might contribute to productivity improvements, it does not drive the results within this sample
http://web.stanford.edu/~klenow/Melitz.pdf
-
Content (II)
k. Censoring and truncation
l. Tobit (censored regression) model
m. Alternative estimators for censored regression models
n. Endogeneity
o. Instrumental variables
p. Instrumental variables in practice
q. Endogeneity: example with firm-level analysis
r. Instrumental variables models in Stata
s. Sample selection models
t. Sample selection: An example with firm-level analysis
u. Sample selection models in Stata
139
-
Content (II)
k. Instrumental variables models in Stata
139
-
r. Instrumental variables models in Stata
β’ Cross section
β’ Panel data
β’ References
140
-
141
Cross-section
β’ Stata has a built in command for instrumental variables regression, ivregress
• ivregress 2sls depvar indepvar1 indepvar2 (endogvar1 endogvar2 = iv1 iv2 iv3 …), first options
β’ There is a user-developed extension with a number of