
What Works? Evaluating the Impact of Active Labor Market Policies

May 2010, Budapest, Hungary

Joost de Laat (PhD), Economist, Human Development

Outline

• Why Evidence-Based Decision Making?

• Active Labor Market Policies: Summary of Findings

• Where is the Evidence? The Challenge of Evaluating Program Impact

• Ex Ante and Ex Post Evaluation


Why Evidence-Based Decision Making?

• Limited resources to address needs

• Multiple policy options to address needs

• Rigorous evidence often lacking to prioritize policy options and program elements


Active Labor Market Policies: Getting the Unemployed into Jobs

• Improve matching of workers and jobs
  • Assist in job search

• Improve quality of labor supply
  • Business training, vocational training

• Provide direct labor incentives
  • Job creation schemes such as public works


Active Labor Market Policies

Public Expenditure as % of GDP in OECD Countries, 2007 (OECD Stat)

ACTIVE LABOR MARKET POLICIES
  10: PES and administration                      0.15
  20: Training                                    0.14
  30: Job rotation and job sharing                0.00
  40: Employment incentives                       0.10
  50: Supported employment and rehabilitation     0.09
  60: Direct job creation                         0.05
  70: Start-up incentives                         0.01
  TOTAL ACTIVE                                    0.56

PASSIVE LABOR MARKET POLICIES
  80: Out-of-work income maintenance and support
      (incl. unemployment insurance)              0.64
  90: Early retirement                            0.11
  TOTAL PASSIVE                                   0.75


International Evidence on Effectiveness of ALMPs

• Active Labor Market Policy Evaluations: A Meta-Analysis. By David Card, Jochen Kluve, and Andrea Weber (2009)
  • Review of 97 studies between 1995-2007

• The Effectiveness of European Active Labor Market Policy. By Jochen Kluve (2006)
  • Review of 73 studies between 2002-2005


Do ALMPs Help the Unemployed Find Work? (Card et al. (2009), Kluve (2006))

• Subsidized public sector employment
  • Relatively ineffective

• Job search assistance (often least expensive)
  • Generally favorable, especially in the short run
  • Combined with sanctions (e.g. UK “New Deal”), promising

• Classroom and on-the-job training
  • Not especially favorable in the short run
  • More positive impacts after 2 years


Do ALMPs Help the Unemployed Find Work? (Card et al. (2009), Kluve (2006))

• ALMPs targeted at youth
  • Findings mixed


The Impact Evaluation Challenge

• Impact is the difference in outcomes with and without the program, for the beneficiaries who participate in it

• Problem: beneficiaries have only one existence; they either participate in the program or they do not, so both outcomes can never be observed for the same person (formalized below)
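
In the potential-outcomes notation standard in this literature (my notation, not the slides'), the impact for beneficiary $i$ is

$$\tau_i = Y_i(1) - Y_i(0)$$

where $Y_i(1)$ is beneficiary $i$'s outcome with the program and $Y_i(0)$ the outcome without it. Only one of the two is ever observed for any given person, so the other, the counterfactual, must be estimated from a comparison group.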


Impact Evaluation Challenge: before-after comparison OK?

[Chart: a beneficiary's income rises from $1000 before skills training to $2000 after]

Program impact = $1000 extra income?

Income for the beneficiary increases from $1000 to $2000 after training.


Impact Evaluation Challenge: before-after often incorrect

[Chart: without skills training, the same person's income would have risen from $1000 to only $1500]

NO! Program impact = $500, not $1000.

Income for the same person, but without training, would have increased from $1000 to $1500 because of the improving economy.
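
Using the slides' numbers, the bias of the before-after estimator can be made explicit (a worked example; the $1500 counterfactual comes from the slide):

$$\text{Before-after estimate} = \$2000 - \$1000 = \$1000$$
$$\text{True impact} = \$2000 - \$1500 = \$500$$
$$\text{Bias} = \$1500 - \$1000 = \$500$$

The $500 bias is the economy-driven income growth wrongly attributed to the training.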


Impact Evaluation Challenge

• Solution: a proper comparison group

• Comparison group outcomes must be identical to what the treatment group's outcomes would have been, had the treatment group not participated in the program


Impact Evaluation Approaches

Ex ante:
1. Randomized evaluations
2. Double-difference (DD) methods

Ex post:
3. Propensity score matching (PSM)
4. Regression discontinuity (RD) design
5. Instrumental variable (IV) methods


Random Assignment

[Chart: both groups start at $1000; after the program, the treatment group earns $2000 and the comparison group $1500]

Program impact = $2000 - $1500 = $500

Income of the comparison group is $1500; income of the treatment group is $2000.
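
A minimal simulation (illustrative only; the dollar amounts mirror the slides, and all names are mine) shows why a simple difference in mean outcomes recovers the true impact under random assignment:

```python
import random

random.seed(0)

TRUE_IMPACT = 500      # effect of skills training, as in the slides
ECONOMY_GROWTH = 500   # income growth everyone gets, program or not

# Baseline incomes of roughly $1000, with individual variation.
people = [1000 + random.gauss(0, 100) for _ in range(10_000)]

# Random assignment: a coin flip decides who gets the training.
treatment, comparison = [], []
for baseline in people:
    if random.random() < 0.5:
        treatment.append(baseline + ECONOMY_GROWTH + TRUE_IMPACT)
    else:
        comparison.append(baseline + ECONOMY_GROWTH)

def mean(xs):
    return sum(xs) / len(xs)

# With randomization, the difference in mean outcomes estimates the impact.
estimate = mean(treatment) - mean(comparison)
print(f"Estimated impact: ${estimate:,.0f} (true impact: ${TRUE_IMPACT})")
# Prints roughly $500: ~$2000 for treatment vs ~$1500 for comparison.
```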


Randomized Assignment Ensures a Proper Comparison Group

• Ensures treatment and comparison groups are the same at the start of the program (in background characteristics and outcomes)

• Any differences that arise after the program must be due to the program, not to selection bias

• “Gold standard” for evaluations; not always feasible


Examples of Randomized ALMP Evaluations

• Improve matching of workers and jobs
  • Counseling the unemployed in France

• Improve quality of labor supply
  • Providing vocationally focused training for disadvantaged youth in the USA (Job Corps)

• Provide direct labor demand / supply incentives
  • Canadian Self-Sufficiency Project


Challenges to Randomized Designs

• Cost

• Ethical concerns: withholding a potentially beneficial program may be unethical

• Ethical concerns must be balanced against:
  • programs cannot reach all beneficiaries anyway (and randomization may be the fairest allocation)
  • knowing the program impact may have large potential benefits for society …


Societal Benefits

• Rigorous findings lead to scale-up:

• Various US ALMP programs – funding by US Congress contingent on positive IE findings

• Opportunidades (PROGRESA) – Mexico

• Primary school deworming – Kenya

• Balsakhi remedial education – India


Ongoing (Randomized) Impact Evaluations: From the MIT Poverty Action Lab Website (2009)


World Bank’s Development Impact Evaluation Initiative (DIME)

• 12 Impact Evaluation Clusters, including:
  • Conditional Cash Transfers
  • Early Childhood Development
  • Education Service Delivery
  • HIV/AIDS Treatment and Prevention
  • Local Development
  • Malaria Control
  • Pay-for-Performance in Health
  • Rural Roads
  • Rural Electrification
  • Urban Upgrading
  • ALMP and Youth Employment


Other Evaluation Approaches

Ex ante:
1. Randomized evaluations
2. Double-difference (DD) methods

Ex post:
3. Propensity score matching (PSM)
4. Regression discontinuity (RD) design
5. Instrumental variable (IV) methods


Non-Randomized Impact Evaluations: “Quasi-Experimental Methods”

• Comparison group is constructed by the evaluator

• Challenge: the evaluator can never be sure whether the behaviour of the comparison group mimics that of the treatment group without the program: selection bias


Example: Suppose Only the Very Motivated Underemployed Seek Extra Skills Training

• Data on (very motivated) underemployed individuals who participated in skills training

• Construct a comparison group from (less motivated) underemployed individuals who did not participate in skills training

• DD method: the evaluator compares the increase in average incomes between the two groups (see the formula below)
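
The DD estimator referred to above can be written as (standard notation, not from the slides):

$$\widehat{\tau}_{DD} = \left(\bar{Y}^{\text{after}}_{T} - \bar{Y}^{\text{before}}_{T}\right) - \left(\bar{Y}^{\text{after}}_{C} - \bar{Y}^{\text{before}}_{C}\right)$$

where $T$ is the treatment group and $C$ the comparison group. This removes any fixed gap between the groups and any common time trend, but it still assumes that, without the program, both groups' incomes would have grown by the same amount (parallel trends), which differences in motivation violate.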


Double-Difference (DD) Method

[Chart: income trends for the treatment group and the non-randomized comparison group; the estimated program impact carries positive bias]
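
A sketch of the selection problem the chart illustrates (assumed numbers; the “motivation payoff” is invented for illustration): motivated participants would have gained more even without training, so the DD estimate overstates the impact.

```python
import random

random.seed(1)

ECONOMY_GROWTH = 500      # common income growth, program or not
TRUE_IMPACT = 500         # true effect of the skills training
MOTIVATION_PAYOFF = 300   # assumed extra growth motivated people get anyway

def baseline_income():
    return 1000 + random.gauss(0, 100)

# Self-selection: the motivated sign up for training, the rest do not.
treated_before = [baseline_income() for _ in range(5_000)]
control_before = [baseline_income() for _ in range(5_000)]

treated_after = [y + ECONOMY_GROWTH + MOTIVATION_PAYOFF + TRUE_IMPACT
                 for y in treated_before]
control_after = [y + ECONOMY_GROWTH for y in control_before]

def mean(xs):
    return sum(xs) / len(xs)

# Double difference: income change for treated minus change for comparison.
dd = ((mean(treated_after) - mean(treated_before))
      - (mean(control_after) - mean(control_before)))
print(f"DD estimate: ${dd:,.0f} vs true impact ${TRUE_IMPACT}")
# Prints ~$800: the $300 motivation payoff shows up as positive bias.
```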


Non-Experimental Design

• May provide an unbiased impact estimate
• Relies on assumptions regarding the comparison group
• These assumptions are usually impossible to verify

• Bias is generally smaller when the evaluator has detailed background variables (covariates)


Assessing Validity of Non-Randomized Impact Evaluations

• Verify that pre-program characteristics are the same between the treatment and comparison groups

• Test the ‘impact’ of the program on an outcome variable that should not be affected by the program (a placebo outcome)

• Note: both checks hold by construction in a properly designed randomized evaluation
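
A sketch of the first check in practice (the data file and column names are hypothetical): a two-sample t-test for balance on each pre-program covariate.

```python
import pandas as pd
from scipy import stats

# Hypothetical dataset: one row per person, pre-program covariates plus
# a 0/1 treatment indicator. File and column names are illustrative.
df = pd.read_csv("baseline_survey.csv")

for covariate in ["age", "education_years", "income_before"]:
    treated = df.loc[df["treated"] == 1, covariate]
    comparison = df.loc[df["treated"] == 0, covariate]
    # Two-sample t-test: a large difference (small p-value) signals that
    # the comparison group may not be a valid counterfactual.
    t_stat, p_value = stats.ttest_ind(treated, comparison)
    print(f"{covariate}: treated mean={treated.mean():.1f}, "
          f"comparison mean={comparison.mean():.1f}, p={p_value:.3f}")
```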


Conclusion

• Everything else equal, experimental designs are preferred; assess case-by-case

• Most appropriate when:
  • a new program is in its pilot phase
  • a program is past the pilot phase but receives large amounts of resources and its impact is questioned

• Non-experimental evaluations are often cheaper; interpreting their results requires more scrutiny


THANK YOU!


Impact Evaluation Resources

• World Bank (2010), “Handbook on Impact Evaluation: Quantitative Methods and Practices” by Khandker et al.

• www.worldbank.org/sief
• www.worldbank.org/dime
• www.worldbank.org/impactevaluation
• www.worldbank.org/eca/impactevaluation (last site coming soon)

• http://ec.europa.eu/regional_policy/sources/docgener/evaluation/evaluation_en.htm

• www.povertyactionlab.org
• http://evidencebasedprograms.org/
