
Page 1: Evaluating the effectiveness of innovation policies

Evaluating the effectiveness of innovation policies

Lessons from the evaluation of Latin American Technology Development Funds

Micheline Goedhuys, [email protected]

Page 2: Evaluating the effectiveness of innovation policies

June 12, 2008 — DEIP, Amman, June 10-12, 2008

Structure of presentation

1. Introduction to the policy evaluation studies: policy background; features of TDFs; evaluation setup (outcomes to be evaluated, data sources)

2. Evaluation methodologies: the evaluation problem; addressing selection bias

3. Results from the Latin American TDF evaluation: example of results, summary of results, concluding remarks

Page 3: Evaluating the effectiveness of innovation policies


1.A. Introduction: Policy background

Constraints to performance in Latin America:

S&T falling behind in relative terms: a small and declining share of world R&D investment; an increasing gap with developed countries; falling behind other emerging economies

Low participation by the productive sector in R&D investment, driven by: lack of a skilled workforce with technical knowledge; macroeconomic volatility; financial constraints; weak IPR; low quality of research institutes; lack of mobilized government resources; a rentier mentality

Page 4: Evaluating the effectiveness of innovation policies


1.A. Introduction: Policy background

Policy response: a shift in policy

From a focus on the promotion of scientific research activities in public research institutes, universities and state-owned enterprises (SOEs)

To (1990s onward) the needs of the productive sector, with instruments that foster the demand for knowledge by end users and support the transfer of know-how to firms

TDFs emerged as an instrument of S&T policy

Page 5: Evaluating the effectiveness of innovation policies


1.A. Introduction: Policy background

IDB: evaluating the impact of a sample of frequently used IDB S&T programmes and instruments:

Technology Development Funds (TDFs): stimulate innovation activities in the productive sector through R&D subsidies

Competitive Research Grants (CRGs)

OVE coordinated and compiled the results of the TDF evaluations in Argentina, Brazil, Chile and Panama (Colombia)

Page 6: Evaluating the effectiveness of innovation policies


1.B. Introduction: Selected TDFs

Country     Period       Name            Tool
Argentina   1994-2001    FONTAR-TMP I    Targeted credit
Argentina   2001-2004    FONTAR ANR      Matching grants
Brazil      1996-2003    ADTEN           Targeted credit
Brazil      1999-2003    FNDCT           Matching grants
Chile       1998-2002    FONTEC-line 1   Matching grants
Panama      2000-2003    FOMOTEC         Matching grants

Page 7: Evaluating the effectiveness of innovation policies


1.B. Introduction: features of TDFs

Demand driven
Subsidy
Co-financing
Competitive allocation of resources
Execution by a specialised agency

Page 8: Evaluating the effectiveness of innovation policies


1.C. Introduction: evaluation setup

Evaluation of TDFs at recipient (firm) level

Impact on :

R&D input additionality
Behavioural additionality
Innovative output
Performance: productivity, employment, and their growth

Page 9: Evaluating the effectiveness of innovation policies


[Diagram: logic model of TDF impacts. Inputs (R&D investment) lead to outcomes (internal organization, external relations, innovative output) and ultimately to performance, over the short, medium and long term.]

Page 10: Evaluating the effectiveness of innovation policies


Indicators and data sources:

Input additionality: amount invested by beneficiaries in R&D. Sources: firm balance sheets; innovation surveys; industrial surveys.

Behavioural additionality: product/process innovation; linkages with other agents in the NIS. Sources: innovation surveys.

Innovative outputs: patents; sales due to new products. Sources: patent databases; innovation surveys.

Performance: total factor productivity; labour productivity; growth in sales, exports and employment. Sources: firm balance sheets; innovation surveys; industrial surveys; labour surveys.

Page 11: Evaluating the effectiveness of innovation policies


2.A. The evaluation problem (in words)

To measure the impact of a program, the evaluator is interested in the counterfactual question: what would have happened to the beneficiaries if they had not had access to the program?

This, however, is not observed; it is unknown.

We can only observe the performance of non-beneficiaries and compare it to the performance of beneficiaries.

Page 12: Evaluating the effectiveness of innovation policies


2.A. The evaluation problem (in words)

This comparison, however, is not sufficient to tell us the impact of the program; it yields correlations, not causality.

Why not? Because there may be a range of characteristics that affect both the likelihood of accessing the program AND performance on the outcome indicators (e.g. R&D intensity, productivity)

e.g. firm size, age, exporting status...

Page 13: Evaluating the effectiveness of innovation policies


2.A. The evaluation problem (in words)

This means that 'being in the treatment group or not' is not the result of a random draw: firms select into a specific group along both observable and non-observable characteristics.

The effect of selection has to be taken into account if one wants to measure the impact of the program on the performance of firms.

More formally:

Page 14: Evaluating the effectiveness of innovation policies


2.A. The evaluation problem

Define:

Y_i^T = the average innovation expenditure of firm i in a specific year if the firm participates in the TDF, and

Y_i^C = the average expenditure of the same firm if it does not participate in the program.

Measuring the program impact requires measuring the difference (Y_i^T − Y_i^C), which is the effect of having participated in the program for firm i.

Page 15: Evaluating the effectiveness of innovation policies


2.A. The evaluation problem

Computing (Y_i^T − Y_i^C) requires knowledge of the counterfactual outcome, which is not empirically observable, since a firm cannot be observed simultaneously as a participant and as a non-participant.

Page 16: Evaluating the effectiveness of innovation policies


2.A. The evaluation problem

By comparing data on participating and non-participating firms, we can estimate an average effect of program participation:

E[Y_it^T | D_i = 1] − E[Y_it^C | D_i = 0]

Subtracting and adding E[Y_it^C | D_i = 1]:

E[Y_it^T | D_i = 1] − E[Y_it^C | D_i = 0]
  = E[Y_it^T | D_i = 1] − E[Y_it^C | D_i = 1] + E[Y_it^C | D_i = 1] − E[Y_it^C | D_i = 0]
  = E[Y_it^T − Y_it^C | D_i = 1] + ( E[Y_it^C | D_i = 1] − E[Y_it^C | D_i = 0] )

The first term is the average effect of the program on participants; the second term is the selection bias.
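This decomposition can be illustrated with a small simulation. The sketch below is hypothetical and not part of the slides: all variable names and numbers are invented, and the true effect is fixed at 0.5 by construction.

```python
# Illustrative simulation (hypothetical data): the naive comparison of
# participants and non-participants equals the true treatment effect
# plus a selection-bias term.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

size = rng.normal(0, 1, n)                 # observable firm characteristic
# larger firms are more likely to join the program (self-selection)
participates = rng.random(n) < 1 / (1 + np.exp(-size))

true_effect = 0.5                          # assumed impact on R&D spending
y_control = 1.0 + 0.8 * size + rng.normal(0, 1, n)  # outcome without program
y_treated = y_control + true_effect                  # outcome with program

# only one potential outcome per firm is ever observed
y_observed = np.where(participates, y_treated, y_control)

naive = y_observed[participates].mean() - y_observed[~participates].mean()
bias = y_control[participates].mean() - y_control[~participates].mean()

print(f"naive difference: {naive:.2f}")    # true effect + selection bias
print(f"selection bias:   {bias:.2f}")     # positive, since size drives both
print(f"naive - bias:     {naive - bias:.2f}")
```

Because the treatment effect is constant here, subtracting the bias term from the naive difference recovers exactly 0.5, mirroring the decomposition above.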

Page 17: Evaluating the effectiveness of innovation policies


2.A. The evaluation problem

Only if there is no selection bias will the average effect of program participation give an unbiased estimate of the program impact.

There is no selection bias if participating and non-participating firms are similar with respect to the dimensions that are likely to affect both the level of innovation expenditures and TDF participation,

e.g. size, age, exporting status and solvency, which affect both R&D expenditures and the decision to apply for a grant.

Page 18: Evaluating the effectiveness of innovation policies


2.B. The evaluation problem avoided

Incorporating randomized evaluation in programme design

Random assignment of treatment (participation in the program) would imply that there are no pre-existing differences between treated and non-treated firms, so the selection bias is zero.

This is, however, hard to implement for certain types of policy instruments.
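A hypothetical simulation (invented numbers, not from the slides) of why random assignment makes the naive comparison unbiased: when treatment ignores firm characteristics, the two groups are comparable by construction.

```python
# Under random assignment, the naive treated-vs-untreated difference
# is an unbiased estimate of the treatment effect (here fixed at 0.5).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

size = rng.normal(0, 1, n)            # still affects the outcome...
treated = rng.random(n) < 0.5         # ...but NOT the assignment

y = 1.0 + 0.8 * size + 0.5 * treated + rng.normal(0, 1, n)

naive = y[treated].mean() - y[~treated].mean()
print(f"naive difference under randomization: {naive:.2f}")  # ≈ 0.5
```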

Page 19: Evaluating the effectiveness of innovation policies


2.B. Controlling for selection bias

Controlling for observable differences:

Develop a statistically robust control group of non-beneficiaries

Identify comparable participating and non-participating firms, conditional on a set of observable variables X; in other words, control for the pre-existing observable differences

using econometric techniques, e.g. propensity score matching

Page 20: Evaluating the effectiveness of innovation policies


2.B. Propensity score matching (PSM)

If only one dimension (e.g. size) affected both treatment (participation in the TDF) and outcome (R&D intensity), it would be relatively simple to find pairs of matching firms.

When treatment and outcome are determined by a multidimensional vector of characteristics (size, age, industry, location...), this becomes problematic.

The solution: find pairs of firms that have an equal or similar probability of being treated (receiving TDF support).

Page 21: Evaluating the effectiveness of innovation policies


2.B. PSM

Using probit or logit analysis on the whole sample of beneficiaries and non-beneficiaries, we estimate the probability (P), or propensity, that a firm participates in the program:

P(D = 1) = F(X)

where X is a vector of observable characteristics.

Purpose: to find for each participant (D = 1) at least one non-participant with an equal or very similar chance of participating, which is then selected into the control group.

Page 22: Evaluating the effectiveness of innovation policies


2.B. PSM

This reduces the multidimensional problem of several matching criteria to one single measure of distance.

There are several measures of proximity, e.g. nearest neighbour, predefined range (caliper), kernel-based matching...

Page 23: Evaluating the effectiveness of innovation policies


2.B. PSM

Estimating the impact (Average effect of Treatment on Treated):

ATT = E[ E(Y1 | D = 1, p(x)) − E(Y0 | D = 0, p(x)) | D = 1 ]

where Y is the impact variable,

D ∈ {0, 1} is a dummy variable for participation in the program,

x is a vector of pre-treatment characteristics, and

p(x) is the propensity score.
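The two PSM steps (estimate the propensity score with a logit, then match each participant to the nearest non-participant) can be sketched as follows. This is a hypothetical illustration on simulated data, not the TDF studies' actual procedure; it uses scikit-learn's LogisticRegression and 1-nearest-neighbour matching with replacement, and all variables and coefficients are invented.

```python
# Hypothetical PSM sketch: logit propensity scores + 1-NN matching on
# simulated firm data with a built-in treatment effect of 0.5.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2_000

X = rng.normal(0, 1, (n, 3))               # e.g. size, age, export share
# selection into the program depends on X (confounding)
p_true = 1 / (1 + np.exp(-(X @ np.array([1.0, 0.5, 0.5]))))
D = rng.random(n) < p_true
# outcome depends on X too, plus the true effect of 0.5
Y = 1.0 + X @ np.array([0.8, 0.3, 0.3]) + 0.5 * D + rng.normal(0, 1, n)

# step 1: estimate the propensity score P(D = 1 | X) with a logit
p_hat = LogisticRegression().fit(X, D).predict_proba(X)[:, 1]

# step 2: match each treated firm to the untreated firm with the
# closest propensity score (1-NN, with replacement)
p_t, p_c = p_hat[D], p_hat[~D]
y_t, y_c = Y[D], Y[~D]
match = np.abs(p_t[:, None] - p_c[None, :]).argmin(axis=1)

# step 3: ATT = mean outcome gap between treated firms and their matches
att = (y_t - y_c[match]).mean()
print(f"estimated ATT: {att:.2f}")         # close to the true effect of 0.5
```

The naive treated-vs-untreated mean difference would be inflated by selection here; matching on the estimated propensity score brings the estimate back near the true effect.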

Page 24: Evaluating the effectiveness of innovation policies


2.B. Difference in difference (DID)

The treated and control groups of firms may also differ in non-observable characteristics, e.g. management skills.

If panel data are available (data for pre-treatment and post-treatment time periods), the impact of unobservable differences and common time shocks can be neutralised by taking the difference-in-differences of the impact variable.

Important assumption: the unobservables do not change over time.

In the DID case, the impact variable is a growth rate.
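The DID computation itself is simple once the four group means are available. A minimal sketch on made-up numbers (the groups and values below are invented for illustration):

```python
# Minimal difference-in-differences sketch on hypothetical group data.
import numpy as np

# R&D intensity before/after the program, treated and control firms
before_t = np.array([0.07, 0.09, 0.08])
after_t = np.array([0.19, 0.21, 0.20])
before_c = np.array([0.21, 0.23, 0.22])
after_c = np.array([0.14, 0.16, 0.15])

# DID = (change among treated) - (change among controls); differencing
# removes time-invariant unobservables and common time shocks
did = (after_t.mean() - before_t.mean()) - (after_c.mean() - before_c.mean())
print(f"DID estimate: {did:.2f}")  # 0.12 - (-0.07) = 0.19
```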

Page 25: Evaluating the effectiveness of innovation policies


3. Example of results

Impact of ADTEN (Brazil) on (private) R&D intensity

Single difference in 2000, after PSM: (R&D/sales in 2000, beneficiaries) − (R&D/sales in 2000, control group)

92 observations in each group:
Beneficiaries: 1.18%
Control group: 0.52%
Difference: 0.66 percentage points

A positive and significant impact, net of the subsidy.

Page 26: Evaluating the effectiveness of innovation policies


3. Example of results

Impact of FONTAR-ANR (Argentina) on (public + private) R&D intensity (= R&D expenditures / sales)

Difference-in-differences with PSM, 37 observations in each group:

DID = (R&D intensity after ANR, beneficiaries − before ANR, beneficiaries) − (after ANR, control − before ANR, control)

Beneficiaries: 0.20 − 0.08 = 0.12
Control group: 0.15 − 0.22 = −0.07
DID: 0.12 − (−0.07) = 0.19

A positive and significant impact, GROSS of the subsidy.

Page 27: Evaluating the effectiveness of innovation policies


3. Results: summary

The impact of the programs on firm behaviour and outcomes becomes progressively weaker as one moves further from the immediate target of the policy instrument:

There is clear evidence of a positive impact on R&D,

weaker evidence of some behavioural effects, and

almost no evidence of an immediate positive impact on new product sales or patents.

This is to be expected, given the relatively short time span over which the impacts were measured.

Page 28: Evaluating the effectiveness of innovation policies


3. Results

There is no clear evidence that the TDFs significantly affect firms' productivity and competitiveness within a five-year period, although there is a suggestion of positive impacts.

However, these outcomes, which are often the general objective of the programs, are more likely to reflect the longer-run impact of policy.

The evaluation also does not take into account potential positive externalities that may result from the TDFs.

Page 29: Evaluating the effectiveness of innovation policies


3. Results

The evaluation design should clearly identify:

the rationale and the short-, medium- and long-run expected outcomes;

periodic collection of primary data on the programs' beneficiaries and on a group of comparable non-beneficiaries;

repetition of the evaluation on the same sample, so that long-run impacts can be clearly identified;

periodic repetition of the impact evaluation on new samples, to identify potential needs for re-targeting of policy tools.

Page 30: Evaluating the effectiveness of innovation policies


3. Concluding remarks

The data needs of this type of evaluation are evident.

The involvement and commitment of statistical offices is needed to merge the survey data that make these analyses possible.

The merging and accessibility of several data sources create unprecedented opportunities for the evaluation and monitoring of policy instruments.

Thank you!