

FINAL REPORT

EVALUATION IN V4+4 COUNTRIES

- AN OVERVIEW OF PRACTICES

Karol Olejniczak, Tomasz Kupiec, Dominika Wojtowicz, Weronika Felcis

Warsaw, October 2017


TABLE OF CONTENTS

ACRONYMS
EXECUTIVE SUMMARY
INTRODUCTION
1 THE BIG PICTURE: BROKERS IN THE COHESION POLICY SYSTEM
1.1 THE LOGIC OF THE COHESION POLICY SYSTEM
1.2 THE COHESION POLICY SYSTEM: THE EC PERSPECTIVE
1.3 THE SCALE AND ORGANIZATION OF SYSTEMS
1.4 BROKERS' ROLE IN THE SYSTEM
2 DETAILED ACTIVITIES OF KNOWLEDGE BROKERS
2.1 IDENTIFYING THE KNOWLEDGE NEEDS OF USERS
2.2 ACQUIRING KNOWLEDGE
2.3 DISSEMINATING KNOWLEDGE
2.4 ACCUMULATING KNOWLEDGE
2.5 NETWORKING AND BUILDING AN EVIDENCE CULTURE
3 PRODUCTS, USERS AND IMPACT
4 FACTORS THAT INFLUENCE BROKERS' PERFORMANCE
4.1 CAPACITIES OF EVALUATION UNITS
4.2 THE ENVIRONMENT OF EVALUATION UNITS
4.3 SYSTEMS' STRENGTHS AND WEAKNESSES
5 IMPLICATIONS FOR PRACTICE
REFERENCES
LIST OF TABLES AND FIGURES
ANNEXES


ACRONYMS

Acronym Definition

BUL Bulgaria
CAWI_EU Computer-Assisted Web Interview of evaluation units
CAWI_U Computer-Assisted Web Interview of knowledge users
CF Cohesion Fund
CP Cohesion Policy
CRIE Centre for Research on Impact Evaluation
CRO Croatia
CZE Czech Republic
EAFRD European Agricultural Fund for Rural Development
EC European Commission
ERDF European Regional Development Fund
ESF European Social Fund
ESIF European Structural and Investment Funds
EU European Union
HUN Hungary
IB Intermediate Body
IDI Individual In-depth Interview
IT Information Technologies
MA Managing Authority
MS Member State
NOD Central coordinating body for the national-level CP evaluation system
OP Operational Programme
POL Poland
ROM Romania
SLO Slovenia
SVK Slovakia
TA Technical Assistance
TOR Terms of Reference


EXECUTIVE SUMMARY

What the study is about

The goal of this study is to help evaluation units in their evolution from mere producers of isolated reports into real knowledge brokers. Brokers are the animators of reflexive policy learning who support decision makers with research-based knowledge. Thanks to this learning process, Cohesion Policy programs and projects are more effective in developing communities and serving citizens. We address three questions: (1) to what extent do evaluation units currently perform knowledge brokering activities, (2) what enables or limits their performance, and (3) what solutions could be introduced, in terms of capacity-building practices, knowledge delivery strategies and modifications of the regulatory framework, to improve the performance of evaluation units acting as knowledge brokers.

The study is based on: a survey of 74 staff from evaluation units in Bulgaria, Croatia, the Czech Republic, Hungary, Poland, Romania, Slovakia and Slovenia¹; a survey of selected users of evaluation reports in each country; 27 interviews with representatives of leading evaluation units, local evaluation experts or members of evaluation societies in each country; 7 interviews with European Commission representatives; and desk research of existing English-language sources.

What we have found out

Chapter 1 clarifies the role and position of evaluation units in the Cohesion Policy system. The evaluation culture of CP is generally more developed and sophisticated than that of other EU or domestically funded policies, which is reflected in the large number of evaluation studies produced by MS and by the EC itself. Evaluation results constitute one source of knowledge used in evidence-based policy programming and implementation, and serve to inform EU institutions, European citizens and other stakeholders about CP effectiveness.

With the introduction of each new implementation period, the regulations shaping the evaluation system of programs realized under CP have changed. These include issues legally sanctioned under the Regulation as well as actions undertaken by the EC. The modifications, introduced gradually, have brought about a “positive revolution”, and the 2014-2020 system is significantly improved compared with previous programming periods. The main modifications include a shift in emphasis from process-oriented to results-oriented evaluations, the introduction of Evaluation Plans, and the adoption of solutions providing more effective communication and follow-up on evaluation findings.

The direction of changes set out by the legal framework for 2014-2020 is promising. The obligations imposed on MS provide a basis for ensuring structured and coherent actions in conducting evaluations. Activities taken up by the EC (Helpdesk, promotion of best practices), including support offered to MS, as well as efforts undertaken by MS to develop an evaluation culture should help to improve the quality of evaluations and increase the utility of their findings. Discussion of any further changes should be postponed until it is possible to accurately determine and assess the effectiveness of the solutions already introduced for the period 2014-2020.

¹ Out of 80 staff approached – a response rate of 93%.


The significant differences in CP transfers are matched by variation in evaluation systems, which differ in their degree of centralization as well as in the number and potential of the units involved in evaluation. Centralized evaluation systems function in Hungary, Slovenia and Romania, while decentralized evaluation systems have been adopted in the biggest CP beneficiary, Poland, as well as in Bulgaria, the Czech Republic and Slovakia - countries receiving significantly smaller amounts under CP. The role played within these decentralized systems varies from country to country, as it is assigned to and executed by their central evaluation units. Their functions may be limited to providing information and ensuring conformity with EC requirements, leaving the MAs free to decide on the methods used for evaluation, the dissemination of results, the organization of follow-up processes, etc. (in the case of Bulgaria), or may be enhanced by various supporting activities (in the case of Poland, the Czech Republic and Slovakia).

Chapter 2 provides detailed information on the activities of knowledge brokers. The fundamentals of knowledge brokering are identifying knowledge needs, acquiring knowledge and disseminating results. However, in order to become connectors of producers and users of knowledge, translators of sector-specific research reports into useful and targeted evidence for decision making, and facilitators of an evaluation culture, evaluation units also depend on adequate resources, capacity-building activities and a good network of stakeholders.

Additional constraints imposed on evaluation units - unstable institutional structures, changing procurement law, the influence of political interests, blurred policy cycles and the limited interest in accountability among many upper managers - temper their aspirations and creativity in seeking a role as knowledge brokers.

This is not to say that the evaluation units we encountered could not focus their self-reflection better, develop their role within their departments more strategically, and reach out more effectively to knowledge users. They should. Moreover, expressing knowledge needs explicitly and holding constructive debates with researchers on how to improve the co-construction of evaluation products are surely additional aims for the future.

The study has highlighted two more important issues. Firstly, networking is still challenging in all of the countries involved in this research, due to the generally low-trust culture and the power relations involved in the supply-demand exchange. Secondly, the dissemination of results is often constructed routinely, and knowledge from studies is neither appropriately accumulated over time nor made sufficiently accessible, comparable and digestible as absorbable evidence. Evaluation units should seek to become knowledge hubs not only on the basis of the external studies they commission, but also owing to their own internal research and analysis capabilities.

The general observation is that evaluation unit employees often feel trapped in a vicious circle: they want to become more creative and innovative in the methods they use for brokering knowledge from evaluation, but remain constrained by limited resources, little interest from top management and, occasionally, the unsatisfactory quality of research, all of which limit the use of evidence.

It is important for knowledge brokers to reflect on possible ways to feed the results of research into the learning systems of institutions, how to open up forums for knowledge exchange (e.g. by regular meetings at the institutional level, or social media management programs) and how to accumulate knowledge in a way that makes it responsive to critical questions at different moments (e.g. by creating knowledge clinics; Olejniczak, Raimondo, & Kupiec, 2016).


Chapter 3 discusses the products provided by evaluation units, the users of evaluative knowledge and its impact. The key types of users that evaluation units focus on are the managers of other units and the senior staff of their own institutions - an inward orientation. Evaluation units believe they provide a comprehensive and balanced stream of all necessary types of knowledge about processes, effects, and the mechanisms of change. This is not entirely in accordance with the view of coordinating bodies and, more importantly, of users. Users argue that they most frequently receive process knowledge, while what they need most is knowledge about mechanisms of change.

Since evaluation does not meet all needs (though certainly not only for that reason), users rely on various sources of information. The most popular source, irrespective of knowledge type, is their own experience and discussions with colleagues. The second choice is usually monitoring. Evaluation was ranked 7th, 3rd and 4th for process, effects and mechanisms knowledge respectively, and was never indicated as an important source by more than 57% of respondents.

Chapter 4 analyses the factors that influence the performance of evaluation units acting as knowledge brokers. Asked about internal capacity, the large majority of evaluation units stated that the budget they possess is sufficient and does not limit their evaluation activities. They also see their skills and experience as a strength (coordinating bodies do not always agree), but this is compromised by the insufficient number of staff and the limited time available.

In terms of the external environment, evaluation units mostly praise and criticize the regulatory framework. Praise is given to the EU regulation, which is seen as a key driver of the development of evaluation practice. Criticism is aimed at domestic regulations, almost exclusively at public procurement law. The majority of CAWI respondents also believe that users perceive their work as credible, but again, coordinating bodies are less optimistic about this, as well as about attitudes toward evidence-based decision-making. The capacity of external producers arose as a barrier but, as experts note, this is often a result of insufficient demand (in terms of both quality and quantity).

What conclusions and recommendations for the future can be made

In the last part we discuss implications for practice. We propose transforming evaluation units into knowledge brokers - animators of reflexive policy learning who support decision makers with research-based knowledge. We propose an incremental transformation: building on existing elements, within existing regulatory and institutional frameworks, and aligned with the current responsibilities, activities and evolution of evaluation units. The transformation follows five steps: (1) focusing on the users of knowledge, (2) mapping the decision-making journey of users in order to make knowledge timely, (3) making knowledge interesting for users by developing learning portfolios, (4) strengthening the credibility of studies by co-designing solutions, and (5) making learning easier to access. Thanks to these steps, Cohesion Policy programs and projects could be more effective in developing communities and serving citizens.


INTRODUCTION

In order to run Cohesion Policy interventions successfully, policy practitioners need research-based knowledge on the nature of policy problems, the possible solutions, smooth ways of implementing them in local contexts, and feedback on which solution really worked for the development of local and regional communities.

Evaluation studies could provide practitioners with such insight, which is indispensable for the effective design and implementation of Cohesion Policy. However, current research indicates that despite the extensive production of evaluation reports, the practitioners implementing Cohesion Policy still have limited insight into "what works, for whom, in what context, and why" (Olejniczak, 2013; Wojtowicz and Kupiec, 2016; Kupiec, 2016).

Recent literature on evidence use in public policies argues that bringing credible and rigorous evidence to decision makers is not sufficient; the evidence needs to be ‘brokered’ (Olejniczak et al. 2016), that is, translated into a concrete organizational context and policy practice. This is because decision makers and researchers are driven by different imperatives and time frames, and use different languages. Studies point to "knowledge brokering" as an effective way of addressing this challenge (Meyer, 2010; Olejniczak et al. 2016).

Evaluation Units, due to their formal responsibility for policy analysis and assessment, are predestined to perform the role of Knowledge Brokers in the Cohesion Policy implementation system. They can improve the policy design and implementation process by streaming policy-relevant knowledge between knowledge producers (researchers, experts) and users (decision-makers, project managers).

The GOAL OF THIS STUDY is to help evaluation units in their evolution from mere producers of isolated reports into real knowledge brokers. Brokers are the animators of reflexive policy learning that support decision makers with research-based knowledge. Thanks to this learning, Cohesion Policy programs and projects can be more effective in developing communities and serving citizens.

This study ADDRESSES THREE QUESTIONS:

(1) To what extent do evaluation units perform knowledge brokering activities at the moment?

(2) What currently enables or limits their performance?

(3) What solutions could be introduced, in terms of capacity-building practices, knowledge delivery strategies and modifications of the regulatory framework, to improve the performance of evaluation units in acting as Knowledge Brokers?

The STUDY COVERS Cohesion Policy systems in V4+4 countries: Bulgaria, Croatia, the Czech Republic, Hungary, Poland, Romania, Slovakia and Slovenia. The territorial scope of the study was determined by the willingness of the states mentioned to participate. However, it is important to mention that these countries share quite similar paths of evaluation practice development, with evaluation being enforced by the regulations after joining the EU, and remaining limited almost exclusively to CP programs.

The study covers the situation during the last year (2016). However, in order to understand the current state of affairs, references are made to the evolution of the systems since accession to the European Union. We analyzed evaluation systems in their domestic and European environment, in relation to knowledge users and to a network of other public policy actors. Thus, references are made to domestic policies, which can be regarded as the context in which Cohesion Policy is embedded. Evaluation practices of other EU policies (e.g. EAFRD) are covered through analysis of the context in which brokers operate and through comparison with CP.

The analysis is based on the following DATA SOURCES: a survey of the heads of all Cohesion Policy evaluation units in the eight countries; a survey of selected users of evaluation reports in each country; 27 interviews with representatives of leading evaluation units, local evaluation experts or members of evaluation societies in each country; interviews with European Commission representatives; and desk research of existing English-language sources. The full methodology of our analysis is discussed in the Annex.

The report is divided into five parts:

Part 1 positions evaluation units in the overall Cohesion Policy system. We provide an overview of the Cohesion Policy process and regulatory framework that form the basis for the operations of national systems. We explain the perspective of the European Commission - its vision of the role of evaluation and evaluation units in the Cohesion Policy. We then show comparisons among countries in order to highlight the substantial differences in the scale and organization of the Cohesion Policy system as well as the size, position and role of the evaluation units acting as knowledge brokers.

Part 2 presents the main activities performed by evaluation units. Based on the survey and interview findings we discuss typical as well as less common practices of evaluation units in terms of identifying knowledge needs, acquiring, disseminating and accumulating knowledge, and building an evidence-based culture in national administrations.

Part 3 focuses on the results of evaluation units’ work, both in terms of the knowledge produced and the feedback received from the potential users of evaluation results. It should be noted that this part is mostly built on the perceptions and declarations of the brokers themselves. We tried to balance this with desk research (limited due to language issues) and with the voice of knowledge users (whose response was very limited, which could itself be an indicator of their interest in evaluation).

Part 4 explores the main factors that positively or negatively influence the work of evaluation units. We distinguished between capacities - that is, factors that brokers can control because they relate to their own characteristics - and environmental factors that are beyond the control, or sometimes even the influence, of evaluation units.

Part 5 sums up our observations and offers implications for future practice. We propose transforming evaluation units into knowledge brokers and explain the core of brokers' logic for influencing users: the UTILE strategy. We then explain how evaluation units can progress towards knowledge brokering in five steps: focusing on users, understanding users’ journey, developing a learning portfolio, co-designing solutions, and making knowledge easy to access.


1 THE BIG PICTURE: BROKERS IN THE COHESION POLICY SYSTEM

1.1 THE LOGIC OF THE COHESION POLICY SYSTEM

Policy implementation is a complex process that has been discussed in the management literature since well before the introduction of the European Union Cohesion Policy (May 2003). Looking beyond detailed arrangements of national systems, we can see a bigger picture of the general logic of Cohesion Policy delivery. Figure 1 presents this big picture, allowing us to see where knowledge brokers are placed in the system, who they could serve and how.

Figure 1 The universal logic of policy delivery

Source: Own work based on Ostrom (2005)

The overall logic is that public funds (EUR) are transferred, in the form of monetary aid or service activities, through the policy implementation system to certain target groups in society. In a favorable socio-economic context, this aid should help beneficiaries start doing things differently, and this should eventually lead to a positive, sustainable socio-economic change in local or regional communities. Thus, the ultimate goal of policy is desirable socio-economic change that responds to certain challenges and problems. And public funds are used to modify the behaviors of targeted beneficiaries to bring about a positive change.

The system of policy implementation is an institutional and procedural mechanism of public administration responsible for targeting the most promising beneficiaries and delivering aid smoothly (legally and on time). As we can see in Figure 1, public institutions assigned to the policy implementation system can engage in three groups of processes. "Strategic planning" aims at producing strategic documents, objectives and targets for interventions. It encompasses activities such as: (a) diagnosis and planning, (b) consultation and negotiations, and (c) coordination and alignment with the changing environment.

"Operational processes" focus on spending and absorbing financial aid in a timely and legal manner. These cover sub-processes of (a) information and promotion given to beneficiaries - potential project applicants, (b) application and selection of the most promising project beneficiaries, and (c) financial management.

Finally, "Knowledge delivery" activities aim at producing knowledge to improve the system operations (single loop learning) and to gain better understanding of the socio-economic phenomena that are addressed by the Cohesion Policy (double loop learning) (Argyris, Schon 1995; Fiol, Lyles 1985). They encompass: evaluation, monitoring, performance audit, acquisition of expertise and other sources.

The outcomes of policy delivery are measured by indicators. The ultimate success indicator is positive socio-economic change. However, the observable effects of change are often delayed in time, so assessing policy delivery by its final outcomes is difficult. Thus, policy actors use more process-oriented indicators: the level of fund absorption and the number of products delivered (in terms of infrastructure built, services provided, projects executed). They assume that the timely and legal use of public money by beneficiaries is a proxy for successful policy delivery. In practice, these indicators say a lot about the efficiency of operational processes but little about the accuracy of the strategic orientation and the utility of the policy. The last indicator measures "knowledge gains" - lessons learnt and mistakes corrected or avoided over time. These can be used in the future for planning the next generation of policies and programs.

Stakeholders assess policy delivery and provide feedback to institutions of the implementation system. In the case of Cohesion Policy these stakeholders are countries that are net payers of the policy, public opinion of the EU member states, media, interest groups as well as European institutions - namely the European Commission and the European Parliament.

Evaluation units acting as knowledge brokers are part of the "Knowledge delivery" set of processes within the implementation system. They can provide information and evidence-based support to three different types of knowledge users.

The first group are users working on strategic processes. These are high level, elected decision-makers as well as senior civil servants responsible for building a strategic vision. Within public administration they are assisted by the staff of programming and coordination units. Their main knowledge needs are understanding the nature of the policy problem (diagnostic issues), deciding which stakeholders should be involved, identifying what could potentially be the most effective change mechanism and discerning which intervention types triggered it in the past (knowing what works and why).

The second group are operational users. These are managers responsible for policy implementation: project promotion, selection, financial flows and products. Their focus is on figuring out the most effective ways to deliver aid (know how) and on removing possible bottlenecks. But sometimes they also need strategic insight to understand what flaws in logic or changes in the program context can affect their performance.

The third group are stakeholders. This group is the most diversified one. It includes high-level decision makers (politicians), media and public opinion, as well as the EC and net payer countries. Their knowledge needs are quite straightforward - they need to know if the public aid has brought about the expected positive socio-economic change.

1.2 THE COHESION POLICY SYSTEM: THE EC PERSPECTIVE

Cohesion Policy has always been comprehensively scrutinized, mostly due to the size of the budget allocated, but also because it covers activities under many other EU policies (e.g. relating to innovation, SMEs, industrial policy). It has seen the emergence of an evaluation culture that is generally more developed and sophisticated than that of other EU or domestically funded regional development policies, which is reflected in the large number of evaluation studies produced by MS and by the EC itself (IDI_EC; see also: Fratesi, Wishlade 2017).

From the EC perspective, CP evaluation has several key functions, including:

Knowledge feeding evidence-based policy

The Better Regulation principles state that evidence-based policy should be an innate element of the decision-making process in the European Union. Evaluation results constitute one of the sources used in evidence-based policy programming and implementation. Evaluations provide information to assess how a specific intervention has performed and whether the objectives set by the EU have been met. They draw “conclusions on whether the EU intervention continues to be justified or should be modified to improve its effectiveness, relevance and coherence and/or to eliminate excessive burdens or inconsistencies or simply be repealed” (European Commission, Commission Staff Working Document SWD(2017) 350, Better Regulation Guidelines). Evaluation also provides the opportunity to look for the unintended and/or unexpected effects of EU action.

The knowledge gathered via evaluation studies supports decision-making, contributing to strategic planning and to the design of future interventions. A continuous flow of information is needed as – due to the “period gap” - the knowledge extracted from ex post evaluations cannot feed the discussion on the shape of programs which are planned to be implemented in the forthcoming period. Thus, the evaluations, along with other sources of information, serve to collect and analyze information on the performance of CP and provide evidence that can be used to ensure delivery of the most efficient CP interventions.

The use of evaluation findings differs according to the stage of the CP implementation in question:

ex ante evaluations – an instrument for assessment and negotiation in the process of shaping the operational programs to be implemented in a given seven-year period. As one of the evidence sources, they provide the European Commission with information on the context, rationales and options analyzed within the OP preparation process.

on-going evaluations – instruments supporting the implementation process by different stakeholders in the Member States. Process evaluations should serve MAs as a tool for effective implementation and organizational learning. The evaluation findings are used to improve the quality of an on-going intervention. Evaluations should not serve merely to justify decisions; they should identify areas for improvement. However, the EC also exploits the results, as information from on-going evaluations is one of the sources of knowledge used to monitor the proper implementation of CP. In the case of planned modifications to an operational program, the MA delivers an evaluation presenting the prospective effects and justifying the changes to be introduced.

ex post evaluations - the EC is responsible for conducting these evaluations, which are intended as “a more global, broader overview on CP implementation results and effects” (IDI_EC). They provide aggregated knowledge that enables comparisons between MS. The ex post evaluation knowledge is complementary to the information delivered continuously within annual implementation reports, individual on-going evaluations, monitoring data, etc. Ex post evaluations are used to report to the EU institutions (the Council, the European Parliament, the Court of Auditors and others) and to inform European citizens about CP efficiency.

In particular cases, the EC requests additional information, which requires conducting evaluation studies on specific issues. One example is the evaluation of the intervention foreseen for health system development in Poland. Investments in Polish health care are dispersed over 18 OPs, which hampers precise assessment of EU support in this sector. The EC, together with the Polish side, worked out a National Coordination Mechanism, and Poland was asked to deliver an evaluation of its efficiency. Another example relates to Hungary. From a monitoring review, the EC discerned that in one priority Hungary had reached the planned indicators two years before the end of the implementation period. Hence, the EC asked the MA for extra analyses to discover the reason for this deviation. Based on the evaluation findings, a decision was taken to make modifications.

From time to time, the evaluation findings provided in reports communicated to the EC feed discussions which go beyond CP, mostly concerning environmental issues (e.g. the evaluation of the replacement of coal-fired stoves in the Czech Republic or of the energy sector in Poland). These specific evaluation results are forwarded to other EU institutions (e.g. other DG units, other DGs, agencies, etc.) and/or interested stakeholders.

Transparency and accountability

Different stakeholders and the general public are systematically informed about what the EU has done and achieved within CP. Once a year, the EC is obliged to deliver to the European Parliament and to the Council a synthesis of the evaluations executed in the Member States, and it uses evaluation results to inform different stakeholders on CP performance. In this way, the Commission provides an account of its actions to all interested stakeholders and EU citizens.

2014-2020 evaluation system and rationale behind modifications

With the introduction of each new implementation period, the regulations shaping the evaluation system of programs realized under CP have changed. The general impression of our respondents on the modifications, which have been adopted gradually, is that a “positive revolution” has taken place and - as one respondent stated – “there is a huge difference between the evaluation system now and the one that functioned a decade ago” (IDI_EC).

The rationales behind the modifications introduced in the current implementation period were basically threefold. Firstly, a revision of the Regulation was introduced in response to identified weaknesses in the 2007-2013 evaluation system. The lessons learnt include the following:

Evaluations were not capturing the effects of interventions, since they focused mostly on implementation issues (a predominance of process evaluations over strategic and impact evaluations),

There was no clear vision on how the evaluation results should be used to change program implementation,

There were no guidelines for communicating the evaluation results to the general public and thus a large number of the evaluation reports were never made available to potentially interested stakeholders.

Secondly, the EC decided to formalize solutions which had been adopted in 2007-2013 by some Member States and which were assessed as good practices (i.e. preparation of evaluation plans and procedures for the evaluation results follow-up process).

Last but not least, the obligations imposed by the Regulation comply with the principles of Better Regulation, which “is a way of working to ensure that political decisions are prepared in an open, transparent manner, informed by the best available evidence and backed by the comprehensive involvement of stakeholders” (European Commission, Commission Staff Working Document SWD(2017) 350, Better Regulation Guidelines, p. 4).

Modifications of the CP evaluation system introduced for 2014-2020 concern issues that are legally sanctioned under the Regulation as well as practical solutions and actions undertaken by the EC to ensure the effectiveness of the system.

One of the main changes involves a strategic shift in the approach to the evaluation function. As noted above, the predominance of process evaluations delivered by MS in the previous implementation period prompted the EC to reflect on the role of evaluation in strategic planning. In consequence, the legislative framework for Cohesion Policy 2014-2020 shifts the emphasis from process-oriented to results-oriented evaluations. The EC guidance on evaluation promotes two main methodological approaches to be adopted within evaluation studies: theory-based impact evaluation and counterfactual impact evaluation. In line with these, more emphasis is also placed on monitoring and reporting. Taking into account the declarations included in the approved evaluation plans, a total of 2,084 impact evaluations is expected to be delivered by all MS within 2014-2020 (IDI_EC).
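To give a concrete sense of the counterfactual approach promoted in the EC guidance, the sketch below illustrates a difference-in-differences estimate, one common counterfactual impact evaluation design. It is a minimal illustration only, not drawn from this report or from EC guidance; all numbers and variable names are hypothetical.

```python
# Minimal difference-in-differences sketch of a counterfactual impact
# evaluation. All numbers and variable names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical outcome (e.g. a firm revenue index) for supported ("treated")
# and comparable non-supported ("control") firms, before and after support.
treated_before = rng.normal(100, 10, 500)
treated_after = rng.normal(112, 10, 500)   # intervention effect + common trend
control_before = rng.normal(100, 10, 500)
control_after = rng.normal(105, 10, 500)   # common trend only

# The control group's change approximates the counterfactual trend:
# what would have happened to supported firms without the intervention.
counterfactual_trend = control_after.mean() - control_before.mean()
observed_change = treated_after.mean() - treated_before.mean()

# Difference-in-differences: observed change minus counterfactual trend.
impact_estimate = observed_change - counterfactual_trend
print(f"Estimated impact: {impact_estimate:.1f}")  # roughly 7 in this toy setup
```

In a real evaluation the comparison group would be constructed carefully (e.g. by matching on beneficiary characteristics) and the estimate reported with standard errors; the point here is only the logic of subtracting the counterfactual trend from the observed change.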

The main requirements imposed on MAs include (Regulation (EU) No 1303/2013 of the European Parliament and of the Council, art. 54-56):

Drafting Evaluation Plans. Plans should be submitted for discussion and final approval to the Program Monitoring Committee. They should include process evaluations, concentrated on issues related to the management of the implementation process, and impact evaluations, focused on the effectiveness and impact of interventions undertaken under programs. The Plans can be updated according to needs emerging during the entire lifecycle of programs. The plans support the coordination of evaluations in MS and provide open space for a debate on what is needed. They also help Member States prepare for the evaluations that they are committed to commission (especially in terms of counterfactual analysis). As one respondent stressed: “The Evaluation Plans give MAs a better overview on what will be delivered (…). Now, this system of gathering results for MS is more systematic compared to the previous period.” “The obligation to prepare evaluation plans has forced member states to think about evaluation in more strategic terms. Now we have to think what kind of data we will need and what kind of questions we will ask. It’s a very different way of thinking.” (IDI_EC). The EC provides support to MAs in preparing evaluation plans, giving its comments on the scope and methodology proposed in the plan (for example, it has an agreement with the Joint Research Centre to provide methodological support for developing counterfactual evaluations). It should be stressed that the regulations on evaluation plans are not very tight, leaving space for managing authorities to decide on the timing, scope and methodology of the evaluations conducted. Hence, MSs may decide to be more ambitious and go beyond what is required by the Regulation; the EC encourages them to do so.

Providing a synthesis of the findings of all evaluations conducted during the previous year. In the Annual Implementation Reports, the MAs synthesize and present the conclusions of all evaluations of the program that became available during the previous financial year, pointing out issues that may influence the implementation of the OP (Article 50 CPR). Measures planned in response to any such issues should be included too. This practice “gives more meaning to evaluations” (IDI_EC). The EC must deliver the synthesis of evaluation findings to the main EU institutions: the European Parliament, the Council of the EU, the European Economic and Social Committee and the Committee of the Regions.

Discussing evaluation results. A debate on evaluation results must be included in the agenda of Program Monitoring Committee meetings. Based on the presented findings, the EC may raise questions, asking for clarification or - where relevant - for identification of follow-up actions that should be adopted regarding the conclusions drawn. Generally, it should be noted that the Program Monitoring Committee for 2014-2020 is more involved in evaluation issues, as it has to approve the Evaluation Plans, discuss evaluations results as well as plan and control the follow-up process.

Making all evaluation results available to the public. All evaluations must be made public, preferably on webpages including interactive electronic maps with information about projects and beneficiaries. ”Every evaluation has to be published, which might make a quality control instrument of it. Now, when evaluators know that their reports are going to be published, maybe they will care more about the quality of the work they deliver.” (IDI_EC)

In addition, in the 2014-2020 implementation period, the EC has adopted a number of measures and initiated a series of practices aimed at increasing the efficiency of the CP evaluation system by raising the quality of evaluation studies and the utility of their findings. In this respect it is necessary to highlight:

The Evaluation Helpdesk. The EC declares that MSs are provided with guidance (e.g. the Better Regulation Toolbox), expert support, on-line tutorials and various other learning events (for instance, evaluation summer schools) where public officers are given instruction on best practices in the evaluation field. For example, a two-day extensive course on drafting quality terms of reference was organized last summer. Great emphasis is placed on methodological support for conducting impact evaluations: “Not all member states were familiar with impact evaluation practice so the Commission decided to provide as much help as possible” (IDI_EC). This kind of support may take the form of “one to one” help – in the case of problems, challenges or other issues which emerge when implementing an Evaluation Plan, the EC provides an expert on the spot who guides the MS and assists in finding appropriate solutions. Face-to-face meetings with MAs, in which evaluation issues are discussed, take place 2-3 times a year. However, MAs may at any time ask the EC for clarifications or explanations regarding legal documents, request support in preparing terms of reference, consult on the methodology to be adopted within an evaluation study (especially impact evaluations) or seek advice in the case of any other doubts or problems that may arise. The EC often brings in external experts to provide professional help to the MS. Since 2013, the EC, with the support of the Centre for Research on Impact Evaluation (CRIE), has been fostering the use of counterfactual impact evaluations by MSs through numerous activities (e.g. pilot projects to carry out ESF-related counterfactual impact evaluations, VP 2013/015). The EC conducts peer reviews of selected evaluations and gives feedback (sometimes the feedback is negative and the MA receives information that the evaluation failed to bring any reliable or relevant evidence). The goal of this practice is to raise the quality of studies commissioned by MAs.

Promotion of good practices. The EC disseminates the progress made in evaluation at different events (e.g. biannual evaluation conferences, the European Week of Regions). In 2016, an evaluation conference, “The result orientation: Cohesion Policy at work”, was organized in Sofia. It was a great success; evaluation results on the performance of CP in different countries were presented, and there was lively discussion on methods, challenges, etc. To promote evaluation culture, the first evaluation competition was launched at the conference. The purpose of the contest was to share experience and best practices as well as to motivate countries and make them feel appreciated for, and satisfied with, their work. Ninety proposals from 18 Member States were submitted in the categories of best completed evaluation and best evaluation proposal. A special committee assessed the entries against two criteria: quality of methodology and practical relevance.

Community of practitioners (Evaluation Network). Evaluation practitioners and experts (i.e. representatives from the governmental evaluation committees of the MSs) meet once or twice a year to discuss interesting evaluations and challenges, propose solutions and share good practices. This serves to disseminate what is being done in the member states. A repository for the findings presented at EC-level evaluation gatherings is planned. For this purpose, Member States are being encouraged to deliver English executive summaries of the evaluations conducted.


The evaluation staff working document (SWD). Introduced for 2014-2020, these documents provide an overall picture of, and information on, the results of CP implementation in different countries. The SWD summarizes and presents the final results of the evaluation process. Completing the evaluation Staff Working Document should boost discussion, identify appropriate follow-up actions and feed the evaluation findings into the next cycle of decision-making.

Reflections on the CP evaluation system and main challenges

The direction of change set out by the legal framework for 2014-2020 is promising: the greater emphasis placed on impact evaluations should ultimately increase the number of studies providing knowledge to improve the effectiveness of interventions under Cohesion Policy.

Generally, in the opinion of the EC, the quality of evaluation reports provided by MS has been improving over the years. In 2005, there was no evaluation capacity in the new MSs, and no clear concept of the evaluation function and practices. Since then, there has been considerable progress. However, the quality of reports and their findings still varies significantly, as both poor-quality and high-quality evaluations have been delivered recently by V4+4 countries. Smaller countries, or countries with less experience in implementing CP, still struggle with the supply side of evaluations and face limitations due to the size of their administrations. It must be stressed that the evaluation culture within the EC has also improved.

The CP evaluation system is not perceived as overregulated. The obligations laid down by the Regulation provide a basis for ensuring structured and coherent evaluation actions in the MSs. Given the proactive attitude of the EC, its openness to dialogue and its commitment to providing the necessary support, significant improvements are likely to be made in delivering quality evaluations, and their findings will have greater utility for the various stakeholders at the regional, national and EU levels. However, for the CP stage currently being implemented, it is still too early to assess the effectiveness of the modified system. Impact evaluations proposed in the Plans can only be executed some time after the funds have been implemented, so an overall picture of the effectiveness of the modified system will emerge only with time. Thus, discussion of any further changes should be postponed until it is possible to accurately determine and assess the effectiveness of the solutions introduced for the period 2014-2020.

Multiplying compulsory tasks is not considered an appropriate strategy for strengthening CP evaluation systems. The role of the EC is seen as a supporter and promoter of good practices, rather than an institution enforcing mandatory solutions. This conviction was shared by almost all respondents, who stated that: “the EC cannot regulate too much: evaluation is about thinking, critically assessing what has been done, scrutinizing how to improve public interventions”(…) “Evaluation is supposed to spur thinking, rethinking, reflecting, so if you impose too many regulations, the managing authority stops thinking and everything starts to be done mechanically”. “Insufficient guidance or regulation is not the barrier for better evaluation use in member states”. “Standardization and regulations are more needed in the case of monitoring. Evaluation is not a field that should be overregulated.” (IDI_EC)


There is a common belief that further improvements in evaluation practice are needed. “Each country should think of things that don’t work: what hampers the development of the evaluation system, develop the supply side of evaluators etc.”

“Evaluation must be owned and lived by MSs – there is no point in developing a culture that is based on regulations. In the case of some countries, (…) the challenge is about improving particular elements of an evaluation system which has already been set up and is assessed as a good working system.” (IDI_EC). However, the EC continues to undertake efforts on promotion, sharing best practices, providing training and clear guidance to support the development of evaluation cultures in V4+4 countries.

The effectiveness of the evaluation systems in V4+4 countries depends largely on the human factor as well as on the administrative system of a particular country. As was pointed out in the interviews, no "golden" measure, no universal institutional solution exists that can be successfully implemented in all MSs. “The fact that Poland, for example, has developed an effective evaluation system - mainly thanks to the very strong role of the central evaluation unit, which provides additional instruments to evaluation in the regions - does not mean that it is replicable elsewhere. (…) And it is not a matter of the size of a country. Just to give an example, look at the Baltic countries: their work and commitment in evaluation may be perceived as outstanding to some extent.” (IDI_EC). However, MSs may reflect on the organization of the evaluation system they adopt. For example, monitoring and evaluation go hand-in-hand and are complementary; hence, organizational solutions ensuring the most effective cooperation between units involved in monitoring and evaluation should be introduced. The introduction of evaluation units in MAs may be considered good practice. MSs could also - on their own initiative - further foster the quality of evaluations delivered by setting up an equivalent of the EC Regulatory Scrutiny Board.

One of the greater challenges for the better use of evaluation findings in evidence-based policy is the attitude of politicians and public servants (mostly senior civil servants) in MSs: “(…) if they stick to the idea that evaluation is just an EU obligation, then the evaluation reports will be of a poor quality or they won’t even be using the results of decent studies”. The role of politicians and top officers in stimulating activities focused on knowledge gathering remains crucial for the further development of evaluation culture in V4+4 countries. However, even in a favorable political and institutional context, the development of an evaluation culture takes time: “You have to produce a lot of even poorer quality evaluations at the beginning, to be able to build your potential to produce high quality evaluations” (IDI_EC).

An issue disfavoring the effectiveness of the CP evaluation system is linked to policy life cycle timing, as the evidence from ex post evaluations becomes available at a time when it can no longer be used in discussion on the shape of the subsequent programming period. This causes a great loss of valuable information which could feed and enrich the discussion. A regulatory solution in this matter might be to formalize the use of evaluation results from the previous perspective in the preparation of ex ante evaluations for the next period (i.e. the results of the 2000-06 ex post evaluations feed the ex ante evaluation for 2014-2020). This has already been practiced in an informal way, which is why evaluation findings constitute one of the sources of information and evidence gathering. Other analyses, including evaluations commissioned by the EC, monitoring data and the Open Data Platform, among others, also provide valuable sources of information about CP effects. It might be worth conducting small-scale activities (such as pilot interventions) that could be evaluated in a short time period.

1.3 THE SCALE AND ORGANIZATION OF SYSTEMS

The scale of CP transfers, as well as the systems adopted for implementing them, vary among the V4+4 countries. In 2014-2020, Poland remains the biggest beneficiary of CP funds with an allocation of 77.6 bn euros, whilst Slovenia implements only one OP, co-financed with 3.3 bn euros from the EU budget. The Czech Republic, Romania and Hungary form the most homogeneous group in terms of the amount of funds allocated for the implementation of OPs within the CP.

Table 1 An overview of CP implementation systems

Country          Total EU allocation for CP 2014-2020 (bn EUR)[2]   Number of Operational Programs[3]   Number of evaluation units in the system[4]
Bulgaria         7.57                                               7                                   7
Croatia          8.61                                               2                                   5
Czech Republic   22                                                 8                                   12
Hungary          21.9                                               7                                   1
Poland           77.6                                               22*                                 34
Romania          22.4                                               6                                   3
Slovakia         14                                                 7                                   15
Slovenia         3.3                                                1                                   1

* European Territorial Cooperation OP not included

Source: Own elaboration based on information included in Partnership Agreements, Evaluation Plans and information provided by Central Evaluation Units

In addition to the significant differences in allocations, there is also considerable variation in the implementation systems adopted under the Cohesion Policy in V4+4 countries. This includes evaluation systems, which differ remarkably in terms of the degree of centralization as well as in the number and potential of the units involved in the evaluation process. However, neither the size of the total allocation of CP funds nor the number of OPs implemented can be accepted as variables explaining the degree of centralization of the evaluation system, as countries with similar levels of EU funding or a similar number of OPs (i.e. the Czech Republic, Hungary and Romania, or Slovenia and Croatia) have adopted quite different institutional arrangements within their CP evaluation system.

[2] This includes funds allocated within ERDF, ESF and CF (including resources for the Youth Employment Initiative and territorial cooperation).
[3] Excluding programs co-financed by EAFRD and by EMFF.
[4] Determined on the basis of information provided by national NODs for the purpose of conducting the CAWI. By “evaluation unit” we mean not only an institutionally separated entity but also a person or persons who deal with evaluation within a CP implementation system.

Figure 2 Centralization of evaluation systems in V4+4 countries with regard to the number of OPs implemented

[Figure: the eight countries (Bulgaria, Croatia, Czech Republic, Hungary, Poland, Romania, Slovakia, Slovenia) positioned along an axis from CENTRALIZED to DECENTRALIZED]

A centralized evaluation system is a system in which the evaluation unit or units are located solely in a central institution responsible for coordinating the implementation of CP funds. This system was adopted in Hungary, Slovenia and Romania. It should be emphasized that while in Romania centralization is a new solution compared to the arrangements used in the period 2007-2013, Hungary and Slovenia have been implementing the strategy of a strong central evaluation unit for more than a decade.

In Hungary, the central evaluation unit is organized by the Prime Minister's Office and has exclusive competences relating to the evaluation of all 7 OPs financed by ESF, ERDF, and CF. Unlike in the 2007-2013 period, when Hungarian MAs financed evaluations from their Technical Assistance budget, in the current perspective it is the central evaluation unit that manages these funds. The central evaluation unit is responsible both for the development of the evaluation plan and for the subsequent commissioning of evaluation studies. The unit provides evaluation services for MAs: the evaluation plan is developed in cooperation with MAs, so its content takes into account the needs which the MAs bring forward. In MAs, the functions regarding evaluation are assigned to departments primarily responsible for monitoring OPs. These units may also carry out additional research, external expertise and analysis at their own expense (using national funds), as they do not have a dedicated budget for analyses within Technical Assistance.

An interesting case is Romania, which - with the launch of the 2014-2020 perspective - decided to centralize the CP evaluation system. The seven evaluation units, operating in MAs in the 2007-2013 period and coordinated by a central unit located in the Ministry of European Funds, were merged for 2014-2020 into a single structure: the Program Evaluation Unit. This change was part of broader shifts in CP implementation in Romania, by which most of the authorities were merged into a single Ministry of Regional Development, Public Administration and European Funds. This central evaluation unit is divided into three sub-units: (1) the Evaluation Unit for the Regional OP; (2) the Evaluation Unit for the Administrative Capacity Development OP; and (3) the Program Evaluation Unit (formerly the Evaluation Central Unit), which deals with evaluations of the remaining OPs implemented under the CP in Romania.

In Slovenia, the centralization of the system is determined by the scale of available CP funds, which co-finance only one operational program – the Implementation of EU Cohesion Policy OP. Hence, within the institutional structure of the Government Office for Development and European Cohesion Policy, evaluation-related tasks are performed by just two persons.

A decentralized evaluation system is a system in which the evaluation unit or units are located in MAs or – in the case of an extensive OP implementation system – in intermediate bodies. In a decentralized system, an institution playing a coordinating role for the evaluation units may operate; however, the scope of its coordination activities differs from system to system. A decentralized system may also operate without the help of a coordinating institution – in such cases evaluation units carry out their responsibilities on their own and in direct contact with the European Commission.

This kind of system has been adopted in Poland, which is consistent with the decentralized CP implementation system and the administrative structure of the country in general. It is worth pointing out that, in addition to the evaluation units located within MAs (both national and regional), evaluation units also operate within intermediate bodies. The total number of evaluation units in the CP implementation system amounts to 34.

Bulgaria, the Czech Republic and Slovakia - countries with significantly lower levels of funding under the CP - also operate decentralized CP evaluation systems. In these countries, individual MAs have adopted different arrangements for the organization of work and the location of evaluation units in their structure. In some cases evaluation units are separate entities which deal solely with evaluation-related issues (e.g. within OP implementation in the Czech Republic), but in most cases they are not separate entities devoted solely to the evaluation of OPs – instead, evaluation-related work is carried out by staff also engaged in other tasks.

A decentralized system was also adopted in Croatia, although no coordinating entity for the evaluation units located in MAs has been established so far. It should be stressed that, because the country only joined the EU in 2013, the evaluation system (as well as the whole implementation system) is still in the process of formation, and the unstable political context (three changes of government in four years) has not helped in this matter. However, the introduction of a central evaluation unit, whose role would be to coordinate the evaluation units within MAs, is planned for the forthcoming months.

The role played by central evaluation units in decentralized systems varies from country to country. The differences relate mostly to the extent of engagement in the activities undertaken at the level of the evaluation units operating within MAs. These functions may be limited to providing information and ensuring conformity with EC requirements, including timetables and deadlines, leaving the MAs free to decide on evaluations, the dissemination of results, the organization of follow-up processes, etc. (as in Bulgaria). The central evaluation units in Poland, the Czech Republic and Slovakia play significant roles in supporting the activities run by the evaluation units placed in MAs (by organizing working group meetings and evaluation conferences, providing training, sharing knowledge, and assisting in the elaboration of TORs, public tenders and evaluation plans, or with the organization of evaluation studies). In Poland, the central evaluation unit also draws up standards and recommendations for national evaluation documents and provides a range of tools which oblige MAs to follow specific requirements for individual institutions in the system. In the Czech Republic and Slovakia, NODs act as regulators only in minor ways; instead, they support the evaluators placed in MAs, treating them as "clients" (IDI CZE_UNIT). Among all the V4+4 countries that have adopted decentralized evaluation systems, the Polish central evaluation unit performs the broadest scope of duties, as it was given the competences of a coordinating center for the evaluation of all national development policies.

Table 2 Types of evaluation systems adopted in V4+4 countries

MODEL 1: DECENTRALIZED WITH STRONG ROLE OF CENTRAL COORDINATION UNIT (Czech Republic, Poland, Slovakia)
Central evaluation unit activities:
- Active coordination of the performance of evaluation units placed in MAs
- Conducting own evaluations
- Undertaking a broad range of activities strengthening evaluation culture (publications, conferences, organizing working groups, spreading the results of evaluations, etc.)

MODEL 2: DECENTRALIZED WITH CONTROLLING AND INFORMATIVE FUNCTION OF CENTRAL EVALUATION UNIT (Bulgaria)
Central evaluation unit activities:
- Coordination of the work of evaluation units placed in MAs
- Controlling timetables and the fulfillment of EC requirements

MODEL 3: DECENTRALIZED, LACK OF CENTRAL EVALUATION UNIT (Croatia)
Central evaluation unit activities: none

MODEL 4: CENTRALIZED (Hungary, Romania, Slovenia)
Central evaluation unit activities:
- Preparation of an evaluation plan covering the knowledge needs of MAs
- Performing evaluations

Source: Own research

1.4 BROKERS' ROLE IN THE SYSTEM

Figure 3 Tasks executed by evaluation units in V4+4 countries

[Bar chart: for each country (BG, CZ, HR, HU, PL, RO, SI, SK) and for the simple and weighted averages (AVR, W.AVR), the share of units answering "Yes, evaluation is the only task of our unit" versus "No, our unit also has other tasks"]

Source: Own research

In V4+4 countries, on average one fifth of the existing evaluation units deal exclusively with evaluation. The largest shares of units dealing exclusively with evaluation are found in Romania and the Czech Republic. In Slovenia, Hungary, Croatia and Bulgaria, the scope of duties of all evaluation units goes beyond tasks strictly related to the provision of evaluations.

Figure 4 Share of working time spent on evaluation by the evaluation units in V4+4 countries

[Bar chart: the share of working time (0-100%) spent on evaluation, by country (BG, CZ, HR, HU, PL, RO, SI, SK) and for the simple and weighted averages, showing median and average values]

Source: Own research

The greatest average share of working time devoted to evaluation-related issues is found among the staff of the evaluation units in Romania, which is understandable since – as mentioned above – in the current implementation period a central evaluation unit was introduced which, within its three sub-units, is responsible for the evaluation of all funds deployed under the CP. The remaining working time of the unit is devoted to monitoring and other types of analytical activity.

In Hungary, a country which adopted a similar evaluation system, the estimated time devoted to evaluation-related issues amounts to 70%. The other responsibilities of the Hungarian evaluation unit involve conducting different types of analytical work (e.g. expert studies, reviews, regulatory impact assessments), programming, and information and communication.

Respondents in Bulgaria declared that a relatively small part of their time is devoted to evaluation. The Bulgarian evaluation units located in MAs are mostly part of units responsible for programming, monitoring, and information and communication. They are also engaged in collecting and analyzing data from the monitoring system as well as from other sources. For example, the evaluation unit in the MA responsible for the implementation of the Good Governance OP does not commission many evaluations and instead gathers knowledge from different sources, among which the Integrated Information System of the civil service should be mentioned. The data stored in the system is collected via a standardized questionnaire (on civil servants) filled in each year by different institutions. The Bulgarian Central Evaluation Unit provides data on CP impact with the use of the SEBILA econometric model.

Figure 5 Other tasks executed by the evaluation units in V4+4 countries

[Bar chart: the share of units performing other tasks – other analytical work, monitoring, programming, implementation, information and communication, and other]

Source: Own research

In the Czech Republic, Hungary, Romania and Slovenia, program evaluation and implementation functions have been clearly separated, so the evaluation units are not engaged in activities such as project selection, controls, payments and certification. In the opinion of some respondents, separating implementation from evaluation may cause a problem of "remoteness" from MAs – which are the main recipients of evaluation results as well as the institutions most familiar with OP content. Information needs may not be raised and communicated to the central unit for fear of being perceived as ineffective. Hence, there is a need to ensure close cooperation between the institutions managing the OPs and those that deal with their evaluation (IDI_HUN_unit).

In the countries under study, evaluation staff are involved to a lesser extent (in the Czech Republic and Poland) or a greater extent (in Bulgaria, Croatia, Slovakia and Hungary) in the programming process, that is, setting strategy, defining indicators, etc. The programming and evaluation functions are clearly separated only in Romania. The Romanian Program Evaluation Unit, apart from executing all functions related to evaluation, is focused on conducting a variety of analyses, partially dealing with program monitoring.

The central evaluation unit in Hungary, in addition to evaluation, to which it devotes more than 70% of its time, is also responsible for the implementation of the OPs, performs other types of analytical work (e.g. expert studies, analyses, reviews, regulatory impact assessments), and deals with information and communication (e.g. with the managing authorities for which it provides evaluation services, or with other institutions among which evaluation results are disseminated). Working time is divided between evaluation and programming tasks, and the time spent on each depends on the stage of CP implementation (in the early stage, staff focus on programming and the preparation of ex ante evaluations; they then move on to preparing the evaluation plan).

In the case of Slovenia, the “mix of competencies” assigned to the evaluation unit is not just about combining evaluation functions with other tasks related to OP implementation, namely programming. The department employs two people, who are also involved in preparing the National Development Strategy for the government.

In most countries, the scope of duties to which evaluation units devote their time includes tasks focused on the provision of other types of analytical work (e.g. expert studies, analyses, reviews, regulatory impact assessments) and (except for Hungary) monitoring-related issues.

When interpreting the results presented in this study, it should be taken into account that the research covers the period from 2015 to the present. In this period, evaluation activities were mainly related to the preparation of evaluation plans. In addition, in the current perspective the EC has put significant emphasis on the development of impact evaluations, but these can only be performed after some time has elapsed since the implementation of the OPs began.

2 DETAILED ACTIVITIES OF KNOWLEDGE BROKERS

Knowledge brokering within the system of Cohesion Policy evaluation requires several processes to take place. The main three are sequential and deeply ingrained in the policy cycle of a particular policy. The other three are continuous processes of developing an evidence-based culture which is capable of producing timely, credible evaluations with clear recommendations to support policy decision making (Olejniczak, K., Raimondo, E., & Kupiec, T., 2016).

Figure 6 Knowledge brokering processes

[Diagram: a cycle of three sequential processes – (1) identifying knowledge needs, (2) acquiring knowledge, (3) feeding knowledge to users (dissemination) – supported by three continuous processes: accumulating knowledge over time, building networks with producers and users, and promoting an evidence-based culture]

Source: Own elaboration based on Olejniczak, K., Raimondo, E., & Kupiec, T., 2016

The remainder of this chapter analyzes how each of these processes is developed in the countries studied.

2.1 IDENTIFYING THE KNOWLEDGE NEEDS OF USERS

The key skill for knowledge brokers at this stage is identifying the knowledge needs of users and critically assessing them in terms of data availability, the credibility of sources and the priority of each knowledge need. The units usually create smaller or larger networks of users within the CP system – MAs and intermediate bodies which meet in a Steering Group. Brokers circulate evaluation plans, monitor their appropriateness for the implementation process and express the need for understanding impact. Generally, the units' role is then to translate ideas, intuitions and common questions into research questions that are sufficiently clear and possible to answer with the information available.

Figure 7 Methods for obtaining knowledge needs by evaluation units

[Bar chart: the frequency (5 - Very Often to 1 - Never) with which units: wait for other units in their institution to express needs; participate in various meetings in their institution; follow ongoing scientific and expert debates; observe the programme implementation process; cooperate with other units in the process of preparing the final study; translate proposals submitted by other units into the language of research contractors]

Source: Own research

The CAWI has shown that the most common means of identifying information needs is simply following the implementation process (74% of unit employees responded that they use this source often or very often). To a large extent, MAs share their needs on the basis of monitoring data. The second most popular way to detect needs is cooperation with other units in the process of preparing the final tender – 67% noted that they use this sort of cooperation often or very often. 58% of units also often assess and prioritize information needs, and 28% admit they do so sometimes. The least common strategy is to formulate written inquiries (more than half of the units never or rarely use such formal means), but units also rarely remain passive and wait for others to express their needs first (45% of respondents never or rarely do so).

The majority of the interviewees responded that they rely heavily on the evaluation plan when identifying the information needs which evaluation can satisfy. That is because the process of shaping the final plan is often time-consuming, involves extensive consultation with participants and is rich in negotiations. In other cases, the evaluation plan is prepared by an external evaluator as a product of the ex ante evaluation, and the number of evaluation studies in the plan reflects the number of specific objectives in the Operational Programmes. This sort of plan also needs to be discussed in Working Groups and accepted by Steering Groups. Some plans are later reshaped due to delays in implementation, which make some of the planned studies simply outdated.

The units indicate that suggestions for changes and ideas for ad hoc evaluations should come first from the Managing Authorities. Despite the fact that the units are structurally outside the MAs in the 2014-2020 period, they still perceive themselves as primarily serving MAs' needs. Much of the demand for evaluative information comes from the monitoring data for which the Managing Authorities are responsible.

While there are some rare examples of units with long-serving staff who have enough experience to predict what sort of information might be needed at upcoming stages of the policy cycle (Poland), some younger units admit that they still have problems being explicit about what sort of evidence evaluation studies should provide.

Middle management's understanding of the timing of an evaluation study – so that its contribution can be planned for a beneficial stage of policy development – is crucial to its appropriate use. Some evaluations, even at the ex post level, come much too late for their recommendations to be used.

Besides implementing and controlling evaluation plans, the units tend to be asked for short, rapid answers to the burning problems of decision makers or other departments, to which they often cannot respond appropriately due to limited staff and time (and occasionally a lack of appropriate data). In such cases they try to give some sort of answer, but their potential for performing this role needs to be strengthened.

Our findings provide the following implications for evaluation practice: evaluation units need to work on educating their potential users - departmental and ministerial decision makers - on how evaluation can support their work. The examples identified in this research show that where decision makers were involved in the evaluation process and understood how they could use the results, the evaluative knowledge was widely disseminated, translated into understandable and convincing language for other high-level staff and therefore contributed to evidence-based decisions.

Thus, the personal involvement of decision makers seems crucial to conveying the evaluative message. Evaluation units can help ensure that quality evaluations providing reliable evidence are prepared, but they cannot show users how to benefit from them.

2.2 ACQUIRING KNOWLEDGE

The tasks of knowledge brokers at this stage involve assessing the available knowledge sources and the spectrum of knowledge gaps, choosing credible producers and assessing the quality of their work. The quality assurance stage is crucial because only credible evidence is worth conveying to decision makers, who then decide on its usability. To a smaller extent, acquiring knowledge might also mean that knowledge brokers directly answer more rapid requests from management themselves.

Figure 8 Methods for acquiring knowledge by evaluation units

[Bar chart: the frequency (5 - Very Often to 1 - Never) with which units: contract out studies; conduct studies themselves; conduct systematic reviews; conduct rapid reviews; process and interpret monitoring data; formulate short inquiries to experts; process and analyse public statistics; verify information needs against available knowledge; involve users in the research process; hire external experts to review the quality of studies; have internal procedures to verify the quality of studies]

Source: Own research

Since the current evaluation system primarily feeds the implementation process, units depend heavily on contracted process-oriented studies (68%) and monitoring data (67% use them often or very often), and over half of the units fairly often verify the quality of the studies (54%).

To widen the horizon of evaluation influence, units often or very often involve users in the evaluation process – primarily members of Steering Groups, representatives of MAs and other groups of stakeholders (58%).

Subsequently, the availability of current knowledge is assessed (49%) and public data and national statistics are scanned (43%) to find rapid answers to the most pressing questions – those that cannot suffer delays or wait for an entire evaluation process to run its course.

Internal evaluations are conducted very rarely (82% of units responded that they never, rarely or only occasionally conduct such research). Moreover, for these few internal evaluations, external experts are hardly ever employed to review their quality (46% never do so, and another 36% do so rarely or only occasionally). Units equally rarely conduct rapid reviews (79% do so rarely). Systematic reviews and meta-analyses are also infrequent (78% responded never, rarely or occasionally), appearing less often than the need for operational knowledge accumulation would suggest.

The interviews we conducted show that only a small percentage of evaluation knowledge is produced internally; examples are scarce for both programming periods. The only exception is Czechia, where 50% of research is produced in-house by the National Evaluation Unit, but this is a new phenomenon under recent management: the number of staff and other resources has significantly increased in the last few years, so the unit can devote more time to independently seeking evidence rather than only contracting studies out.

External evaluation contracts have many limitations. Recently, procurement law in many countries has undergone major reorganization in accordance with European law, in some countries completely revolutionizing the legal environment of evaluation (Romania). As a result, many planned studies have suffered significant delays, and updates of evaluation plans have had to be sent to the European Commission. In some cases the delays have completely stalled the implementation of the evaluation plan, so that no evaluation studies have yet been conducted in this programming period (e.g. Croatia).

The number of evaluations produced in each country varies vastly – from Poland, where hundreds of evaluations are produced annually, through Czechia, where 135 studies were produced in the last programming period, to Slovenia, where only 15 studies were conducted in the same time span. It is natural that the level of demand shapes the supply. In the Balkans, where the markets are particularly unstable, major international companies have moved in and formed collaborations for the purpose of winning tenders for particular studies, pushing out local service providers that cannot compete in terms of experience requirements. Some of these markets consist almost entirely of international giants employing local experts.

Nevertheless, even these major players compete vigorously for a contract when the opportunity appears only 1-3 times a year, which can cause many months of delay before agreement is reached among all participants (e.g. Romania). For this reason, some countries choose to enter into multiannual framework contracts instead of repeating the procedure with every new study (e.g. Slovakia). Unfortunately, such schemes tie a long-term assessment to a single, limited perspective. Moreover, when contracts are won mainly on the basis of price (Slovakia), this can lead to substantial quality degradation by the major players on the market.

In the majority of cases, the selection criteria focus on three main areas: methodology (with additional points for supplementary methods), the experience of the team, and the price of the study. Except for extreme cases such as those already mentioned in Slovakia, the selection schemes are based on a quality-to-price rule of 50/50 or 60/40, where price is the less relevant component. In Croatia there are examples of tenders based on a rule of 50% methodology, 30% team and 20% price. In all of these cases, however, suppressing the weight of the price criterion required a long negotiation process with procurement law experts.
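
As a simple illustration of how such weighting schemes work, the sketch below computes a composite tender score under the 50/30/20 rule mentioned for Croatia. It is a minimal, hypothetical example: the 0-100 scoring scale, the proportional price-to-points conversion and the sample bids are our assumptions, not rules taken from any of the national systems described here.

    # Hypothetical illustration of a 50/30/20 tender scoring rule.
    # Assumes methodology and team are already scored on a 0-100 scale.

    def price_points(bid, lowest_bid, max_points=100):
        # A common proportional rule (an assumption here): the cheapest
        # bid gets full points, more expensive bids proportionally fewer.
        return max_points * lowest_bid / bid

    def tender_score(methodology, team, price_pts, weights=(0.5, 0.3, 0.2)):
        # Weighted sum of the three criteria.
        w_meth, w_team, w_price = weights
        return w_meth * methodology + w_team * team + w_price * price_pts

    # Two illustrative bids: (methodology, team, price in EUR)
    bids = {"Bidder A": (80, 70, 120_000), "Bidder B": (60, 90, 90_000)}
    lowest = min(price for _, _, price in bids.values())
    for name, (meth, team, price) in bids.items():
        score = tender_score(meth, team, price_points(price, lowest))
        print(name, round(score, 1))  # Bidder A: 76.0, Bidder B: 77.0

Under these illustrative numbers, the cheaper bid with the stronger team narrowly wins despite its weaker methodology – exactly the kind of outcome that the negotiations over criterion weights described above try to keep in check.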

To a large extent, opinions on the credibility of the evidence created depend on the staff member's experience and expectations regarding what products should be provided, his or her trust in and openness towards the particular researchers, and many other contextual factors. Numerous respondents expressed the opinion that the majority of the evaluation products in their country still do not satisfy their needs. On the other hand, some unit representatives admit that the quality of evaluation products depends on them clearly expressing their requests as well as supporting and controlling the process of building up the evidence.

Opinions about the quality of studies are often independent of formal or informal quality assessments. Different tools exist, such as the checklists provided by Evalsed or nationally created quality criteria, co-created with a Managing Authority or prepared by Steering Groups. In some countries there are also checklists for methodology (inception reports) (e.g. Romania). The common opinion, however, is that the quality of studies should still be improved. The main problematic element is the weakness of the logic linking evidence, conclusions and recommendations – the last usually being vaguely written and not providing enough guidance on what should be corrected within the evaluand.

Based on the collected material, we can tentatively state that although the units are well resourced to contract out external evaluations (the majority of them process-oriented), they do not have sufficient resources (including available staff time), and therefore aspirations, to absorb, digest and translate other types of knowledge for decision makers. Changes in the context of contracting studies and the processing of tenders often absorb most of the units' energy. We believe this could be changed by common reflection on strategic knowledge building based on the UTILE approach we recommend.

2.3 DISSEMINATING KNOWLEDGE

After the credibility of evidence has been assessed, the results should be disseminated and decision makers convinced to use them. For this purpose, knowledge brokers must use appropriate communication strategies, tailoring the message to different profiles of users.

In the majority of cases, this is done in a routine way – e.g. by publishing full reports on program websites, sending reports to closely cooperating units and executive summaries to directors, and asking contractors to present the findings at meetings of stakeholders (the Monitoring Committee, Steering Groups and Working Groups). Although evidence usually reaches first-hand users successfully, this is often not enough for it to be used by high-level decision makers. As a result, there is a risk that evaluation unit employees come to see less impact, and therefore value, in evaluations and become less motivated and creative at work.

Figure 9 Ways of disseminating knowledge used by evaluation units

[Bar chart: the frequency (5 - Very Often to 1 - Never) with which units: send reports by email to intended users; publish the results of their studies online; publish printed versions of their reports; provide executive summaries and memos; provide posters; encourage media to publish press articles; prepare video presentations and animations; prepare infographics; share information through social media; facilitate presentations of findings by study contractors for intended users; organize conferences for a wider audience (more than 5 institutions); use opinion leaders to disseminate conclusions from their studies; meet personally with intended users to discuss conclusions from their studies]

Source: Own research

Naturally, the most popular method of dissemination is uploading the report to the OP website (85% of units do so very often or often), as well as sending the findings to a customary network of users in the form of full reports (71%) or summaries and memos (65%). A little more than half of respondents (54%) require a presentation of results by contractors, and even fewer (39%) meet users of evaluation to discuss the results of the research.

It is also worth noting that representatives of the units decisively responded that they never use certain tools – for example, 65% responded that they never use social media to disseminate results, 65% never prepare short films or animations about the results, and more than half (57%) never use posters.

Knowledge is therefore mostly conveyed through the complicated language of reports and expert presentations in a narrow circle of its direct users.

One of the most crucial factors perceived by interviewees is engaging opinion leaders (the so-called "champions of evaluation" – managers in high positions who make use of evidence and influence others) in the dissemination of results. However, less than one fifth (19%) have ever managed to convince such people to get involved in feeding the knowledge into the policy cycle.

Interestingly, only slightly over one third of the units (36%) organize conferences for a wider audience very often, often or even sometimes.

Figure 10 Strategies of dissemination used by evaluation units

[Bar chart: the frequency (5 - Very Often to 1 - Never) with which units: organize discussions among relevant actors, after a study is completed, about the conclusions and how they could be used for public policies; prepare the dissemination strategy before the study starts; use different communication tools and channels for different types of intended users; adjust study timing to deliver knowledge right on time]

Source: Own research

Most respondents stated that they often or very often match the timing of outsourced research to the information needs of their users within a certain timeframe. This is good news. Only 7% of them never take care of the timing of research.

Over one third of the respondents (34%) often organize meetings to discuss the results and their use with the institutions to which the recommendations are addressed. Altogether, half of the respondents organize such meetings for most studies.

It is worth pointing out, however, that more than half of the respondents never, rarely or very rarely use a variety of communication tools (60%) or plan dissemination strategies in advance (22% never do so and 16% do so rarely). There are some instances of studies for which dissemination is planned ahead, and these can be viewed as glorious exceptions.

Consequently, the dissemination of results is usually treated as routine, without creative thought about how best to deliver the results to recipients; the more time-consuming, tailored approach is used only for selected research.

The interviews provided us with more insights into specific, effective strategies and practices employed across the countries. In Bulgaria, a supervisory body composed of policymaking directorates was introduced; these directorates are engaged in the whole evaluation process, and it is their responsibility to present the final conclusions and recommendations to their superiors – the ministers or the deputy prime minister dealing with a certain policy area.

In Czechia, some evaluation units regularly tailor evaluation results to the needs of high-level decision makers, compressing executive summaries into useful briefs. They have also created a synthesis report for the wider public from the 30-40 evaluations of the previous programming period and, for each new evaluation study, an action plan aimed at implementing and following up recommendations. A dilemma appears, however, in monitoring the implementation of recommendations, which is the responsibility of internal evaluators: because of this task they are often perceived as auditors, although it brings more accountability to the majority of institutional stakeholders.

Another good practice, introduced in Poland, is to build up an extensive database of the email addresses of politicians, party cabinets, think tanks, opinion-leading media, etc., and send them regular short briefs on the acquired results. These attempts have not yet proven to bring the expected benefits, but they can be seen as a fundamental form of outreach to potential evaluation users. Taking into account the demanding and pressing tasks of managers, however, as one of the respondents put it, “the best way to convince the high level decision maker about the value of evaluation is to invite him/her on an event out of the capital where (s)he works and require from him/her to read a prepared brief upfront”.

Romania serves as a very interesting example of the use of a professional strategy for disseminating results. One of the evaluation units prepared an impact evaluation of projects from the regional OP which gained a lot of interest among high-level decision makers, the media and the public. This was, however, the effect of the long-term effort of a person educated as a journalist and experienced in the communication strategies of the business sector and the Prime Minister's Office. This background helped her to understand the importance of writing results in widely understandable language and of reaching leverage points such as high-level managers educated abroad in more evaluation-responsive cultures, who understand and can translate the value of evaluation for policy shaping.

The general observation arising from this discussion is that evaluation unit employees often feel trapped in a vicious circle: they want to become more creative and innovative in the methods they use for brokering knowledge from evaluation, but feel constrained by limited resources, little interest from top management and, occasionally, the unsatisfactory quality of research, all of which limit the use of evidence.

2.4 ACCUMULATING KNOWLEDGE

Systematically and accessibly accumulating studies and reviews allows knowledge to be gathered and put into historical perspective, helps to strengthen the evidence, enables approaches to be compared and supports the further development of methodologies, improving overall quality. Without this systematic approach, knowledge becomes scattered and incomparable, efforts start from a similar point every time, and significant amounts of energy and potential for knowledge brokering are wasted. It is therefore important to ensure that studies have an appropriate level of comparability.

Figure 11 Whether the structure of the studies allows future comparisons and syntheses

[Bar chart: answers on a frequency scale (5 - Very Often to 1 - Never), by country (BG, CZ, HR, HU, PL, RO, SI, SK) and for the simple and weighted averages]

Source: Own research

The answers of CAWI respondents in this respect varied vastly. In Romania, 2 out of 3 units ensure comparability very often, and in Hungary all units ensure it very often. In Bulgaria, comparability depends on the individual approach of each unit, while in Croatia 2 out of 4 units often put effort into ensuring future comparisons. In Czechia, over half of the units pay attention to comparability, whereas in Slovakia the studies have the appropriate structure only occasionally, and in Poland more than half of the units admit that this happens rarely.

Figure 12 Units organize discussions to allow the sharing of knowledge and experience among employees of the institution (wider than the unit)

[Bar chart: answers on a frequency scale (5 - Very Often to 1 - Never), by country (BG, CZ, HR, HU, PL, RO, SI, SK) and for the simple and weighted averages]

Source: Own research

The accumulation of knowledge can also result from sharing knowledge and experience with the closest co-workers and directors of departments. However, this happens rarely. While units in Romania fairly commonly meet up with colleagues, the Slovenian unit admits that it does not use this tool (which may be due to the large variety of responsibilities it holds, not only related to evaluation). Similarly, in Croatia such meetings are perceived as being needed only sporadically, and in Bulgaria only 1 in 5 units organizes them. In Poland, too, less than half of the units (31%) use this proactive approach in a systematic way.

This international comparison should, however, take into account the large diversity of structures in which evaluation units operate and the fact that these sorts of meetings might be more useful in some contexts than in others.

Figure 13 Collecting and making available a unit's work in the form of a repository / database / online platform

[Bar chart, by country (BG, CZ, HR, HU, PL, RO, SI, SK) and for the simple and weighted averages, of four answer categories: a repository is available to the general public; a repository is available only to employees of institutions implementing Cohesion Policy; a repository is available only to employees of our institution; we do not collect the results of our studies in a repository]

Source: Own research

One of the obvious ways to share knowledge, including the products of evaluation, is through a repository or database of studies. As shown above, the majority of countries possess such tools – all of the units in Romania and Slovenia confirmed that they use a database, while in Czechia, Slovakia and Poland the majority of studies are stored online. In Croatia, half of the units publish their reports online sometimes (the other half do so more often). Hungary does not publish reports online at all.

The interviews with unit employees show that the best-functioning Evaluation Libraries (online searchable databases with appropriate filtering options) exist in Czechia and Poland. However, cost-effectiveness must be taken into account, since in many other countries the evaluation studies conducted since the 2010s form a body of only 20-30 reports.
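
To make the idea of such a library concrete, the sketch below shows, in miniature, the kind of filtered search these databases offer. It is purely illustrative: the record fields, sample entries and function names are our own assumptions, not the schema of any national system.

    # A toy "evaluation library" with simple filtering, for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Report:
        title: str
        country: str
        year: int
        study_type: str  # e.g. "process" or "impact" (hypothetical field)

    LIBRARY = [
        Report("SME support effects", "PL", 2015, "impact"),
        Report("Application process review", "CZ", 2016, "process"),
    ]

    def search(reports, **filters):
        # Return the reports whose attributes match every given filter.
        return [r for r in reports
                if all(getattr(r, key) == value for key, value in filters.items())]

    print(search(LIBRARY, country="PL", study_type="impact"))

The value of such a tool grows with the size of the collection, which is why, as noted above, it pays off most clearly in countries with hundreds of accumulated studies.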

It is important for knowledge brokers to reflect on possible ways to feed the results of research into the learning systems of institutions, on how to open up forums for knowledge exchange (e.g. by regular meetings on the institutional level, social media management programs) and how to accumulate knowledge in such a way that would make it responsive to critical questions at different moments (e.g. by creating knowledge clinics - Olejniczak, K., Raimondo, E., & Kupiec, T., 2016).

2.5 NETWORKING AND BUILDING AN EVIDENCE CULTURE

Figure 14 Capacity building activities

[Bar chart: the frequency (5 - Very Often to 1 - Never) with which units: participate in relevant conferences; participate in the meetings of national thematic groups; participate in meetings of European networks; are involved in relevant thematic societies; exchange experience with analytical units in other institutions; cooperate with experts and academics (in ways other than contracting out studies); have organized activities (e.g. training, conferences, publications) for employees of their own institution to raise awareness of the necessity of supporting decisions with knowledge (including evaluation) in public management; have organized at least one such activity for other institutions and the general public]

Source: Own research

The main form of networking is participation in relevant national thematic groups and the exchange of experience with other institutions (69% of respondents said they do so often or very often). The second most popular means of networking is participation in conferences; units also typically fund trips abroad to attend conferences or present the results of studies (altogether, 68% of respondents travel to conferences often or very often).

Meanwhile, the least common means of networking is creating one's own events for others (63% never do so or do so rarely). It is also very rare for units to collaborate with researchers and academics other than through contracts (75% never do so, or do so rarely or only sometimes) or to participate in relevant evaluation societies (77% never do so, or do so rarely or only sometimes).

When assessing capacity building activities, it is very important to take into account the current state, structure and size of the evaluation units. Moreover, they often have a relatively high turnover of staff, and finding new employees with basic knowledge of evaluation is problematic in many countries. Employees perceive capacity building activities as very useful for benchmarking their performance, learning new ideas and finding support among people working in the same field. However, a relevant additional factor is the approach of particular heads of units and their supervisors to building employees' knowledge and networks of contacts - in many cases this is perceived as a supplementary activity for which staff should find time outside working hours. In one country, the approach of the NOD unit was very clear: “we are not really interested in developing the evaluation system - as I said, there is no use for it.” Many of the respondents confirmed that resources are available for joining summer schools, conferences or seminars both locally and internationally, although at times they are blocked by bureaucracy or program delays (in the case of initiatives tendered by public procurement). Moreover, the size of the demand side, and therefore the stability of supply, matters when talking about events like conferences or local publications.

Although the number of participants, especially from the supply side, has decreased, the biggest conference in the region has already been organized 12 times in Poland. For a long time, this conference has been the main regional networking event in the field of Cohesion Policy evaluation in CEE. Other countries actively organizing annual regional conferences are Czechia (where Slovakian administrators also very often actively participate) and Hungary.

In Croatia, training is organized on demand (similarly to Slovenia) and usually takes place once a year. However, several years ago a six-month training course for strengthening evaluation capacities was financed from the TA OP (and delivered by an external company), and it is still perceived as having been very helpful educational support for the unit's employees.

In Hungary, the Public Policy Academy is organized 1-3 times a year and deals with a variety of subjects. The aim is to educate a wider circle of administrators at the regional, sectoral and national levels about sectoral policies, but also about the policy cycle and the importance of feeding in evaluation knowledge. In addition, in order to spread evaluation awareness among parliamentarians, the creation of technical parliamentarian groups was proposed, but this has not yet gained enough interest. Hungarian evaluation units also challenge themselves by trying to present the results of every evaluation study at international conferences.

In Poland, the Evaluation Academy, run with internationally renowned practitioners, was one of the first training courses of its kind for public administration employees from different levels of management. Over the years, the quality of the studies has gained regional recognition. In addition, the annual or biannual International Program for Development Evaluation Training takes place in Slovakia with great interest and no recruitment problems.

In Romania, a master's degree in evaluation has existed for several years now.5 Capacity building activities have been built up systematically over the years, and even though the new programming period has brought many structural and legal changes to the evaluation system, the systematic growth of knowledge within the Romanian evaluation unit is perceived positively: “In our first training session, nobody knew anything about counterfactual methodology; it was like Chinese or Russian or I don't know which exotic language. In the third training session, everybody had the same language and the same interests; I remember my colleagues wrote poems about counterfactuals.”

5 Similarly, in Slovenia there is a master's course which includes an evaluation theme, but the quality of the studies is not highly valued in the evaluation community.

Although in many of the Balkan countries there are a few bottom-up evaluation societies, in some cases the administration has developed its own top-down networks. The same goes for Slovenia, where, in addition to good cooperation with the internationally recognized Slovenian Evaluation Society, the NOD started up an Interdisciplinary consultative group on evaluation, which was nominated by the monitoring committee and now meets twice a year, bringing the whole spectrum of evaluation system stakeholders together. Similarly, in Romania, the NOD plans to construct its own evaluation network with TA OP funding, despite the fact that there are already three other evaluation societies in the country with whom it has cooperated in the past. This initiative, despite creating a network among stakeholders, can easily exclude other players and create more distrust.

A good practice therefore seems to have been achieved by the evaluation societies in the Balkan countries which formed the informal Western Balkan Evaluation Network, aiming to strengthen cooperation through regular meetings, publications and the organization of a regional conference, now in its second year.

When it comes to the demand side, researchers still mostly attend fragmented training sessions "on demand" and discontinuously, while evaluators incorporate knowledge from the available literature, network with peers at common events or simply rely on learning-by-doing, with mixed results. Often the problem is where to study in order to obtain a meaningful diploma.

As a result, evaluation leader positions (as defined by tender requirements) are primarily occupied not by evaluators but by non-evaluators - project managers, university researchers and so on - who regularly lead projects other than evaluations but have the required formal achievements. This has raised a discussion about the value of regional certificates that could be issued by relevant bodies and used in tenders as an equivalent to the required number of years of experience.


3 PRODUCTS, USERS AND IMPACT

The knowledge provided by evaluation units can be divided into several types. In this study we propose a three-part classification of knowledge:

about processes - information on the quality of implementation procedures, activities and ongoing processes, problems in processes and ways of solving them;

about effects - explanation of what the effects of a program / intervention are, what approaches and solutions worked and produced the planned outcomes;

about mechanisms - explanation of why things have worked (or not), how beneficiaries responded to the program and what factors caused the observable outcomes as well as side effects.

Declarations from the CAWI_EU about the knowledge types provided by evaluation units to their users are quite surprising. Over two-thirds of respondents from evaluation units claim to provide each type of knowledge often or very often [6], and 60% declare "often" or "very often" for all three types of knowledge at once. Meanwhile, users of evaluation knowledge are slightly more critical: they claim to receive each type of knowledge less often than evaluation units claim to provide it (by 10 percentage points in the case of process knowledge, 13 p.p. for effects, and 22 p.p. for mechanisms). Among users there is also a more visible difference between the types of knowledge - process knowledge is received much more often than knowledge about mechanisms. Yet these numbers are still high, higher than expected.

Figure 15 Knowledge types provided according to evaluation units and their users

                           5 - Very Often   4 - Often   3 - Sometimes   2 - Rarely   1 - Never
processes  - e. units           35%            36%          16%            11%           1%
processes  - users              28%            33%          20%            14%           4%
effects    - e. units           28%            42%          14%            11%           5%
effects    - users              22%            35%          27%            12%           4%
mechanisms - e. units           27%            41%          20%             8%           4%
mechanisms - users              17%            28%          30%            18%           7%

(Values read off the chart; rows may not sum exactly to 100% due to rounding.)

Source: Own research
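
As an aside, the percentage-point gaps quoted above can be reproduced from the combined "often" and "very often" shares in Figure 15. The minimal Python sketch below does exactly that; note that, read off the rounded chart values, the mechanisms gap comes out at 23 p.p. rather than the 22 p.p. quoted in the text, presumably because the text relies on unrounded data.

    # Combined "often" + "very often" shares read from Figure 15, in percent.
    shares = {
        "processes":  {"units": 35 + 36, "users": 28 + 33},
        "effects":    {"units": 28 + 42, "users": 22 + 35},
        "mechanisms": {"units": 27 + 41, "users": 17 + 28},
    }

    for knowledge_type, s in shares.items():
        gap = s["units"] - s["users"]
        print(f'{knowledge_type}: units {s["units"]}% vs users {s["users"]}% -> gap {gap} p.p.')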

The results of both CAWI surveys go against our intuition as well as previous research on evaluation activity, which suggests that evaluation systems focus mostly on the production of operational studies providing process knowledge (Olejniczak, Strzęboszewski, Bienias, 2012; Kupiec, 2014). They also only partially resonate with the stories heard during the IDIs. Asked whether they provide more strategic or process studies, some respondents replied "a little bit from all of them" or that "the mix between the two is roughly half/half". Other voices were even less in favor of evaluation of effects and mechanisms: "I feel maybe 3 quarters are mostly on process and one quarter are dedicated to results"; "most of it will be process evaluation"; or "I'm afraid only we, as the Central Coordination Body, make impact evaluations."

[6] The differences in the sums of "often" and "very often" between the three types of knowledge are below the 3% margin of error.



We believe that this discrepancy results partly from the methodology - it is easier to understand each other and get closer to reality during an individual interview. Even more importantly, while the CAWI_EU was directed at all evaluation units, IDIs were conducted only with representatives of coordinating units and leading evaluation units. It is highly probable that these respondents have a more general and critical view of evaluation practice in their country, and are more aware of the challenges and deficiencies of the evaluation systems.

Figure 16 Knowledge needed and received by users from evaluation (CAWI_U)

                  "I need knowledge about ..."   "I receive knowledge about ..."
Processes                    79%                              56%
Effects                      72%                              46%
Mechanisms                   87%                              32%

(Shares of users answering "agree" or "strongly agree", combined; read off the chart.)

Source: Own research

The dissonance between what evaluation units claim they provide and what users think they receive is accompanied by a gap between what users think they receive and what they actually need. The figure above allows for two observations. First, in general, users agree that they need knowledge much more often than they agree that they actually receive it. Second, the order of the types of knowledge received is roughly the reverse of the order of needs - the most needed knowledge, about mechanisms, is the least often received. However, as shown above, evaluation units do not see things the same way.

This observation may lead to the conclusion that evaluation units have not developed close enough relations with their users to ensure a mutual understanding of needs and a sense of ownership of evaluation knowledge. The distance between evaluation units and users is also suggested by the response level to the CAWI_U. A total of around 230 responses is not a satisfactory outcome when we estimate that the potential population should exceed 2,000 [7].

[7] This is the number we arrive at if we assume that an evaluation unit should, on average, have more than 10 potential users. It is also worth noting that, in reaction to our request, some evaluation units admitted they could not decide who their users might be and therefore did not know to whom to forward the survey. On the other hand, we suspect that at least some units, burdened with the task of contacting their users with a survey link, decided to focus on just a limited number of priority users.


Evaluation units declare that their users are mostly employees of their own institution - managers of other units and department directors. Only in the case of HUN and ROM [8] are senior public administration staff given more priority than unit managers; in the other countries both groups share the same importance. Other institutions in the CP implementation system are the third most popular type of user, appreciated most in HUN and SLO [9]. In no country is there a strong conviction that evaluations produced by national systems often serve domestic politicians (e.g. ministers, parliamentarians) or EU institutions.

Figure 17 Who the main users of evaluation units’ work are (CAWI_KB)

                                                                 5 - Very Often   4 - Often   3 - Sometimes   2 - Rarely   1 - Never
Managers of other units in our institutions                            43%           20%          23%             8%           5%
Senior public administration staff (e.g. department directors)         45%           30%          12%            12%           1%
Our political leaders (e.g. ministers, members of parliament)           9%           16%          30%            24%          20%
Other institutions in Cohesion policy implementation system            23%           22%          28%            20%           7%
Institutions at EU level                                               16%           14%          27%            27%          16%
Public institutions dealing with other policies                         5%           15%          28%            30%          22%
Media & general public                                                  5%           12%          22%            36%          24%

(Values read off the chart; rows may not sum exactly to 100% due to rounding.)

Source: Own research

The structure above is roughly in accordance with declarations from users: 70% of them identify themselves as civil servants in government (which includes regional governments [10] and a small share of municipalities and other public agencies), 9% represent NGOs, and 4% each represent the private sector and universities / research institutes [11].

Although labeled as ‘evaluation users’, people responding to our CAWI_U rely on many sources of information, of which evaluation is not necessarily the most important one. We asked them about this issue with reference to the three types of knowledge already introduced in this chapter.

Figure 18 Main source of knowledge about program IMPLEMENTATION (CAWI_U)

[8] As we learned in Chapter 1, these two countries share a centralized approach to the evaluation system.
[9] Again, countries with a centralized system.
[10] In Poland, evaluation units operate also at this level.
[11] We did not include these as separate options in the CAWI_KB.


                                              5 - strongly agree     4       3       2    1 - strongly disagree
evaluation studies                                   20%            30%     28%     18%            4%
physical monitoring                                  29%            31%     25%      9%            7%
financial monitoring                                 28%            31%     25%     11%            5%
project controls                                     22%            32%     26%     11%            8%
external controls                                    21%            29%     28%     13%           10%
own experience, discussion with colleagues           42%            37%     15%      4%            2%
trainings, postgraduate studies                       9%            25%     29%     20%           17%
conferences                                          15%            31%     32%     16%            7%
contacts with program beneficiaries                  22%            31%     27%     14%            7%
cooperation with other entities                      21%            34%     25%     15%            5%
cooperation with foreign entities                     7%            21%     31%     28%           14%
cooperation with entities outside CP                  7%            22%     33%     21%           16%
media news                                            7%            15%     28%     30%           20%
scientific literature                                 4%            23%     28%     28%           18%
other research, analyses                             15%            34%     31%     15%            5%

(Values read off the chart; rows may not sum exactly to 100% due to rounding.)

Source: Own research

Only half of respondents declare that they learn about the implementation process from evaluation studies. That places evaluation as the 7th most popular source - in the middle of the 15-item list. When dealing with the implementation process, respondents rely mostly on their own experience and discussions with colleagues from the team. The second choice is monitoring data - both financial and physical - followed by the results of project controls and contacts with program beneficiaries or other entities. This pattern applies to almost all the countries studied; only in CRO are contacts with beneficiaries and other institutions valued more than monitoring data. As for evaluation, the highest shares of respondents appreciating it as a source of process knowledge are in CZE and ROM.

As might be expected, the declared role of evaluation is greater when it comes to gaining knowledge about program effects, with 57% of respondents declaring it useful. Yet it is still not the leading source. Again, respondents say that they learn most about program effects from discussions with colleagues and their own experience, with physical monitoring as the second source. Evaluation shares 3rd place with financial monitoring [12]. Once again it is ROM where most respondents find evaluation useful (74%).

[12] Which tells us something about how some respondents define effects.


Figure 19 Main source of knowledge about EFFECTS (CAWI_U)

                                              5 - strongly agree     4       3       2    1 - strongly disagree
evaluation studies                                   25%            32%     24%     13%            5%
physical monitoring                                  24%            39%     23%     10%            4%
financial monitoring                                 25%            35%     23%     12%            6%
project controls                                     23%            35%     25%     11%            7%
external controls                                    19%            33%     29%     11%            8%
own experience, discussion with colleagues           33%            43%     15%      6%            2%
trainings, postgraduate studies                      11%            21%     27%     27%           14%
conferences                                          14%            33%     33%     14%            7%
contacts with program beneficiaries                  24%            30%     29%     10%            6%
cooperation with other entities                      16%            36%     28%     15%            5%
cooperation with foreign entities                     7%            27%     33%     21%           13%
cooperation with entities outside CP                  7%            24%     35%     22%           11%
media news                                            6%            17%     29%     32%           17%
scientific literature                                 6%            19%     29%     26%           20%
other research, analyses                             17%            32%     34%     11%            6%

(Values read off the chart; rows may not sum exactly to 100% due to rounding.)

Source: Own research

Survey results in the area of mechanisms of change are quite similar to those for the two previous types of knowledge in the sense that, again, the most popular source of information is discussion with colleagues from the team and own experience. The next two choices are current contacts with program beneficiaries / applicants and physical monitoring. Evaluation is placed 4th, with 55% of respondents declaring that they use it. Again, ROM and CZE appreciate evaluation most (2nd choice).

A few conclusions may be drawn from the results described above. It is good that respondents rely on evaluation more when searching for information about effects and mechanisms than about processes. As previous studies have shown, the utility of evaluation in operational management is limited: it takes too long to receive findings (Kupiec, 2015a), so in the end evaluation only repeats what has already been learned (often based on feedback from users themselves) or what has already been changed (Olejniczak, Kupiec, Newcomer, in preparation; Kupiec, 2015b) [13].

The bad news (at least for those who care about evaluation system efficiency) is that, although better than for processes, the importance of evaluation for effects and mechanisms is still low. This is another reason to suspect a discrepancy between what evaluation units provide and what users demand, whether it is due to a problem with needs identification or with the capacity to meet those needs.

[13] The studies we refer to here are based mainly on the Polish context, but the problem with evaluation timing is common; within this study, respondents from all countries complained about delays due to public procurement law and the accompanying internal procedures (see chapter 4.2).



The relatively high position of financial monitoring as a source of knowledge about effects and mechanisms is worrying, suggesting that a large number of users equate effects with smooth implementation and spending.

The dominant position of own experience and discussion with colleagues, declared as the leading source for every type of knowledge, may be interpreted as a manifestation of a lack of openness to signals from outside. Such openness is a prerequisite for organizational learning.

Conclusions from analyses and research conducted or commissioned by other institutions are not relied on as a source of any type of knowledge. This may be another symptom of a lack of openness to outside signals. As a result, evaluations and other analyses do not constitute one large, common body of knowledge in which findings and conclusions complement and verify each other. Instead, we are dealing with small, fragmented repositories used only by one institution, often duplicating knowledge that has already been gained somewhere else.

Figure 20 Main source of knowledge about MECHANISMS (CAWI_U)

                                              5 - strongly agree     4       3       2    1 - strongly disagree
evaluation studies                                   23%            32%     28%     12%            5%
physical monitoring                                  23%            34%     26%     11%            7%
financial monitoring                                 19%            39%     25%      9%            7%
project controls                                     18%            39%     27%      9%            7%
external controls                                    16%            35%     28%     14%            7%
own experience, discussion with colleagues           30%            40%     21%      4%            5%
trainings, postgraduate studies                       9%            21%     32%     20%           17%
conferences                                          14%            26%     34%     16%           10%
contacts with program beneficiaries                  27%            31%     25%     11%            7%
cooperation with other entities                      15%            33%     34%     12%            6%
cooperation with foreign entities                     8%            26%     33%     20%           12%
cooperation with entities outside CP                  7%            25%     35%     20%           13%
media news                                            4%            17%     27%     30%           21%
scientific literature                                 5%            19%     29%     28%           19%
other research, analyses                             17%            29%     33%     16%            5%

(Values read off the chart; rows may not sum exactly to 100% due to rounding.)

Source: Own research


4 FACTORS THAT INFLUENCE BROKERS' PERFORMANCE

The factors influencing brokers' performance were analyzed in two groups:

1) internal capacities - attributes of the evaluation units, and
2) elements and factors of the external environment that units do not control but which nevertheless have an impact on units' performance.

In both cases, we measured both the strength of the influence (great/no impact) and its character (positive/negative impact) in an attempt to identify the main enablers and blockers of brokers’ performance.

4.1 CAPACITIES OF EVALUATION UNITS

Respondents agreed that all the internal capacities proposed in the survey have at least a moderate impact on their performance. The least important appeared to be the tools (e.g. IT, library resources) and rules (e.g. routines applied, procedures for contracting studies) available to evaluation units. The highest impact was attributed to the skills and experience of units' staff.

The large majority of evaluation units in all countries state that the budget they possess is sufficient and does not limit evaluation activities. At the same time, however, IDI respondents from most countries complain about access to that budget, pointing to the complexity and long duration of the procedures for spending money (e.g. on commissioning studies or organizing conferences with desired speakers). This explains why internal rules are found in the negative part of the chart [14].

The second and last factor with a positive impact on the evaluation activity of the studied units is the skills and experience of their staff. However, the character of this impact is not that clear-cut [15]. On a few occasions, IDI respondents admitted that although their teams are capable of drafting good-quality ToR, they lack other skills, such as presentation skills - "extracting the essence, important stuff from a report and presenting it to the audience". In other cases, a lack of skills was named as the reason for the low number of impact evaluations and for relying on price as the sole selection criterion in public procurement. Half of the studied countries also report excessive staff turnover, which undermines the experience level of the unit as a whole. In one case it was said that evaluation is viewed as being of little importance and faces negative selection, as the most promising candidates choose other fields of administration.

The potential positive impact of skills and experience on the effectiveness of evaluation units is compromised by the insufficient number of staff and the time available. These two factors are interconnected: obviously, the number of employees determines how much time staff feel they have and what activities they can pursue. As one IDI respondent put it: "We could do more and better if we had more time, and we would have more time if there were more of us."

Lack of time also undermines the possibilities and motivation for self-development and eventually limits the growth of skills. Some respondents see time as the worst influencing factor: "I was not able to go to any CRIE events or conferences this year." Another said: "Due to lack of time and excessive duties I have not taken part in any training for 2-3 years, although I like it and we have financial resources."

[14] Romania is the only country where, on average, internal rules are attributed a positive impact.
[15] An average score of 3.3 on a scale from 1 (negative) to 5 (positive).



Figure 21 How capacities influence brokers' performance [16]

[Chart: each internal capacity factor plotted by strength of impact (from "no impact" to "great impact") and character of impact (from "negative" to "positive").]

* Factors marked in red were rated in the IDIs contrary to the CAWI_KB scoring [17].

Source: Own research

4.2 THE ENVIRONMENT OF EVALUATION UNITS

As with capacities, respondents agreed that all environment factors have at least a moderate impact on their performance. Unlike capacities, however, the strength of impact was assessed very similarly across all environment factors. Slightly higher importance was attributed to "the extent to which users of our work are interested in supporting their decisions with research evidence", and the lowest to the "domestic legal context - regulations (e.g. public procurement regulations, public finance regulations)".

In terms of the character of the impact, the regulatory framework can be found at both extremes of the axis. While the domestic legal context is perceived as having the most negative impact, EU regulations are praised for having the most positive one. All the IDI respondents blaming domestic regulations referred to public procurement law. Many mentioned that under public procurement regulations it is difficult to specify the particular needs and requirements of an evaluation study, as well as to assess its quality. One of the respondents admitted that there have been some improvements in recent years, "but there is still this ingrained idea that you can't really judge quality". The procedures are also time-consuming, which makes it difficult to deliver evaluation on time. It is worth mentioning that the complaints about public procurement law correspond with the remarks about internal procedures, which also related mostly to public tendering.

[16] All factors were rated on a scale from 1 to 5, but all the average scores fell within the range 2-4, so the horizontal axis also runs from 2 to 4.
[17] Some deficiencies in skills and experience were also observed by respondents from the coordinating body and by external experts, see chapter 4.3.



The only other regulatory limitation mentioned on a few occasions was the regulation on personal data protection, which sometimes makes it hard to conduct a study, especially an impact evaluation.

Regarding EU regulations, the majority of respondents agreed that they have been the key, if not the only, driver for the development of evaluation practice in their country. The evaluation requirement in EU regulations assures financing and supports the efforts of evaluation units.

A few respondents suggested some minor improvements, e.g. joint EC-MS ex post evaluation projects (at the moment the EC does not use MS reports and produces its own studies of questionable quality) and more stress on accountability for learning from evaluation, not only on compliance and the production of reports.

As one can see, no demands for significant reforms were raised, although those responsible for coordinating each of the 8 studied national evaluation systems were specifically asked about this. Even the statements presented above are questionable. The regulation currently in force requires each priority axis to be evaluated, but not necessarily separately (Regulation 1303/2013, art. 56), and this also applies to the European territorial cooperation goal. The possibility of member states conducting ex post evaluations in close cooperation with the EC is already included in the regulation (1303/2013, art. 57). Finally, the necessity to actually learn from evaluation, although evident, is difficult to regulate.

The majority of CAWI respondents believe that users perceive their work as credible and (although slightly less so) useful. However, the picture emerging from the IDIs is less optimistic, as the majority of stories about users' attitudes have a negative tone. The most striking statements argue that "nobody wants to read evaluation reports, nobody really needs them". Another observation is that evaluation use, and learning from it, takes place mostly among those directly involved in program implementation at the lower levels of the organizational structure, and becomes less apparent with every step up - through ministers, politicians, decision-makers and members of monitoring committees, and finally to the EC.

According to some respondents, evaluation studies are not perceived as useful because evaluators highlight, and put in written form, problems that are usually already known, but do not offer solutions to them. Others raise the issue of the unsatisfactory quality of studies.

Another set of determinants of the attitude toward evaluation suggested by respondents is connected with the lack of a strategic approach to ESIF management, and to public management in general. Planning gives way to solving current, usually operational, problems. The actual impact of programs loses the competition for attention against the smoothness of the implementation process and the level of spending. Politicians appointed to office come with a set of beliefs that are more important to them than the evidence, and operate with a short time horizon - even shorter in countries where frequent changes of power occur.

Figure 22 How the environment influences brokers' performance [18]

[18] All factors were rated on a scale from 1 to 5, but all the average scores fell within the range 2-4, so the horizontal axis also runs from 2 to 4.


[Chart: six environment factors - EU context (regulations), domestic legal context, perceived credibility, perceived utility, evidence-based decisions, and external producers' capacity - plotted by strength of impact (from "no impact" to "great impact") and character of impact (from "negative" to "positive").]

* Factors marked in red were rated in the IDIs contrary to the CAWI_KB scoring.

Source: Own research

External producers' capacity is the second factor, next to the domestic legal context, with a negative impact on evaluation practice according to the majority of respondents. In most countries the problem of an insufficient number of companies capable of delivering quality evaluation is reported. In some less developed markets the situation is improving, as domestic contractors in particular slowly gain experience. In other cases, however, the opposite, negative trends are reported: some high-quality suppliers have abandoned the market, or the market has settled into a state where a few consortia control it and competition is lacking.

Respondents operating outside the Cohesion Policy system point out that the supply side adjusts to demand, so flaws in supply capacity often reflect a weakness in demand. In more than half of the studied countries, the lack of sustained demand prevents the development of quality national evaluators, as there are not enough projects to make a living from evaluation, not to mention to specialize within it. In other cases, even when the number of procurements is high and stable, it is the demand side that can be blamed for not requesting much quality, or for accepting poor quality.


4.3 SYSTEMS' STRENGTHS AND WEAKNESSES

This subchapter is based only on opinions expressed during interviews, mostly by external experts who are not directly involved in the Cohesion Policy implementation system (i.e. not employed in any institution of the system [19]).

Most factors named by the experts relate directly to the issues discussed in the two previous parts of this chapter on the capacities and environment of evaluation units. As can be seen below, the expert perspective is slightly more critical than that of the evaluation units:

- lack of skills and experience on both the supply and demand side [20];
- limitations of public procurement law (duration of procedures, difficulty in demanding quality, leading even good companies to leave the market);
- poor quality of studies;
- insufficient demand in countries with a small number of contracting authorities, which does not support the development of the supply side in terms of numbers, quality and specialization.

Other issues mentioned by the experts, some of them more country-specific, include (in the case of positive remarks, we specify the country they relate to):

- difficulty in achieving credible conclusions at the program or priority-axis level due to a lack of studies at lower, more detailed levels (e.g. the project level);
- experts and academics with international experience who can contribute to the evaluation system [21] (HUN);
- lack of coordination and cooperation in the system, due to the lack of involvement of a coordinating body;
- good cooperation both within the system and with outside actors, such as an evaluation society or units from other EU policies (CZE);
- framework agreements which, in countries with insufficient demand, intensify the problem of limited opportunities for the development of the supply side;
- limited ability to identify and react to the problems of MAs - in the case of a centralized system with a single evaluation unit located outside the structure of the MAs;
- lack of time to reflect on, and get the most out of, every study - in the case of systems producing a large number of reports.

[19] Often, however, they operate as evaluators and/or are former employees of an institution within the Cohesion Policy implementation system. This is a direct example of the staff turnover discussed in chapter 4.1; however, it is not necessarily harmful when experienced people change sides of the market but stay inside the system.
[20] Although there was also an expert opinion praising the skills and experience of the evaluation unit that he was once part of.
[21] Yet this potential strength was not supported by a clear declaration that this is actually happening, and that experts and academics substantially contribute to the system. On the other hand, there are reports from other countries that the engagement of academics in the evaluation system is not satisfactory.


5 IMPLICATIONS FOR PRACTICE

WHERE WE ARE AND WHAT CHALLENGES WE FACE

The picture of evaluation practice for Cohesion Policy that emerges from the eight countries in our study is one of high diversity. This diversity is a natural result of at least three factors: (1) differences between countries in the organization of the management structure of the Cohesion Policy system, (2) the maturity of the CP systems (officially established in 2004, 2007 and 2013), and (3) the degree of continuity or turbulence in evaluation units (mainly the turnover of evaluation staff and the ability to keep an institutional memory).

Despite this high diversity, comparative analysis allows us to point out certain shared patterns and similar experiences regarding (1) who the users served by brokers are, (2) what evaluation units mainly do, and (3) what influences their performance. First, the main users of evaluative knowledge are managers of other units in Cohesion Policy institutions or senior public administration personnel involved in Cohesion Policy. Thus, we can conclude that evaluation units are focused inward - they work for actors within the implementation system of Cohesion Policy.

The findings on what knowledge is produced and on how positive the responses from users are, are somewhat contradictory. Self-declarations from evaluation units in the survey indicate that a balanced spectrum of knowledge is produced and that users' needs are met. Meanwhile, the interviews and the response rate to the users' survey show rather limited interest in evaluation and its utility.

Second, the activities of evaluation units are a logical consequence of the focus on internal users of the Cohesion Policy system. Knowledge needs are identified by observing the program implementation process or by cooperating and exchanging information with staff involved in Cohesion Policy implementation. Evaluation staff rarely follow ongoing scientific or political debates, or even search for inspiration in other studies - again a logical consequence of focusing primarily on users within the implementation system. Evaluation units acquire knowledge mainly by contracting out studies or interpreting monitoring data from CP programs; situations in which the staff of evaluation units conduct studies themselves (full empirical studies or rapid reviews) are very rare. Dissemination of knowledge follows the standard procedures and platforms designed for the implementation and coordination of Cohesion Policy. It is worth pointing out that evaluation units do more than just evaluations - they also perform other analytical activities such as monitoring and strategic programming.

Third, the short list of the main blockages and facilitators of evaluation units' work is quite consistent across countries. A shortage of staff and time pressure (usually the result of combining evaluation activities with other responsibilities) are the main problems. The available budget and financial resources for studies are the most positive factors, followed by the skills and experience built up over the years. In the environment, the most problematic issues are national legal regulations (procurement law) and the limited capacity of knowledge producers, while the most positive factor is EU regulations (pushing national public policy systems to execute evaluations).

It is worth pointing out the positive impact of the evolutionary improvements in the EU regulations related to monitoring and evaluation requirements. EU regulations have had a positive impact in three ways. They have forced the implementation of evaluation practice, and this has led to the production of evaluation evidence. They have also required the establishment of formal forums for the presentation, discussion and use of evaluation findings (steering groups, monitoring committees, etc.).


Looking more broadly, this set-up has made Cohesion Policy and its evaluation units, in all of the studied countries, an outpost of evidence-informed policy.

The main challenge that emerges from all the studied systems is the gap between the production of evaluation studies and their relatively limited use and perceived utility for policy learning and the improvement of socio-economic programs. To put it simply, evidence is produced as a routine part of administrative operations, but is rarely used to construct an effective Cohesion Policy.

This problem is not unique to the evaluation of Cohesion Policy. In fact, it is common across all sectors and public administration organizations trying to apply evidence-based policies, and it has been present since the early days of evaluation practice (Nutley et al., 2003; Prewitt et al., 2012; Weiss, 1977, 1988).

In our opinion, the core of the challenge lies in the behavioral change of potential users of evaluation evidence. This means there is a need to establish in users the habit of actual, not symbolic, use of evidence when making decisions. Without this change, the continuous production of evaluation evidence will become a bureaucratic ritual and burden.

Modifying behaviors is a gradual and difficult process. Sustainable behavior cannot be forced by formal regulations alone; it has to be built on, and driven by, intrinsic motivation. Therefore, a new strategy is needed.

THE UTILE STRATEGY FOR KNOWLEDGE BROKERS

We propose transforming evaluation units into knowledge brokers - animators of reflexive policy learning who support decision-makers with research-based knowledge. Thanks to this learning, Cohesion Policy programs and projects could become more effective in developing communities and serving citizens.

The development of knowledge brokering has already been taking place in public policies, including Cohesion Policy (Olejniczak et al., 2016). Recently, the US Commission on Evidence-Based Policymaking, in its report to Congress and the President, pointed to knowledge brokering as playing an important role within the evidence-building community and as an indispensable condition for evidence-informed policies (US Commission on Evidence-Based Policymaking, 2017).

The transformation we propose is incremental. This means building on existing elements, within existing regulatory and institutional frameworks, and in alignment with the current responsibilities, activities and evolution of evaluation units. It is based on a recalibration of evaluation units' mindset towards users, which, in turn, would gradually lead to the modification of users' behaviors.

This small-steps, incremental strategy has a high chance of being effective because it is aligned with human behavior (the preference for the status quo) and with organizational dynamics (the rigidity of institutions, and the non-linear dynamics by which small innovations trigger system transformation).


The simplified theory of change of our proposal is as follows:

Figure 23 The theory of change for knowledge brokers

IF evaluation units, acting as knowledge brokers, start applying the UTILE strategy, THEN it will trigger in users of knowledge an interest in utilizing research-based evidence. Once users experience the benefits of knowledge brokers' support, they will more frequently use research-based evidence in their decision-making, AND THAT will lead to an improvement in the viability and effectiveness of public programs.

Source: Own elaboration

The core of our idea is the UTILE strategy.

"UTILE" is a quite obsolete and rarely used adjective describing something that is advantageous - beneficial in involving or creating favorable circumstances that increase the chances of success or effectiveness (New Oxford American Dictionary). We use UTILE as a Mnemonic term to describe a strategy that could be applied by evaluation units to become effective knowledge brokers.

User-oriented - focusing on those who will use knowledge, not on the production of reports for their own sake

Timely - delivered on time, when users consider making the decision

Interesting - meeting users' information needs

CredibLe - trustworthy and valid in the eyes of the beholder - the user

Easy - easy to find and absorb.

The degree of engagement in particular activities will depend on the strategic choices made by evaluation units with reference to their current position in the system and their resources. The details are therefore flexible and can be matched to suit different evaluation units.



However, the overall logic is the same for all evaluation units in the eight countries studied. We discuss it below as a five-step procedure.

The core of the strategy is to build in users a habit that is based on intrinsic motivation, not on external regulatory requirements. The benefits users can experience are a clearer spectrum of choices for their decisions and a stronger evidence base that makes their choices easier to defend. We also hope that users' positive experience with research results will create a positive feedback loop: users will demand more studies of high quality, which will further improve the quality and relevance of the knowledge produced.

STEP 1: FOCUS ON USERS

Making activities "user-oriented" means focusing on those actors within or outside the Cohesion Policy system that can learn and utilize the research evidence provided to them by broker. The key question that evaluation units need to address is: What users do we serve and with what purpose?

Evaluation units acting as knowledge brokers in the complex Cohesion Policy system can serve different users for different purposes. This leads to certain trade-offs that evaluation units have to address in order to organize their activities and resources effectively. We propose the following framework to facilitate this strategic reflection. The choices available to evaluation units can be mapped along two dimensions (see Figure 24 below). The first dimension relates to the types of knowledge units can deliver: the choice is between more strategic, broader knowledge (the effects of interventions, and the mechanisms that explain the success or failure of programs) and more operational, technical, process-oriented studies.

The second dimension relates to the primary purpose and audience. Evaluation can be intended for the actors within the implementation system; in this case its primary function is learning, understood as improving strategic and operational policy activities over time. Or evaluation can be intended mainly for external audiences - policy stakeholders, which in the case of Cohesion Policy means the European Commission, EU net payers, public opinion and the media. In this situation, evaluation holds policy implementation staff accountable to the stakeholders.

These two dimensions create four options. Of course, in the reality of Cohesion Policy, evaluation units try to cover more than one option. However, it is important to be aware of the trade-offs and potential tensions, since each of these evaluation types requires different levels of certain resources and skills, and different combinations of activities.

We strongly suggest that evaluation units undertake strategic reflection and choose which option will be their primary focus (or is required of them). This would allow units to tackle the trade-offs more consciously and effectively and, in the case of every study, to focus their limited resources on one, or at most two, key users who are the addressees of the knowledge broker's work.


Figure 24 Strategic dilemmas for evaluation

[2x2 matrix: type of knowledge delivered (operational / process-oriented vs. strategic - effects and mechanisms) against primary purpose and audience (learning within the implementation system vs. accountability to external stakeholders), yielding the options A-D discussed below.]

Source: own elaboration

In OPTION A the broker's role is to help program managers get a more balanced and objective view of the ongoing situation and of possible improvements in implementation. In this option, the broker's work is directed inward, to users within the national or regional institutions responsible for program implementation.

Here, evaluation units help managers to see the bigger picture and understand patterns in data and trends in changes, as well as to counter their availability heuristic - that is, to avoid making assumptions based on single stories from beneficiaries and to get a more balanced and representative picture of reality instead.

In this option, evaluation units mainly animate systematic, data-driven reflection based on monitoring data (Hatry, Davies, 2011). They could perform such activities as: quick surveys and analysis of existing monitoring data; constructing dashboards that visualize trends in program performance (and bring together different indicators); and running sessions that guide the thinking of program managers towards more explanatory questions and a search for the mechanisms behind current implementation bottlenecks. This option also includes running satisfaction surveys with beneficiaries, or helping managers to design and run such surveys (for example, surveys of applicants monitoring the perceived complexity of the application process).

The main resources needed for this option are: skills in basic data analysis (descriptive statistics), skills in preparing visual, interactive dashboards (e.g. using Tableau or other software for interactive data visualization), and familiarity with basic scenarios for running data-driven discussions in organizations. A simple sketch of what such analysis could look like is given after this paragraph.
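
The report mentions tools such as Tableau; purely as an illustration, the same kind of trend view can be sketched in Python with pandas and matplotlib. The file name and column names (date, region, applications, payments) are hypothetical placeholders, not references to any actual monitoring system - a real unit would substitute an export from its own monitoring database.

    # Descriptive statistics and a simple trend chart from monitoring data -
    # the kind of "bigger picture" view that counters anecdote-driven judgments.
    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical export from the program's monitoring system
    df = pd.read_csv("monitoring_extract.csv", parse_dates=["date"])

    # Basic descriptive statistics per region
    print(df.groupby("region")[["applications", "payments"]].describe())

    # Monthly trend bringing two indicators together on one chart
    # ("ME" = month-end frequency, pandas >= 2.2; use "M" on older versions)
    monthly = df.set_index("date").resample("ME")[["applications", "payments"]].sum()
    ax = monthly.plot(title="Program performance: monthly trend")
    ax.set_xlabel("Month")
    ax.set_ylabel("Count")
    plt.tight_layout()
    plt.savefig("dashboard_trend.png")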


OPTION B is focused on accountability for timely and legal spending. We believe that evaluation studies can bring little added value here, because this area is already well covered by control activities, performance auditing and the extensive monitoring systems developed at the regional, national and European levels of Cohesion Policy.

Furthermore, getting involved in these types of assessment builds a perception of evaluation units as a softer type of control, which is, in our opinion, counterproductive and harmful.

OPTION C is focused on accountability for effects. Ex post evaluations are primarily the competence of the European Commission, but recent changes in the regulations indicate a strong push towards conducting more impact evaluations at the national level, and there is space here for the activities of national evaluation units. Their studies could aim at showing the public and the main stakeholders the value for money of EU co-financed interventions. However, two issues could potentially limit evaluation units' actions in this area. First, stakeholders (especially the media and the public) could perceive the units as not fully independent, and therefore not impartial, since they are located within the implementation system whose outcomes they are trying to assess; this could decrease the credibility of evaluation studies in the eyes of these knowledge users. Second, assessing long-term effects requires studies that go beyond one program and one programming period, as well as the ability to retain the knowledge gained in ex post assessments from one programming period to the next. This requires the institutional continuity of evaluation units, which is often lacking, since the units are parts of Managing Authorities assigned to particular Operational Programs. With each new programming period a new structure of programs is proposed, often followed by a new organizational structure of implementing institutions.

In order to perform this function, evaluation units need: (a) advanced skills in designing and running RCTs and other counterfactual research designs, (b) advanced skills in designing, or contracting out and supervising, comparative studies, (c) institutional stability across programming periods, and (d) strong institutional independence, ideally being placed outside the structure of individual operational programs. The counterfactual logic behind point (a) is sketched below.
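
To illustrate the counterfactual logic behind point (a), the minimal sketch below simulates an RCT-style comparison: the impact estimate is the difference in mean outcomes between a randomly assigned treated group and a control group. All numbers are simulated placeholders, not results from any of the studied systems.

    # Simulated RCT-style impact estimate: treated vs. control group.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    control = rng.normal(loc=100.0, scale=15.0, size=500)  # outcome without support
    treated = rng.normal(loc=106.0, scale=15.0, size=500)  # outcome with support

    # The counterfactual estimate: difference in means, with a significance test
    effect = treated.mean() - control.mean()
    t_stat, p_value = stats.ttest_ind(treated, control)
    print(f"Estimated effect: {effect:.1f} (p = {p_value:.4f})")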

OPTION D is, in our view, the most promising for evaluation units. Evaluation studies could provide program managers - both strategic and operational staff - with insight into the actual effectiveness of the theories of change that underlie particular intervention strategies. This, in turn, would allow interventions to be corrected "on the go" - recalibrated towards target populations and mechanisms that respond better and give stronger positive effects. In this field, only evaluation performed by national and regional evaluation units can do the job, because these units are close enough to managers to react quickly to program developments. The formal independence of the evaluation unit is not a requirement here.

However, this option also requires evaluation units to tackle additional challenges. First, evaluation units need to educate their users in the implementation system. Our research shows that program managers confuse products with effects. Evaluation units need to explain the difference and show managers the advantages of looking beyond the checklist of products to the strategic goal of social change. Moreover, evaluation units need to raise managers' awareness of the importance of knowing why things work or do not work - the mechanism that drives beneficiaries' reaction to the provided aid. This knowledge is crucial for the eventual success of the implemented programs.

Second, evaluation units will have to work on the timing of their evaluations. They need to deliver the explanation of mechanisms and the findings on the initial effects of interventions early enough to give managers time to react and implement recommendations while the programs are still running.


Third, evaluation units need slowly to push for thinking about particular OPs, or their parts, as examples of types of interventions. For learning and for the transfer of general "what works and why" knowledge, it is crucial to remove the label of the unique program name and focus instead on the program's theory of change (often based on similar intervention mechanisms). This would allow evaluation units to build a knowledge portfolio that works beyond a particular programming period, and even beyond the EU structural funds.

We believe that each option could potentially be pursued in any system setting (e.g. with strict or loose regulation; with an active, passive or missing coordinating body) and structure (e.g. centralized or decentralized). However, some systemic choices might favor, or increase the probability of success of, a particular option, and the way evaluation units are formally positioned in Cohesion Policy implementation also brings certain advantages and challenges in performing particular functions. A decentralized system, with evaluation units located within MAs (or even IBs), seems better suited to learning, as it assures proximity to users and their problems and needs (options A & D). A centralized structure, with one evaluation unit located in a separate, sometimes supervising, institution makes it difficult to develop the close relations needed for learning; at the same time, it assures an objective perspective and allows for the independent evaluation required by the accountability function (options B & C). It also provides space for strategic, more long-term reflection. A system with a single central unit and a network of OP units in MAs makes it easier to gather enough capacity to conduct the more demanding tasks characteristic of evaluation dealing with impacts and mechanisms of change (options C & D); a central unit can act as a node that collects lessons from experiments with different types of theories of change applied across operational programs. Small, less well-equipped units deal better with relatively simple process evaluation (options A & B). Highly regulated systems obviously lead to a focus on formal compliance - which is closest to option B and furthest from option D.

STEP 2: UNDERSTAND THE USER’S JOURNEY

Making the knowledge produced "timely" means providing the right evidence-based insights at the right moment - when the user needs information to make a decision. In order to make things "timely", knowledge brokers need to address the following question: what are the stages of the decision-making process that users go through, and at what decision points do they need insight?

Recent developments in industrial and service design provide us with three very useful methods that we can apply to our evaluation practice (Kumar, 2012; Stickdorn, Schneider, 2012). The first is "journey mapping": we can map the typical "journey" of our users - the stages and interactions they go through during their typical decision-making process related to programs or policies. Mapping the user's journey helps brokers identify potential "touch points": the moments when evidence from evaluation and other research is most needed by the user.

Two other tools accompany the journey map. Profiles of the users describe their background, emotions, preferences as to communication forms and channels, and ways of being approached and briefed, as well as their time and resource constraints. Finally, a stakeholder map illustrates the network of other actors who influence and interact with the key user along the decision-making path. A minimal sketch of a journey map with its touch points follows below.
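
As a purely illustrative example, a user journey can be captured in a simple data structure: decision stages paired with the evidence "touch points" at which a broker could feed in findings. The stages, decisions and touch points below are hypothetical placeholders, not taken from the study.

    # A hypothetical journey map for a program manager: each stage pairs a
    # decision point with the evidence "touch point" a broker should prepare.
    journey = [
        {"stage": "call design", "decision": "adjust selection criteria?",
         "touch_point": "findings on applicant drop-out mechanisms"},
        {"stage": "annual review", "decision": "reallocate funds between axes?",
         "touch_point": "dashboard of output and result indicators"},
        {"stage": "monitoring committee", "decision": "approve program modification?",
         "touch_point": "two-page brief of mid-term evaluation findings"},
    ]

    for step in journey:
        print(f'{step["stage"]}: deliver "{step["touch_point"]}" '
              f'before deciding: {step["decision"]}')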

We recommend internal workshops in evaluation units devoted to developing these three elements for each type of knowledge user (MPs, directors of departments, etc.) chosen as a primary target of the evaluation unit's work.


The profiles and preferences of users can be reconstructed on the basis of short interviews with users, as well as the observations and experiences collected by evaluation unit staff. User profiling and persona development procedures have already been applied in the context of public sector services (e.g. Kimbell, 2015; Liedtka & Ogilvie, 2011; Stickdorn & Schneider, 2012). However, evaluation units from the V4+4 countries could work together to adapt these tools and processes to the specificity of their jobs (for example, by organizing a creative workshop during a joint session of the evaluation network in Brussels).

Applying this simple toolbox can substantially increase the uptake of evaluation results. Understanding users will help evaluation units provide a timely response to users' knowledge needs and find the right leverage point when feeding evidence into the decision-making process.

In thinking about users, we should also consider to what extent we want to follow their current preferences (a reactive strategy), and to what extent we want to work to build their demand - to educate them, raise their awareness and increase their interest in using evidence (a proactive strategy).

STEP 3: DEVELOP A LEARNING PORTFOLIO

Making knowledge "interesting" for users means addressing their knowledge needs, and providing answers to issues that help them proceed with decisions. The key question that evaluation units need to address here is: What do users want to learn?

Our suggestion is to change the wording and the logic it follows. Instead of "evaluation plans", knowledge brokers should start developing, in cooperation with their users, "learning agendas". Words really matter: from the user's perspective, an "evaluation plan" sounds like an administrative requirement, an external distraction from their activities - somebody else's plan that they will have to help realize. Furthermore, it indicates a focus on evaluation studies only. In reality, as numerous studies show (including the current report), actors of public policy organize their learning and knowledge around issues, not around sources of knowledge: they use and combine different knowledge sources to tackle their implementation challenges.

The "Learning agenda" would allow users to articulate what knowledge needs they have in relation to the implemented program and when they need it. The learning agenda of users should be organized around programs or subprograms (what do we want to learn about program X). Over time, this would allow the creation, of "learning portfolios" for particular program types.

This approach and wording was recently recommended to the US Congress and the President as a way of integrating the various data and evidence production activities within the federal government, spanning performance management, policy analysis and evaluation (US Commission on Evidence-Based Policymaking, 2017).

We also have a tactical suggestion for evaluation units. In terms of initial learning activities, it would be optimal to focus on (1) issues that are relatively easy to address and (2) users who are willing to cooperate. This is the "low-hanging fruit" tactic. A quick success achieved with an active user will build credibility for the knowledge broker and provide a testimony, a "real-life example", for less convinced or passive users. This approach worked well for the UK Nudge Unit (formally the Behavioural Insights Team) when it developed an internal service for UK ministries for pilot-testing policies with the use of behavioural insights and randomized controlled trials, as well as in counterfactual studies commissioned by the EU.


STEP 4: CO-DESIGN SOLUTIONS

The "credibility" of provided knowledge lies mainly in the eye of the potential user and is determined by a number of factors: the quality of the study (scientific validity and basic factual accuracy), the applicability of the findings (also called action orientation), trust in the source (the reputation of the information producer, the surface credibility of the text), and conformity with existing evidence. We know that studies which substantially challenge existing organizational practices or give overly generic recommendations are often ignored (Miller, 2015; Weiss & Bucuvalas, 1980). Thus, knowledge brokers need to address the question: "How can we make recommendations more practical for the users?"

Our proposal is to focus not on writing recommendations but on developing actual solutions together with users. We propose integrating a procedure of Service Design sessions into evaluation contracts (IDEO, 2012; Kimbell, 2015; Liedtka & Ogilvie, 2011).

Service Design (SD) includes a stage of exploration and understanding that matches the observation and analysis stages of applied research studies. However, SD also includes a stage of prototyping, which usually means a cycle of interactive workshops devoted to designing solutions to the identified issues. Prototyping is followed by testing, often with the use of experiments (randomized controlled trials) (Haynes et al., 2012).

Transferring this idea into evaluation practice would mean incorporating design workshops, run with the stakeholders and users of the studies, into evaluation contracts. The traditional exploratory and analytical study would take up 60% of the contract time, while 40% would be devoted to development workshops. The SD component can be estimated at around 30% of the total budget of the evaluation study, depending on how many workshops are run and what kind of testing method is applied (a worked example with illustrative numbers follows the list below). Based on SD experience, we can identify four types of workshops, implemented in sequence:

(1) a workshop devoted to the analysis of evaluation results. Its aim is to identify 2-3 conclusions that are crucial for users and the Program.

(2) a creative workshop focused on the development of solutions and early prototypes

(3) a craftsmanship workshop devoted to work on details of the solutions and ways of implementing the testing

(4) workshops, field experiments or game simulations devoted to testing the prototyped solutions - confronting them with reality and stakeholders.
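To make the 60/40 time split and the 30% budget share concrete, here is a minimal arithmetic sketch (Python). The contract size and duration are purely illustrative assumptions, not figures from the study.

```python
# Illustrative numbers only: a hypothetical 10-month, EUR 100,000 contract.
contract_months = 10
contract_budget_eur = 100_000

analysis_share, workshop_share = 0.60, 0.40   # time split suggested above
sd_budget_share = 0.30                        # Service Design budget share

analysis_months = contract_months * analysis_share   # 6.0 months of study
workshop_months = contract_months * workshop_share   # 4.0 months of workshops
sd_budget = contract_budget_eur * sd_budget_share    # EUR 30,000 for SD

print(f"Analysis: {analysis_months:.1f} months, "
      f"workshops: {workshop_months:.1f} months, "
      f"SD budget: EUR {sd_budget:,.0f}")
```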

This approach has a number of advantages. The users bring in their insights into organizational dynamics and their experience, which makes the proposed solutions more realistic. In the process, users also become co-authors of the recommendations, which strengthens their engagement and commitment to implementing the solutions. Finally, the testing process, especially if built on RCTs, provides strong evidence on what works, which means high validity in terms of methodological rigor.

This practice has recently been implemented, with success, by the Polish Agency for Enterprise Development and the city of Poznan.


STEP 5: MAKE KNOWLEDGE EASY

Making learning "easy" means improving both its physical accessibility and visual attractiveness. The key question is: How can we improve the accessibility of knowledge for users?

In our opinion, the answer to this question has two aspects. The first relates to storing study results. Evaluation databases should be more than simple repositories. They should allow for contextual searches (indicating a short answer to a question and the sources where more details can be found) and provide tools for analyzing data in any cross-section. It would also be advisable to integrate the institution's own knowledge with sources from other entities in one place. An example of such an approach can be found in the US Environmental Protection Agency, where a database is to be organized in the form of a knowledge clearing house. Apart from the repository and the software facilitating access, it will be based on cooperation with various institutions, such as UNEP and the Rockefeller Foundation. The database is to be used as a tool for finding answers to questions in areas where the EPA does not conduct research, so where possible it should lead to the use of existing knowledge instead of generating further research. Yet another idea would be to organize database searches as in academic databases, with keywords and research topics (Scopus, Web of Knowledge).
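To make "contextual search" concrete, below is a minimal sketch (Python) of a repository in which each finding is stored as a short answer together with a pointer to its source and a set of keywords. The findings, sources and keywords shown are invented for illustration.

```python
# A minimal sketch of contextual search over stored study findings.
from dataclasses import dataclass
from typing import List

@dataclass
class Finding:
    short_answer: str
    source: str              # report where more details can be found
    keywords: List[str]

# Hypothetical example findings (not from the study).
findings = [
    Finding("Training vouchers raised employment among the young unemployed.",
            "Evaluation of OP Human Capital, 2016, ch. 3",
            ["training", "vouchers", "employment"]),
    Finding("Grant schemes partly crowded out private R&D in large firms.",
            "Mid-term evaluation of OP Innovation, 2017, ch. 5",
            ["grants", "crowding", "innovation"]),
]

def search(query: str) -> List[Finding]:
    """Return findings whose keywords appear in the query (case-insensitive)."""
    terms = [t.strip("?.,!") for t in query.lower().split()]
    return [f for f in findings
            if any(kw.lower() in terms for kw in f.keywords)]

for hit in search("What do we know about training vouchers?"):
    print(hit.short_answer, "->", hit.source)
```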

The second aspect relates to the form and channels of knowledge dissemination. To increase the probability of knowledge reaching the intended user (a minimal sketch of an automated check of these rules follows the list):

each report should have an executive summary of three quarters of a page that allows the results to be read in 30 seconds,

the results of each report should be visualized with professional infographics,

recommendations should be grouped into two types:

o technical issues that concern changes of regulations

o strategic recommendations (developed in the service design process described in the previous step),

the number of recommendations should be kept under 10.
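As indicated above, these formal rules lend themselves to a simple automated check. The sketch below (Python) is a hypothetical illustration; the report fields and thresholds simply mirror the list above.

```python
# A minimal, hypothetical check of the formal dissemination rules above.
from dataclasses import dataclass
from typing import List

@dataclass
class Report:
    summary_pages: float       # length of the executive summary in pages
    has_infographics: bool     # are the results visualized?
    technical_recs: int        # recommendations on regulations
    strategic_recs: int        # recommendations from the SD process

def check(report: Report) -> List[str]:
    problems = []
    if report.summary_pages > 0.75:
        problems.append("executive summary longer than 3/4 of a page")
    if not report.has_infographics:
        problems.append("results not visualized with infographics")
    if report.technical_recs + report.strategic_recs >= 10:
        problems.append("10 or more recommendations")
    return problems

print(check(Report(summary_pages=2.0, has_infographics=False,
                   technical_recs=8, strategic_recs=5)))
```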

In addition, we propose unconventional solutions that will strengthen the communication of evaluation reports:

when presenting the results of a study, put specific individuals in the foreground as case studies, with statistics as the context; recipients pay attention first to personalized respondents and only second to statistics (as Mother Teresa once said: "If I look at the mass, I will never act."),

use storytelling in presenting the results of some research (https://www.researchpartnership.com/news/2016/04/video-storytelling-in-healthcare-market-research/),

use graphic recording or sketchnoting in the presentation of research results (https://www.explainvisually.co),

provide short films (2-3 minutes) in which the researchers and respondents talk about the key findings and recommendations, encouraging recipients to read the reports,

use animations showing the most important findings in the form of drawings; in this case, we recommend a storytelling scenario (https://www.explainvisually.co/portfolio-posts/european-commission-humanitarian-aid-civil-protection-echo-worldwaterday2015-part-1/).

Additionally, we propose a series of training sessions / webinars on communicating research results. The aim of the training would be to improve the professionalism and comprehensibility of the reports prepared. The training would cover the following topics:

selection of graphic media (tables, graphs, fonts),

creating visualizations, reports and dashboards,

applying storytelling and infographics in the communication of research findings.

In recent years, very useful and practical sources have been published, including books and webinars of the American Evaluation Association (Evergreen, 2016; Hutchinson, 2017). Thus, a good knowledge base is available for training and for experimenting on one's own.

Summing up, the list of steps discussed above is a starting point for a conversation about ideas that, if accepted, need further development and testing. Developing these ideas further, including a toolbox for making knowledge easy, could be a joint activity of country representatives within the Cohesion Policy evaluation network run by the European Commission.

We hope that current evaluation practice in Cohesion Policy and our conversation about moving things forward can become a foundation and inspiration for evidence-informed policies and the Better Regulation movement across other EC and national policies.


REFERENCES

European Commission (2014). The Programming Period 2014-2020: Guidance Document on Monitoring and Evaluation - European Cohesion Fund, European Regional Development Funds. Concepts and Recommendations, March 2014. Available online at http://ec.europa.eu/regional_policy/sources/docoffic/2014/working/wd_2014_en.pdf (p. 4).

European Commission (2017). Commission Staff Working Document: Better Regulation Guidelines.

Evergreen, S. & Metzner, C. (2013). “Design Principles for Data Visualization in Evaluation”. New Directions for Evaluation, 140(Winter), 5-20.

Evergreen, S. (2016). Effective Data Visualization: The Right Chart for the Right Data. Thousand Oaks CA: Sage.

Fratesi, U. & Wishlade, F. (2017). "The Impact of European Cohesion Policy in Different Contexts". Regional Studies, 51(6), theme issue: European Cohesion Policy in Context.

Hatry, H. & Davies, E. (2011). A Guide to Data-Driven Performance Reviews, Washington D.C.: IBM Center for The Business of Government.

Haynes, L., Service, O. & Goldacre, B. (2012). Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials. London: UK Cabinet Office, Behavioural Insights Team.

Hutchinson, K. (2017). A Short Primer on Innovative Evaluation Reporting. Gibsons, Canada: Community Solutions.

IDEO (2012). Design Thinking for Educators. Stanford: IDEO.

Kimbell, L. (2015). The Service Innovation Handbook: Action-oriented Creative Thinking Toolkit for Service Organizations. Amsterdam: BIS Publishers.

Kumar, V. (2012). 101 Design Methods: A Structured Approach for Driving Innovation in Your Organization. Hoboken, New Jersey: Wiley.

Kupiec, T. (2014). "Evaluation Practice of Regional Operational Programmes in Poland". Management and Business Administration. Central Europe, 2014/3, 135-151.

Kupiec, T. (2015a). "Ewaluacja regionalnych programów operacyjnych w warunkach prawa zamówień publicznych i finansów publicznych" [Evaluation of regional operational programmes under public procurement and public finance law]. Samorząd Terytorialny, 10/2015, 27-39.

Kupiec, T. (2015b). "Program Evaluation Use and Its Mechanisms: The Case of Cohesion Policy in Polish Regional Administration". Zarządzanie Publiczne, 33, 67-83.

Liedtka, J. & Ogilvie, T. (2011). Designing for Growth: A Design Thinking Tool Kit for Managers. New York, Chichester, West Sussex: Columbia University Press.


Miller, R.L. (2015). “How People Judge the Credibility of Information”; in: Donaldson, S.I., Christie, C.A. & Mark, M.M. (ed.) Credible and Actionable Evidence. The Foundation for Rigorous and Influential Evaluations, p.39-61. Thousand Oaks: SAGE.

Nutley, S., Walter, I. & Davies, H.T.O. (2003). “From Knowing to Doing. A Framework for Understanding the Evidence-Into-Practice Agenda”. Evaluation, 9(2), 125-148.

Olejniczak, K., Kupiec, T. & Newcomer, K. (in review). "Learning from Evaluation: The Knowledge Users' Perspective". Evaluační teorie a praxe.

Olejniczak, K., Raimondo, E. & Kupiec, T. (2016). “Evaluation units as knowledge brokers: Testing and calibrating an innovative framework”. Evaluation, 22(2), 168-189.

Olejniczak, K., Strzęboszewski, P., Bienias, S. (2012). Evaluation of Cohesion Policy Overview of practices. Warsaw: Ministry of Regional Development.

Prewitt, K., Schwandt, T. & Straf, M. (eds.) (2012). Using Science and Evidence in Public Policy. Washington, DC: The National Academies Press.

Regulation (EU) No 1303/2013 of the European Parliament and of the Council of 17 December 2013 laying down common provisions on the European Regional Development Fund, the European Social Fund, the Cohesion Fund, the European Agricultural Fund for Rural Development and the European Maritime and Fisheries Fund and laying down general provisions on the European Regional Development Fund, the European Social Fund, the Cohesion Fund and the European Maritime and Fisheries Fund and repealing Council Regulation (EC) No 1083/2006.

Stickdorn, M. & Schneider, J. (2012). This is Service Design Thinking: Basics, Tools, Cases. Hoboken, NJ: Wiley.

US Commission on Evidence-Based Policymaking (2017). The promise of Evidence-Based Policymaking, Washington D.C.: Commission on Evidence-Based Policymaking.

Weiss, C.H. & Bucuvalas, M.J. (1980). “Truth Tests and Utility Tests: decision-makers’ frame of reference for social science research”. American Sociological Review, 45(2), 302-313.

Weiss, C.H. (ed.) (1977). Using Social Research in Public Policy Making. Lexington, MA: Lexington Books.

Weiss, C.H. (1988). “Evaluation for decisions: is anybody there? Does anybody care?”. Evaluation Practice, 9(1), 5-19.


LIST OF TABLES AND FIGURES

Tables

Table 1 An overview of CP implementation systems .......... 18
Table 2 Types of evaluation systems adopted in V4+4 countries .......... 21

Figures

Figure 1 The universal logic of policy delivery .......... 9
Figure 2 Centralization of evaluation systems in V4+4 countries with regard to the number of OPs implemented .......... 19
Figure 3 Tasks executed by evaluation units in V4+4 countries .......... 23
Figure 4 Share of working time spent on evaluation by the evaluation units in V4+4 countries .......... 24
Figure 5 Other tasks executed by the evaluation units in V4+4 countries .......... 25
Figure 6 Knowledge brokering processes .......... 27
Figure 7 Methods for obtaining knowledge needs by evaluation units .......... 28
Figure 8 Methods for acquiring knowledge by evaluation units .......... 29
Figure 9 Ways of disseminating knowledge used by evaluation units .......... 33
Figure 10 Strategies of dissemination used by evaluation units .......... 34
Figure 11 Whether the structure of the studies allows future comparisons and syntheses .......... 36
Figure 12 Units organize discussions to allow sharing knowledge and experiences among employees of the institution (wider than unit) .......... 36
Figure 13 Collecting and making available a unit's work in the form of a repository / database / online platform .......... 37
Figure 14 Capacity building activities .......... 38
Figure 15 Knowledge types provided according to evaluation units and their users .......... 41
Figure 16 Knowledge needed and received by users from evaluation (CAWI_U) .......... 42
Figure 17 Who the main users of evaluation units' work are (CAWI_KB) .......... 43
Figure 18 Main source of knowledge about program IMPLEMENTATION (CAWI_U) .......... 44
Figure 19 Main source of knowledge about EFFECTS (CAWI_U) .......... 45
Figure 20 Main source of knowledge about MECHANISMS (CAWI_U) .......... 46
Figure 21 How capacities influence brokers' performance .......... 48
Figure 22 How the environment influences brokers' performance .......... 50
Figure 23 The theory of change for knowledge brokers .......... 55
Figure 24 Strategic dilemmas for evaluation .......... 57


ANNEXES

Annex 1: Methodology

Survey with staff of evaluation units

This was conducted in the form of a CAWI from June 5 to August 1. It was directed to all evaluation units in the 8 studied countries. Email contacts to evaluation units, as well as the number of units appropriate for the survey, were acquired from "country contacts" - people from national coordinating bodies who were involved in the research planning process. An invitation to participate in the survey was sent to a total of 80 units, and we received 74 complete responses. The details of the country cross-section are presented below.

Country   Sent   Received   Response rate
BG        7      5          71%
CZ        12     11         92%
HR        5      4          80%
HU        1      1          100%
PL        36     36         100%
RO        3      3          100%
SK        15     13         87%
SI        1      1          100%
Total     80     74         93%

Survey questionnaire:


Questionnaire for analytical units in the Cohesion Policy implementation system

Dear Colleagues,

This questionnaire is a part of a project analyzing and comparing the activities of analytical units and the context of evaluation systems in selected EU member states. The project is financed by the Polish Ministry of Development and conducted in cooperation with counterparts in Bulgaria, Croatia, Czechia, Hungary, Romania, Slovakia and Slovenia. We hope that the conclusions from this project will support your work by presenting good practices in the field of knowledge production and dissemination, as well as recommendations on how the evaluation system should be designed at the EU and national level to enhance your work.

The questionnaire is structured in 8 short blocks, and it should not take you more than 20 minutes to complete.

In case of any questions, please contact the project coordinator Karol Olejniczak - [email protected] - or Tomasz Kupiec - [email protected].

Thank you for your contribution!

Introduction to section

In this section you will provide brief characteristics of your unit. When answering the questions in this section, please refer to the past 1.5 years (January 2016-present day).

1 S1_1 Your unit is located in: single choice

S1_1_a central institution

S1_1_b regional / local institution

S1_1_c other:……..

2 S1_2 Was evaluation the only responsibility of your unit in the past 1.5 years (January 2016-present day)? single choice

S1_2_a yes, evaluation was the only task of our unit

S1_2_b no, our unit had also other tasks


3 S1_3 What tasks other than evaluation did your unit perform in the past 1.5 years (January 2016-present day)? Multiple choice

S1_3_a conducting other types of analytical work (e.g. expert studies, analyses, reviews, regulatory impact assessment)

S1_3_b monitoring (managing system of indicators, measuring them, reporting)

S1_3_c programming (e.g. setting strategy, defining indicators, etc.)

S1_3_d program implementation (e.g. project selection, controls, payments, certification)

S1_3_e information & communication

S1_3_f other……….

S1_4 What share of your working time did your unit spend on evaluation in the past 1.5 years (January 2016-present day)?

S1_4_a evaluation

S1_4_b other tasks

NEW PAGE

Introduction to section

This section focuses on how you (your unit) identify knowledge needs. Please assess the frequency of each action listed below in the last 1.5 years (from January 2016 until May 2017).

S2_F1_1 In our unit, to identify the knowledge needs of our organization and of other intended knowledge users:

S2_F1_1_a We wait for other units in our institution and our supervisors to tell us what they need to know.

Likert 1-5. 1=very rarely, 5=very often


S2_F1_1_b We formulate written inquiries to other units in our institution asking what they need to know.

Likert 1-5. 1=very rarely, 5=very often

S2_F1_1_c We participate in various meetings in our department and try to identify issues that should be evaluated.

Likert 1-5. 1=very rarely, 5=very often

S2_F1_1_d We participate in various meetings of people from other departments and/or institutions and interpret discussion topics in terms of information needs.

Likert 1-5. 1=very rarely, 5=very often

S2_F1_1_e We follow ongoing scientific and expert debates in the field related to our program.

Likert 1-5. 1=very rarely, 5=very often

S2_F1_1_f We follow ongoing political and public debates and interpret them in terms of the evaluation subject.

Likert 1-5. 1=very rarely, 5=very often

S2_F1_1_g We observe the program / policy implementation process and its challenges from the perspective of information needs.

Likert 1-5. 1=very rarely, 5=very often

S2_F1_1_h We draw inspiration from studies conducted by other entities (analytical units, academia, etc.).

Likert 1-5. 1=very rarely, 5=very often

S2_F1_2 We actively cooperate with other units in the process of preparing the final study / research proposal to make sure it fits users' needs.

Likert 1-5. 1=very rarely, 5=very often

S2_F1_3 We complement proposals submitted by other units / supervisors with additional study areas to create a logical research structure.

Likert 1-5. 1=very rarely, 5=very often

S2_F1_4 We translate proposals submitted by other units / supervisors into a language of goals and research questions understandable for researchers / contractors.

Likert 1-5. 1=very rarely, 5=very often

S2_F1_5 In the case of an excessive number of needs, we select the research questions / areas that are, in our opinion, the most important.

Likert 1-5. 1=very rarely, 5=very often

NEW PAGE

Introduction to section

This section focuses on how you (your unit) acquire knowledge.

S2_F2_1 Please assess the frequency of each action listed below in the last 1.5 years (from January 2016 until May 2017).

S2_F2_1_a We contract out (commission) various studies (evaluations, expert studies, analyses).

Likert 1-5. 1=very rarely, 5=very often


S2_F2_1_b We conduct various studies ourselves, with our own team (evaluations, expert studies, analyses).

Likert 1-5. 1=very rarely, 5=very often

S2_F2_1_c We conduct systematic reviews (summaries of existing studies).

Likert 1-5. 1=very rarely, 5=very often

S2_F2_1_d We conduct rapid reviews to satisfy urgent information needs.

Likert 1-5. 1=very rarely, 5=very often

S2_F2_1_e We process and interpret monitoring data.

Likert 1-5. 1=very rarely, 5=very often

S2_F2_1_f We formulate short inquiries to experts.

Likert 1-5. 1=very rarely, 5=very often

S2_F2_1_g We process and analyze public statistics.

Likert 1-5. 1=very rarely, 5=very often

S2_F2_2 We verify information needs submitted to us in terms of already available knowledge (studies, reports, analyses).

Likert 1-5. 1=very rarely, 5=very often

S2_F2_3 We involve users of our studies in the research process and in commenting on its results.

Likert 1-5. 1=very rarely, 5=very often

S2_F2_4 We contract / hire external experts to review the quality of commissioned studies.

Likert 1-5. 1=very rarely, 5=very often

S2_F2_5 We have a procedure / tool to verify the quality of commissioned studies internally by our team. yes/no

NEW PAGE

Introduction to section

This section focuses on how you (your unit) disseminate the results of your analytical work. Please refer to all analytical work / studies that you (your unit) have conducted or commissioned in the last 1.5 years (from January 2016 until May 2017).

S2_F3_1 Please assess the frequency of each action listed below.


S2_F3_1_a We send reports by email to intended users.

Likert 1-5. 1=very rarely, 5=very often

S2_F3_1_b We publish the results of our studies online.

Likert 1-5. 1=very rarely, 5=very often

S2_F3_1_c We publish a printed version of our reports.

Likert 1-5. 1=very rarely, 5=very often

S2_F3_1_d We provide an executive summary, memos.

Likert 1-5. 1=very rarely, 5=very often

S2_F3_1_e We provide posters.

Likert 1-5. 1=very rarely, 5=very often

S2_F3_1_f We encourage the media to publish press articles.

Likert 1-5. 1=very rarely, 5=very often

S2_F3_1_g We prepare video presentations and animations.

Likert 1-5. 1=very rarely, 5=very often

S2_F3_1_h We prepare infographics.

Likert 1-5. 1=very rarely, 5=very often

S2_F3_1_i We share information through social media.

Likert 1-5. 1=very rarely, 5=very often

S2_F3_1_j We organize a presentation of findings and a discussion between the study contractors and the intended users.

Likert 1-5. 1=very rarely, 5=very often

S2_F3_1_k We organize a conference for a wider audience (at least 5 institutions represented, usually not only direct users of the study).

Likert 1-5. 1=very rarely, 5=very often

S2_F3_1_l We use opinion leaders to disseminate the conclusions from our studies.

Likert 1-5. 1=very rarely, 5=very often

S2_F3_1_m We meet personally with the intended users to discuss the conclusions from our studies.

Likert 1-5. 1=very rarely, 5=very often

S2_F3_1_n Other (please indicate any other form of disseminating study results you have exercised and its frequency): ……………………….

Likert 1-5. 1=very rarely, 5=very often

S2_F3_2 We prepare the strategy of dissemination before the study starts.

Likert 1-5. 1=very rarely, 5=very often


S2_F3_3 We differentiate communication tools and channels depending on the type of audience / receiver of the report.

Likert 1-5. 1=very rarely, 5=very often

S2_F3_4 We adjust study timing to deliver knowledge right on time.

Likert 1-5. 1=very rarely, 5=very often

S2_F3_5 After a study is completed, we organize a discussion with relevant actors about its conclusions and how they could be used by our institution.

Likert 1-5. 1=very rarely, 5=very often

NEW PAGE

Introduction to section

This section focuses on how you (your unit) accumulate and enhance the use of results from your analytical work. Please refer to all studies and reviews that you (your unit) have conducted or commissioned in the last 1.5 years (from January 2016 until May 2017).

S2_F4_1 Our studies are structured in a way allowing future comparisons and syntheses.

Likert 1-5. 1=very rarely, 5=very often

S2_F4_2 We collect the results of our studies in a database / online platform accessible to: single choice

S2_F4_2_a only employees of our institution

S2_F4_2_b only employees of institutions forming the Cohesion Policy implementation system

S2_F4_2_c the general public

S2_F4_2_d we do not collect and provide results of studies on any database / platform

S2_F4_3 We organize discussions to allow sharing of knowledge and experiences between employees of our institution

Likert 1-5. 1=very rarely, 5=very often

NEW PAGE

Introduction to section

This section focuses on how you (your unit) accumulate and enhance the use of the results of your analytical work. Please refer to your (your unit's) activity in the last 1.5 years (from January 2016 until May 2017).

S2_F5 Please assess the frequency of each action listed below.

S2_F5_1 We participate in appropriate conferences.

Likert 1-5. 1=very rarely, 5=very often

S2_F5_2 We participate in the meetings of national thematic groups.

Likert 1-5. 1=very rarely, 5=very often

S2_F5_3 We participate in meetings of European networks.

Likert 1-5. 1=very rarely, 5=very often

S2_F5_4 We are involved in appropriate thematic societies.

Likert 1-5. 1=very rarely, 5=very often

S2_F5_5 We exchange experience with analytical units in other institutions.

Likert 1-5. 1=very rarely, 5=very often

S2_F5_6 We cooperate with experts and academics (in ways other than contracting out studies).

Likert 1-5. 1=very rarely, 5=very often

S2_F6_1 We organize activities (e.g. training session, conference, publication) for employees of our institution to raise awareness about the necessity of supporting decisions with knowledge (including evaluation) in public management.

Likert 1-5. 1=very rarely, 5=very often

S2_F6_2 We organize at least one activity (e.g. training session, conference, publication) for other institutions and the general public to raise awareness about the necessity of supporting decisions with knowledge (including evaluation) in public management.

Likert 1-5. 1=very rarely, 5=very often

S2_F5_7 We employ scientists who are also active in academia or other research institutions. yes/no

S2_F5_8 We organize internships for doctoral students to build future networks with academia. yes/no

S2_F6_3 In our institution we have established and follow rules on the implementation of recommendations from our studies. yes/no

NEW PAGE

Introduction to section

This section focuses on the users of your (your unit’s) work and the types of knowledge you provided in the last 1.5 years (from January 2016 until May 2017).


S3_1 We conduct our analytical work/studies for:

S3_1_b managers of other units in our institution

Likert 1-5. 1=strongly disagree, 5=strongly agree

S3_1_c our supervisors (e.g. department director)

Likert 1-5. 1=strongly disagree, 5=strongly agree

S3_1_d our political leaders (e.g. minister)

Likert 1-5. 1=strongly disagree, 5=strongly agree

S3_1_e other institutions in the Cohesion Policy implementation system

Likert 1-5. 1=strongly disagree, 5=strongly agree

S3_1_f institutions at the EU level

Likert 1-5. 1=strongly disagree, 5=strongly agree

S3_1_g public institutions dealing with other policies

Likert 1-5. 1=strongly disagree, 5=strongly agree

S3_1_h the media & general public

Likert 1-5. 1=strongly disagree, 5=strongly agree

S3_2 Our analytical work / studies:

S3_2_a provide information on the quality of implementation procedures, activities and ongoing processes, problems in processes and ways of solving them

Likert 1-5. 1=very rarely, 5=very often

S3_2_b explain what the effects of the program / intervention are, provide evidence on what policy approaches work, what solutions and strategies have produced desired outcomes

Likert 1-5. 1=very rarely, 5=very often

S3_2_c explain why things have worked (or not), how beneficiaries responded to the program and what factors caused the observable outcomes as well as side effects

Likert 1-5. 1=very rarely, 5=very often

NEW PAGE

Introduction to section

This section focuses on the main factors that have influenced your (your unit’s) work in the last 1.5 years (from January 2016 until May 2017).

S4_1 Please read the following list of factors that could have influenced your (your unit's) work. Rate them according to the strength of that influence.


S4_1_a number of staff

Likert 1-5. 1=no impact, 5=great impact

S4_1_b skills & experience of staff

Likert 1-5. 1=no impact, 5=great impact

S4_1_c available budget (excluding salaries)

Likert 1-5. 1=no impact, 5=great impact

S4_1_d available time

Likert 1-5. 1=no impact, 5=great impact

S4_1_e tools available to your unit (e.g. IT, library resources)

Likert 1-5. 1=no impact, 5=great impact

S4_1_e rules available to your unit (e.g. routines applied in the unit, procedures for contracting studies)

Likert 1-5. 1=no impact, 5=great impact

S4_1_f position of your unit in the organizational structure

Likert 1-5. 1=no impact, 5=great impact

S4_1_h decision-making style of the users of your knowledge (the extent to which they support their decisions with research evidence)

Likert 1-5. 1=no impact, 5=great impact

S4_1_i your users' attitude toward evaluation (e.g. do they find it reliable, useful)

Likert 1-5. 1=no impact, 5=great impact

S4_1_j number and capacity of the external producers of knowledge (researchers, experts, consultancies)

Likert 1-5. 1=no impact, 5=great impact

S4_1_k administrative culture of your country in relation to the use of knowledge-based evidence in the policy process; availability of practices of evidence-based decision-making in domestic policies

Likert 1-5. 1=no impact, 5=great impact

S4_1_l EU context - regulations of the Cohesion Policy and requirements of the European Commission about evaluation scope, timing and process

Likert 1-5. 1=no impact, 5=great impact

S4_1_m Domestic legal context - regulations (e.g. public procurement regulations, public finance regulations)

Likert 1-5. 1=no impact, 5=great impact

S4_2 Please read again the same list of factors that could have influenced your (your unit's) work. Rate them according to the TYPE of that influence.

S4_1_a number of staff

Likert 1-5. 1=negative impact, 5=positive impact


S4_1_b skills & experience of staff

Likert 1-5. 1=negative impact, 5=positive impact

S4_1_c available budget (excluding salaries)

Likert 1-5. 1=negative impact, 5=positive impact

S4_1_d available time

Likert 1-5. 1=negative impact, 5=positive impact

S4_1_e tools available to your unit (e.g. IT, library resources)

Likert 1-5. 1=negative impact, 5=positive impact

S4_1_e rules available to your unit (e.g. routines applied in the unit, procedures for contracting studies)

Likert 1-5. 1=negative impact, 5=positive impact

S4_1_f position of your unit in the organizational structure

Likert 1-5. 1=negative impact, 5=positive impact

S4_1_h decision-making style of the users of your knowledge (the extent to which they support their decisions with research evidence)

Likert 1-5. 1=negative impact, 5=positive impact

S4_1_i your users’ attitude toward evaluation (e.g. do they find it reliable, useful)

Likert 1-5. 1=negative impact, 5=positive impact

S4_1_j number and capacity of the external producers of knowledge (researchers, experts, consultancies)

Likert 1-5. 1=negative impact, 5=positive impact

S4_1_k administrative culture of your country in relation to the use of knowledge-based evidence in the policy process; availability of practices of evidence-based decision-making in domestic policies

Likert 1-5. 1=negative impact, 5=positive impact

S4_1_l EU context - regulations of the Cohesion Policy and requirements of the European Commission about evaluation scope, timing and process

Likert 1-5. 1=negative impact, 5=positive impact

S4_1_m Domestic legal context - regulations (e.g. public procurement regulations, public finance regulations)

Likert 1-5. 1=negative impact, 5=positive impact


NEW PAGE

Introduction to section

In this final section you will provide additional characteristics of your unit.

S5_1 How many people are employed in your unit? ((1) Please include all employees, even if their responsibilities include more than just processing knowledge inquiries; (2) in the case of fluctuations, please indicate the highest number of personnel in the period January 2016-present day.)

drop down menu from 1 to 10 + "more than 10" with incrementation of 1

S5_2 How many evaluation studies has your unit CONTRACTED OUT (January 2016-present day)?

drop down menu from 0 to 9 + "more than 9" with incrementation of 1

S5_3 How many evaluation studies has your unit CONDUCTED ITSELF (January 2016-present day)?

drop down menu from 0 to 9 + "more than 9" with incrementation of 1

S5_4 How much money has your unit spent on commissioning evaluation studies (January 2016-present day)?

drop down menu from 0 to 500 000€ + "more than 500 000€" with incrementation of 50 000€

S5_5 How much money has your unit spent on other evaluation activities (January 2016-present day), excluding personnel costs (e.g. organizing conferences, training sessions, publications, etc.)?

drop down menu from 0 to 500 000€ + "more than 500 000€" with incrementation of 50 000€


Survey with intended evaluation users

This was conducted in the form of a CAWI from August 29 to October 5. It was directed to potential users of the evaluative work conducted by evaluation units. A link to the survey was sent to evaluation units, which were requested to identify their key users and forward the link to them. Therefore, we cannot estimate the population; however, we received 228 complete responses. The details of the country cross-section are presented below.

Country          Responses
Bulgaria         10
Croatia          42
Czech Republic   8
Hungary          1
Poland           80
Romania          58
Slovakia         23
Slovenia         6
Total            228

Survey questionnaire:

Dear Madam / Sir,

This questionnaire is a part of a project analyzing and comparing Cohesion Policy evaluation systems in selected EU member states. The project is financed by the Polish Ministry of Development and conducted in cooperation with counterparts in Bulgaria, Croatia, Czechia, Hungary, Romania, Slovakia and Slovenia.

You have received a link to this questionnaire from a representative of the evaluation unit who considers you a user of their evaluation reports. The questionnaire contains 6 questions and aims to find out how and to what extent evaluation is useful for you in your work. It will take you less than 5 minutes to fill in.

The conclusions from this project will help evaluation units in your country make their work more relevant to your information needs.

In case of any questions, please contact the project coordinators Karol Olejniczak - [email protected] - or Tomasz Kupiec - [email protected].

Thank you for your contribution!

1 Please tell us what the main source of KNOWLEDGE ABOUT PROGRAM IMPLEMENTATION is for your department / team.

We mean knowledge about the quality of implementation procedures, activities and ongoing processes, problems in processes and ways of solving them.

Likert 1-5. 1=strongly disagree, 5=strongly agree

Rotation of items needed

1.1 Conclusions from evaluation studies.

1.2 Information from monitoring of physical progress.

1.3 Information from monitoring of financial progress.
1.4 Results of project controls.
1.5 Results of external institutions' controls.


1.6 Own experience, discussion with colleagues from the team.
1.7 Training sessions, postgraduate studies.
1.8 Conferences related to the area of our work.

1.9 Current contacts with program beneficiaries, applicants.

1.10 Cooperation with other domestic entities in the Cohesion Policy implementation system.

1.11 Cooperation with other foreign entities in the Cohesion Policy implementation system.

1.12 Cooperation with other entities outside the Cohesion Policy implementation system.

1.13 Media news.

1.14 Scientific literature (books, journals).

1.15 Conclusions from analyses, research conducted or commissioned by other institutions.

2 Please tell us what the main source of KNOWLEDGE ABOUT EFFECTS is for your department / team.

We mean knowledge about evidence of what policy approaches work, what solutions and strategies have produced desired outcomes.

Likert 1-5. 1=strongly disagree, 5=strongly agree

Rotation of items needed

2.1 Conclusions from evaluation studies.
2.2 Information from monitoring of physical progress.
2.3 Information from monitoring of financial progress.
2.4 Results of project controls.
2.5 Results of external institutions' controls.
2.6 Own experience, discussion with colleagues from the team.
2.7 Training sessions, postgraduate studies.
2.8 Conferences related to the area of our work.

2.9 Current contacts with program beneficiaries, applicants.

2.10 Cooperation with other domestic entities in the Cohesion Policy implementation system.

2.11 Cooperation with other foreign entities in the Cohesion Policy implementation system.

2.12 Cooperation with other entities outside the Cohesion Policy implementation system.

2.13 Media news.

2.14 Scientific literature (books, journals).

2.15 Conclusions from analyses, research conducted or commissioned by other institutions.


3 Please tell us what the main source of KNOWLEDGE ABOUT CHANGE MECHANISMS of programs is for your department / team.

We mean knowledge about explanations why things have worked (or not), how beneficiaries responded to the program, and what factors caused the observable outcomes as well as side effects.

Likert 1-5. 1=strongly disagree, 5=strongly agree

Rotation of items needed

3.1 Conclusions from evaluation studies.
3.2 Information from monitoring of physical progress.
3.3 Information from monitoring of financial progress.
3.4 Results of project controls.
3.5 Results of external institutions' controls.
3.6 Own experience, discussion with colleagues from the team.
3.7 Training sessions, postgraduate studies.
3.8 Conferences related to the area of our work.

3.9 Current contacts with program beneficiaries, applicants.

3.10 Cooperation with other domestic entities in the Cohesion Policy implementation system.

3.11 Cooperation with other foreign entities in the Cohesion Policy implementation system.

3.12 Cooperation with other entities outside the Cohesion Policy implementation system.

3.13 Media news.
3.14 Scientific literature (books, journals).

3.15 Conclusions from analyses, research conducted or commissioned by other institutions.

4 In order to make good decisions in our unit we need more…

Likert 1-5. 1=strongly disagree, 5=strongly agree

4.1 knowledge about the implementation PROCESS - the quality of implementation procedures, activities and ongoing processes, problems in processes and ways of solving them.

4.2 knowledge about EFFECTS - evidence on what policy approaches work, what solutions and strategies have produced desired outcomes.

4.3 knowledge about change MECHANISMS - explanations of why things have worked (or not), how beneficiaries responded to the program and what factors caused the observable outcomes as well as side effects.

5 Have you read any evaluation reports in the last 1.5 years (January 2016-present day)? Single choice

5.1 Yes

5.2 No


6 (if 5=yes)

Evaluation report(s) I have read in the last 1.5 years (January 2016-present day) offered knowledge about:

Likert 1-5. 1=strongly disagree, 5=strongly agree

6.1 the implementation process - technical, operational issues, the quality of implementation procedures, activities and processes.

6.2 effects - evidence on what policy approaches work, what solutions and strategies have produced desired outcomes.

6.3 change mechanisms - insight into why things work in a certain way, the causal mechanisms that lead to desired outcomes as well as side effects.
